Miranian, A; Abdollahzade, M
2013-02-01
Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses a hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yields a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and prediction of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and earlier studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
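A minimal sketch of the LSSVM regression that serves as the local-model building block may make the approach concrete; the hierarchical binary tree partitioning and validity functions are omitted, and the kernel width, regularization constant, and toy chaotic series below are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

class LSSVR:
    """Least-squares SVM regression solved via its closed-form dual linear system."""
    def __init__(self, gamma=10.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma

    def fit(self, X, y):
        n = len(y)
        K = rbf_kernel(X, X, self.sigma)
        # Block system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / self.gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        self.b, self.alpha, self.X = sol[0], sol[1:], X
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.sigma) @ self.alpha + self.b

# Toy one-step-ahead prediction of a chaotic series (logistic map), standing in
# for one local model fitted inside a single HBT subdomain.
x = np.empty(300); x[0] = 0.4
for t in range(299):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])
X = np.column_stack([x[:-3], x[1:-2], x[2:-1]])   # three lagged inputs
y = x[3:]
model = LSSVR(gamma=100.0, sigma=0.5).fit(X[:250], y[:250])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[250:]) - y[250:]) ** 2)))
```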
[Application of ARIMA model on prediction of malaria incidence].
Jing, Xia; Hua-Xun, Zhang; Wen, Lin; Su-Jian, Pei; Ling-Cong, Sun; Xiao-Rong, Dong; Mu-Min, Cao; Dong-Ni, Wu; Shunxiang, Cai
2016-01-29
The aim was to predict the incidence of local malaria in Hubei Province by applying an autoregressive integrated moving average (ARIMA) model. SPSS 13.0 software was applied to construct the ARIMA model based on the monthly local malaria incidence in Hubei Province from 2004 to 2009. The local malaria incidence data of 2010 were used for model validation and evaluation. The ARIMA (1, 1, 1)(1, 1, 0)12 model was identified as the relatively optimal fit, with an AIC of 76.085 and an SBC of 84.395. All the actual incidence values fell within the 95% confidence intervals of the model's predicted values. The prediction performance of the model was acceptable. The ARIMA model could effectively fit and predict the incidence of local malaria in Hubei Province.
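A sketch of fitting the same seasonal ARIMA specification in Python rather than SPSS follows; the simulated monthly counts are placeholders for the Hubei incidence series, which is not reproduced here.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Placeholder monthly incidence series for 2004-2009; the real Hubei data are not shown.
rng = np.random.default_rng(0)
idx = pd.date_range("2004-01", periods=72, freq="MS")
season = 5 + 4 * np.sin(2 * np.pi * (idx.month - 3) / 12)
y = pd.Series(np.maximum(season + rng.normal(0, 1, 72), 0), index=idx)

# ARIMA(1,1,1)(1,1,0)12, the specification selected in the study by AIC/SBC.
fit = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 0, 12)).fit(disp=False)
print("AIC:", fit.aic, "BIC:", fit.bic)

# 12-month-ahead forecast with 95% prediction intervals, as used for validation.
fc = fit.get_forecast(steps=12)
print(pd.concat([fc.predicted_mean, fc.conf_int(alpha=0.05)], axis=1))
```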
Real-time prediction of respiratory motion based on a local dynamic model in an augmented space
NASA Astrophysics Data System (ADS)
Hong, S.-M.; Jung, B.-H.; Ruan, D.
2011-03-01
Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and requires much less data than typical adaptive semiparametric or nonparametric methods. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively low observation rates. Sensitivity analysis indicates its robustness toward the choice of parameters. Its simplicity, robustness and low computation cost make the proposed local dynamic model an attractive tool for real-time prediction with system latencies below 0.4 s.
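The authors' exact augmented-space formulation is not reproduced here; the sketch below uses a related local oscillator model with state (position, velocity, angular frequency) and a first-order extended Kalman filter, with the trace, noise levels, and 0.4 s prediction horizon all illustrative assumptions.

```python
import numpy as np

dt = 0.2  # observation interval in seconds (illustrative)

def f(s):
    """Locally sinusoidal dynamics: position/velocity rotate at angular rate w."""
    x, v, w = s
    c, sn = np.cos(w * dt), np.sin(w * dt)
    return np.array([x * c + (v / w) * sn, -x * w * sn + v * c, w])

def jacobian(s, eps=1e-6):
    """Numerical Jacobian of f (keeps the sketch short and model-agnostic)."""
    J = np.zeros((3, 3))
    for i in range(3):
        d = np.zeros(3); d[i] = eps
        J[:, i] = (f(s + d) - f(s - d)) / (2 * eps)
    return J

H = np.array([[1.0, 0.0, 0.0]])                  # only displacement is observed
Q = np.diag([1e-4, 1e-3, 1e-4]); R = np.array([[1e-2]])

s = np.array([0.0, 1.0, 2 * np.pi * 0.25])       # initialize near 0.25 Hz breathing
P = np.eye(3)
t = np.arange(0, 60, dt)
z = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)

for zk in z:
    F = jacobian(s)                              # EKF propagation
    s, P = f(s), F @ P @ F.T + Q
    y = zk - (H @ s)[0]                          # EKF measurement update
    S = H @ P @ H.T + R
    K = P @ H.T / S[0, 0]
    s = s + K[:, 0] * y
    P = (np.eye(3) - K @ H) @ P

horizon = 2                                      # predict 0.4 s ahead (2 steps)
pred = s.copy()
for _ in range(horizon):
    pred = f(pred)
print("predicted displacement 0.4 s ahead:", pred[0])
```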
Real-time prediction of respiratory motion based on a local dynamic model in an augmented space.
Hong, S-M; Jung, B-H; Ruan, D
2011-03-21
Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and requires much less data than typical adaptive semiparametric or nonparametric methods. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively low observation rates. Sensitivity analysis indicates its robustness toward the choice of parameters. Its simplicity, robustness and low computation cost make the proposed local dynamic model an attractive tool for real-time prediction with system latencies below 0.4 s.
Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends
Attanasi, E.D.; Coburn, T.C.
2009-01-01
This study extends the application of local spatial nonparametric prediction models to the estimation of recoverable gas volumes in continuous-type gas plays to regimes where there is a single geologic trend. A transformation is presented, originally proposed by Tomczak, that offsets the distortions caused by the trend. This article reports on numerical experiments that compare predictive and classification performance of the local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance in the presence of trends are reasonably robust. The tests based on other model performance metrics such as prediction error associated with the high-grade tracts and the ability of the models to identify sites with the largest gas volumes also demonstrate the robustness of both local modeling approaches. © International Association for Mathematical Geology 2009.
Micro Finite Element models of the vertebral body: Validation of local displacement predictions.
Costa, Maria Cristiana; Tozzi, Gianluca; Cristofolini, Luca; Danesi, Valentina; Viceconti, Marco; Dall'Ara, Enrico
2017-01-01
The estimation of local and structural mechanical properties of bones with micro Finite Element (microFE) models based on Micro Computed Tomography images depends on how well the bone geometry is captured, reconstructed and modelled. The aim of this study was to validate microFE model predictions of local displacements for vertebral bodies and to evaluate the effect of the elastic tissue modulus on the models' predictions of axial forces. Four porcine thoracic vertebrae were axially compressed in situ, in a step-wise fashion, and scanned at approximately 39μm resolution in preloaded and loaded conditions. A global digital volume correlation (DVC) approach was used to compute the full-field displacements. Homogeneous, isotropic and linear elastic microFE models were generated with boundary conditions assigned from the interpolated displacement field measured from the DVC. Measured and predicted local displacements were compared for the cortical and trabecular compartments in the middle of the specimens. Models were run with two different tissue moduli defined from microindentation data (12.0GPa) and a back-calculation procedure (4.6GPa). The predicted sum of axial reaction forces was compared to the experimental values for each specimen. MicroFE models predicted more than 87% of the variation in the displacement measurements (R2 = 0.87-0.99). However, model predictions of axial forces were largely overestimated (80-369%) for a tissue modulus of 12.0GPa, whereas differences in the range 10-80% were found for a back-calculated tissue modulus. The specimen with the lowest density showed a large number of elements strained beyond yield and the highest predictive errors. This study shows that the simplest microFE models can accurately predict the local displacements quantitatively and the strain distribution qualitatively within the vertebral body, independently of the bone type considered.
A Bayesian network approach for modeling local failure in lung cancer
NASA Astrophysics Data System (ADS)
Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam
2011-03-01
Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported in their prospective application. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors in a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient method to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.
Estimation and prediction under local volatility jump-diffusion model
NASA Astrophysics Data System (ADS)
Kim, Namhyoung; Lee, Younhee
2018-02-01
Volatility is an important factor in running a company and managing risk. In portfolio optimization and option-based risk hedging, option values are evaluated using a volatility model. Various attempts have been made to predict option values. Recent studies have shown that stochastic volatility models and jump-diffusion models reflect stock price movements accurately. However, these models have practical limitations. Combining them with the local volatility model, which is widely used among practitioners, may lead to better performance. In this study, we propose a more effective and efficient method of estimating option prices by combining the local volatility model with the jump-diffusion model, and we apply it to both artificial and actual market data to evaluate its performance. The calibration process for estimating the jump parameters and local volatility surfaces is divided into three stages. We apply the local volatility model, the stochastic volatility model, and the local volatility jump-diffusion model estimated by the proposed method to KOSPI 200 index option pricing. The proposed method displays good estimation and prediction performance.
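A sketch of pricing a European call under a local-volatility jump-diffusion by Monte Carlo is given below; the local-volatility surface, jump intensity and sizes, and market parameters are toy assumptions, not the calibrated KOSPI 200 values, and the three-stage calibration itself is not shown.

```python
import numpy as np

def local_vol(S, t):
    """Toy local-volatility surface sigma(S, t); a calibrated surface would go here."""
    return 0.2 + 0.1 * np.exp(-(np.log(S / 100.0)) ** 2 / 0.1) * np.exp(-t)

def price_call_lv_jump(S0=100.0, K=100.0, T=1.0, r=0.02,
                       lam=0.3, mu_j=-0.05, sig_j=0.1,
                       n_paths=100_000, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    kappa = np.exp(mu_j + 0.5 * sig_j ** 2) - 1.0      # E[J - 1], drift compensator
    S = np.full(n_paths, S0)
    for i in range(n_steps):
        t = i * dt
        sig = local_vol(S, t)
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        # Poisson jump counts with lognormal jump sizes, compensated by lam*kappa.
        n_jumps = rng.poisson(lam * dt, n_paths)
        jump = np.exp(mu_j * n_jumps + sig_j * np.sqrt(n_jumps) * rng.normal(size=n_paths))
        S = S * np.exp((r - lam * kappa - 0.5 * sig ** 2) * dt + sig * dW) * jump
    payoff = np.maximum(S - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

print("Monte Carlo call price:", price_call_lv_jump())
```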
Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers
Jiang, Yong; Schmidt, Renate H.; Reif, Jochen C.
2018-01-01
Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency, the HGBLUP model promises to be an interesting tool for studies of ultra-high-density SNP data sets. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders: due to the tight linkage, a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles that of additive effects. PMID:29549092
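HGBLUP can be sketched as ridge regression on a haplotype-block design matrix, which corresponds to GBLUP with a haplotype-derived relationship matrix up to the choice of shrinkage; the block size, simulated homozygous population, and trait below are placeholders, not the study's mouse or crop data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_ind, n_snp, block = 300, 400, 4              # fully homozygous individuals assumed

M = rng.integers(0, 2, size=(n_ind, n_snp))    # biallelic marker matrix (0/1)
y = M[:, ::10] @ rng.normal(0, 1, n_snp // 10) + rng.normal(0, 1, n_ind)  # toy trait

def haplotype_design(M, block):
    """One-hot encode the haplotype allele carried in each block of adjacent SNPs."""
    cols = []
    for start in range(0, M.shape[1] - block + 1, block):
        sub = M[:, start:start + block]
        # each distinct local SNP combination counts as one haplotype allele
        _, codes = np.unique(sub, axis=0, return_inverse=True)
        cols.append(np.eye(codes.max() + 1)[codes])
    return np.hstack(cols)

H = haplotype_design(M, block)

# Marker-based vs haplotype-based ridge, standing in for GBLUP vs HGBLUP.
for name, X in [("marker GBLUP-like", M), ("haplotype HGBLUP-like", H)]:
    acc = cross_val_score(Ridge(alpha=100.0), X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {acc:.3f}")
```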
Haplotype-Based Genome-Wide Prediction Models Exploit Local Epistatic Interactions Among Markers.
Jiang, Yong; Schmidt, Renate H; Reif, Jochen C
2018-05-04
Genome-wide prediction approaches represent versatile tools for the analysis and prediction of complex traits. Mostly they rely on marker-based information, but scenarios have been reported in which models capitalizing on closely linked markers that were combined into haplotypes outperformed marker-based models. Detailed comparisons were undertaken to reveal under which circumstances haplotype-based genome-wide prediction models are superior to marker-based models. Specifically, it was of interest to analyze whether and how haplotype-based models may take local epistatic effects between markers into account. Assuming that populations consisted of fully homozygous individuals, a marker-based model in which local epistatic effects inside haplotype blocks were exploited (LEGBLUP) was linearly transformable into a haplotype-based model (HGBLUP). This theoretical derivation formally revealed that haplotype-based genome-wide prediction models capitalize on local epistatic effects among markers. Simulation studies corroborated this finding. Due to its computational efficiency, the HGBLUP model promises to be an interesting tool for studies of ultra-high-density SNP data sets. Applying the HGBLUP model to empirical data sets revealed higher prediction accuracies than for marker-based models for both traits studied using a mouse panel. In contrast, only a small subset of the traits analyzed in crop populations showed such a benefit. Cases in which higher prediction accuracies are observed for HGBLUP than for marker-based models are expected to be of immediate relevance for breeders: due to the tight linkage, a beneficial haplotype will be preserved for many generations. In this respect the inheritance of local epistatic effects very much resembles that of additive effects. Copyright © 2018 Jiang et al.
Streamflow Prediction based on Chaos Theory
NASA Astrophysics Data System (ADS)
Li, X.; Wang, X.; Babovic, V. M.
2015-12-01
Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory uses time-delay embedding to reconstruct the phase-space diagram. Its efficacy depends on the embedding parameters, i.e., embedding dimension, time lag, and number of nearest neighbors. Optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately. This may lead to locally optimal parameters and thus limit prediction accuracy. To address this limitation, this paper applies a local model combined with simulated annealing (SA) to find globally optimal embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). These hybrid methods are tested on daily and monthly streamflow time series. The results show that global optimization helps the local model provide more accurate predictions than local optimization. The LM combined with SA shows additional advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
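A sketch of the local (nearest-neighbour) phase-space predictor with the embedding dimension, time lag, and neighbour count chosen jointly by simulated annealing follows; the synthetic series, search bounds, and scipy's dual_annealing routine are illustrative stand-ins for the authors' data and SA implementation.

```python
import numpy as np
from scipy.optimize import dual_annealing

def embed(x, m, tau):
    """Time-delay embedding: rows are [x(t), x(t+tau), ..., x(t+(m-1)tau)]."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

def local_model_rmse(params, x, split):
    m, tau, k = (int(round(p)) for p in params)
    E = embed(x, m, tau)
    targets = x[(m - 1) * tau + 1:]             # one-step-ahead targets
    E = E[:-1]
    train, test = slice(0, split), slice(split, len(E))
    err = []
    for q, yt in zip(E[test], targets[test]):
        d = np.linalg.norm(E[train] - q, axis=1)
        nn = np.argsort(d)[:k]
        err.append(np.mean(targets[train][nn]) - yt)   # zeroth-order local model
    return float(np.sqrt(np.mean(np.square(err))))

# Synthetic "streamflow-like" series: seasonal cycle plus noise (illustrative only).
rng = np.random.default_rng(2)
t = np.arange(600)
x = 10 + 5 * np.sin(2 * np.pi * t / 30) + rng.normal(0, 0.5, t.size)

bounds = [(2, 8), (1, 10), (1, 15)]             # embedding dimension, lag, neighbours
res = dual_annealing(local_model_rmse, bounds, args=(x, 400),
                     maxiter=20, no_local_search=True, seed=1)
print("optimal (m, tau, k):", [int(round(p)) for p in res.x], "RMSE:", res.fun)
```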
DOT National Transportation Integrated Search
1974-08-01
The Transportation Systems Center (TSC) ILS Localizer Performance Prediction Model was used to predict the derogation to an Alford 1B Localizer caused by vehicular traffic traveling on a roadway to be located in front of the localizer. Several differ...
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
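A sketch of the signal-detection-style accuracy computation that such analytic localization models reduce to is shown below: the probability that the target location's noisy response exceeds all M-1 distractor responses. The guidance weighting of the full model is omitted, and the d' values are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def prob_correct_localization(d_prime, m):
    """P(correct) in an M-location search: the target response beats all distractors."""
    integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (m - 1)
    pc, _ = quad(integrand, -np.inf, np.inf)
    return pc

# Accuracy as a function of target detectability d' and display set size M.
for d in (0.5, 1.0, 2.0):
    print(f"d'={d}:", [round(prob_correct_localization(d, m), 3) for m in (2, 4, 8)])
```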
NASA Technical Reports Server (NTRS)
Nese, Jon M.
1989-01-01
A dynamical systems approach is used to quantify the instantaneous and time-averaged predictability of a low-order moist general circulation model. Specifically, the effects on predictability of incorporating an active ocean circulation, implementing annual solar forcing, and asynchronously coupling the ocean and atmosphere are evaluated. The predictability and structure of the model attractors is compared using the Lyapunov exponents, the local divergence rates, and the correlation, fractal, and Lyapunov dimensions. The Lyapunov exponents measure the average rate of growth of small perturbations on an attractor, while the local divergence rates quantify phase-spatial variations of predictability. These local rates are exploited to efficiently identify and distinguish subtle differences in predictability among attractors. In addition, the predictability of monthly averaged and yearly averaged states is investigated by using attractor reconstruction techniques.
Prakash, J; Srinivasan, K
2009-07-01
In this paper, the authors represent the nonlinear system as a family of local linear state-space models, design local PID controllers on the basis of these linear models, and use the weighted sum of the outputs from the local PID controllers (a nonlinear PID controller) to control the nonlinear process. Further, a Nonlinear Model Predictive Controller using the family of local linear state-space models (F-NMPC) has been developed. The effectiveness of the proposed control schemes has been demonstrated on a CSTR process, which exhibits dynamic nonlinearity.
Micro Finite Element models of the vertebral body: Validation of local displacement predictions
Costa, Maria Cristiana; Tozzi, Gianluca; Cristofolini, Luca; Danesi, Valentina; Viceconti, Marco
2017-01-01
The estimation of local and structural mechanical properties of bones with micro Finite Element (microFE) models based on Micro Computed Tomography images depends on how well the bone geometry is captured, reconstructed and modelled. The aim of this study was to validate microFE model predictions of local displacements for vertebral bodies and to evaluate the effect of the elastic tissue modulus on the models' predictions of axial forces. Four porcine thoracic vertebrae were axially compressed in situ, in a step-wise fashion, and scanned at approximately 39μm resolution in preloaded and loaded conditions. A global digital volume correlation (DVC) approach was used to compute the full-field displacements. Homogeneous, isotropic and linear elastic microFE models were generated with boundary conditions assigned from the interpolated displacement field measured from the DVC. Measured and predicted local displacements were compared for the cortical and trabecular compartments in the middle of the specimens. Models were run with two different tissue moduli defined from microindentation data (12.0GPa) and a back-calculation procedure (4.6GPa). The predicted sum of axial reaction forces was compared to the experimental values for each specimen. MicroFE models predicted more than 87% of the variation in the displacement measurements (R2 = 0.87–0.99). However, model predictions of axial forces were largely overestimated (80–369%) for a tissue modulus of 12.0GPa, whereas differences in the range 10–80% were found for a back-calculated tissue modulus. The specimen with the lowest density showed a large number of elements strained beyond yield and the highest predictive errors. This study shows that the simplest microFE models can accurately predict the local displacements quantitatively and the strain distribution qualitatively within the vertebral body, independently of the bone type considered. PMID:28700618
Mei, Suyu
2012-10-07
Recent years have witnessed much progress in computational modeling for protein subcellular localization. However, there are far fewer computational models for predicting plant protein subcellular multi-localization. In this paper, we propose a multi-label multi-kernel transfer learning model for predicting multiple subcellular locations of plant proteins (MLMK-TLM). The method proposes a multi-label confusion matrix and adapts one-against-all multi-class probabilistic outputs to the multi-label learning scenario, based on which we further extend our published work MK-TLM (multi-kernel transfer learning based on Chou's PseAAC formulation for protein submitochondria localization) to plant protein subcellular multi-localization. By proper homolog knowledge transfer, MLMK-TLM is applicable to novel plant protein subcellular localization in the multi-label learning scenario. The experiments on a plant protein benchmark dataset show that MLMK-TLM outperforms the baseline model. Unlike the existing models, MLMK-TLM also reports its misleading tendency, which is important for a comprehensive assessment of a model's multi-labeling performance. Copyright © 2012 Elsevier Ltd. All rights reserved.
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model, P. The four MAPs examined in this study were: single-factor regression against the regional model prediction P (termed MAP-1F-P), regression against P (termed MAP-R-P), regression against P and additional local variables (termed MAP-R-P+nV), and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
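The simplest adjustment, regressing observed local storm loads on the regional-model prediction (akin to MAP-R-P), can be sketched as below; the log-space form, variable names, and synthetic calibration data are assumptions for illustration, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder calibration set: regional-model predictions P and observed local loads.
P_regional = rng.lognormal(mean=2.0, sigma=0.8, size=40)
observed = P_regional ** 0.9 * rng.lognormal(0.2, 0.3, size=40)  # local bias + noise

# Fit the adjustment in log space: log(load) = b0 + b1 * log(P_regional).
b1, b0 = np.polyfit(np.log(P_regional), np.log(observed), deg=1)

def adjusted_prediction(p_regional):
    """Adjusted local prediction at an unmonitored site from the regional prediction."""
    return np.exp(b0 + b1 * np.log(p_regional))

print("adjusted load for a regional prediction of 10:", adjusted_prediction(10.0))
```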
Hoos, Anne B.; Patel, Anant R.
1996-01-01
Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
Derogation of Localizer Course Due to Proposed Water Tower Peterson Field, Colorado
DOT National Transportation Integrated Search
1974-10-01
The additional derogation to the localizer front and back courses caused by a water tower placed near the localizer site is predicted. This prediction is made with the Transportation Systems Center (TSC) localizer model. This additional derogation to...
NASA Technical Reports Server (NTRS)
Nese, Jon M.; Dutton, John A.
1993-01-01
The predictability of the weather and climatic states of a low-order moist general circulation model is quantified using a dynamic systems approach, and the effect of incorporating a simple oceanic circulation on predictability is evaluated. The predictability and the structure of the model attractors are compared using Liapunov exponents, local divergence rates, and the correlation and Liapunov dimensions. It was found that the activation of oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10 percent and decreases the variance of the largest local divergence rate by 20 percent. When an oceanic circulation develops, the average predictability of annually averaged states is improved by 25 percent and the variance of the largest local divergence rate decreases by 25 percent.
Adaptation of clinical prediction models for application in local settings.
Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M
2012-01-01
When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a different case mix than the original validation population may result in inaccurate risk predictions. The aim was to demonstrate how clinical information can direct the updating of a prediction model and the development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) reestimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than that of the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy for dealing with missing predictor values at the time of risk calculation. Extensive knowledge of local clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
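A sketch of the updating steps described (recalibrating the original linear predictor, then adding a predictor on top of it) using logistic regression with an offset follows; the predictor names and simulated data are placeholders rather than the PONV model's actual variables, and steps 2 and 3 of the study are folded together here for brevity.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1847                                     # size of the updating set in the study
x_old = rng.normal(size=(n, 3))              # predictors already in the original model
x_new = rng.binomial(1, 0.3, size=n)         # candidate new predictor (hypothetical)

beta_orig = np.array([0.8, -0.5, 0.4])       # coefficients of the "original" model
lp = -1.0 + x_old @ beta_orig                # original linear predictor
p_true = 1 / (1 + np.exp(-(0.7 * lp + 0.9 * x_new - 0.2)))   # local reality differs
y = rng.binomial(1, p_true)

# Step 1: recalibration, i.e. intercept and calibration slope for the original
# linear predictor in the local population.
recal = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
print("calibration intercept/slope:", recal.params)

# Steps 2+3 (simplified): keep the recalibrated original predictor as an offset
# and estimate the coefficient of the newly added predictor locally.
offset = recal.params[0] + recal.params[1] * lp
update = sm.GLM(y, sm.add_constant(x_new), family=sm.families.Binomial(),
                offset=offset).fit()
print("added-predictor coefficient:", update.params[1])
```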
Factors influencing behavior and transferability of habitat models for a benthic stream fish
Kevin N. Leftwich; Paul L. Angermeier; C. Andrew Dolloff
1997-01-01
The authors examined the predictive power and transferability of habitat-based models by comparing associations of tangerine darter Percina aurantiaca and stream habitat at local and regional scales in North Fork Holston River (NFHR) and Little River, VA. The models correctly predicted the presence or absence of tangerine darters in NFHR for 64 percent (local model)...
NASA Astrophysics Data System (ADS)
Pulkkinen, A.
2012-12-01
Empirical modeling has been the workhorse of the past decades in predicting the state of the geospace. For example, numerous empirical studies have shown that global geoeffectiveness indices such as Kp and Dst are generally well predictable from the solar wind input. These successes have been facilitated partly by the strongly externally driven nature of the system. Although characterizing the general state of the system is valuable and empirical modeling will continue playing an important role, refined physics-based quantification of the state of the system has been the obvious next step in moving toward more mature science. Importantly, more refined and localized products are needed also for space weather purposes. Predictions of local physical quantities are necessary to make physics-based links to the impacts on specific systems. As we introduce more localized predictions of the geospace state, one central question is how predictable these local quantities are. This complex question can be addressed by rigorously measuring the model performance against the observed data. The space sciences community has made great advances on this topic over the past few years, and there are ongoing efforts in SHINE, CEDAR and GEM to carry out community-wide evaluations of the state-of-the-art solar and heliospheric, ionosphere-thermosphere and geospace models, respectively. These efforts will help establish benchmarks and thus provide means to measure progress in the field, analogous to the monitoring of improvements in lower atmospheric weather predictions carried out rigorously since the 1980s. In this paper we will discuss some of the latest advancements in predicting local geospace parameters and give an overview of some of the community efforts to rigorously measure model performance. We will also briefly discuss some of the future opportunities for advancing the geospace modeling capability. These include further development in data assimilation and ensemble modeling (e.g., taking into account uncertainty in the inflow boundary conditions).
NASA Astrophysics Data System (ADS)
Dooley, Gregory A.; Peter, Annika H. G.; Yang, Tianyi; Willman, Beth; Griffen, Brendan F.; Frebel, Anna
2017-11-01
A recent surge in the discovery of new ultrafaint dwarf satellites of the Milky Way has inspired the idea of searching for faint satellites, 10³ M⊙
Prediction of global and local model quality in CASP8 using the ModFOLD server.
McGuffin, Liam J
2009-01-01
The development of effective methods for predicting the quality of three-dimensional (3D) models is fundamentally important for the success of tertiary structure (TS) prediction strategies. Since CASP7, the Quality Assessment (QA) category has existed to gauge the ability of various model quality assessment programs (MQAPs) at predicting the relative quality of individual 3D models. For the CASP8 experiment, automated predictions were submitted in the QA category using two methods from the ModFOLD server: ModFOLD version 1.1 and ModFOLDclust. ModFOLD version 1.1 is a single-model machine learning based method, which was used for automated predictions of global model quality (QMODE1). ModFOLDclust is a simple clustering based method, which was used for automated predictions of both global and local quality (QMODE2). In addition, manual predictions of model quality were made using ModFOLD version 2.0, an experimental method that combines the scores from ModFOLDclust and ModFOLD v1.1. Predictions from the ModFOLDclust method were the most successful of the three in terms of the global model quality, whilst the ModFOLD v1.1 method was comparable in performance to other single-model based methods. In addition, the ModFOLDclust method performed well at predicting the per-residue, or local, model quality scores. Predictions of the per-residue errors in our own 3D models, selected using the ModFOLD v2.0 method, were also the most accurate compared with those from other methods. All of the MQAPs described are publicly accessible via the ModFOLD server at: http://www.reading.ac.uk/bioinf/ModFOLD/. The methods are also freely available to download from: http://www.reading.ac.uk/bioinf/downloads/. Copyright 2009 Wiley-Liss, Inc.
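The clustering idea behind ModFOLDclust can be sketched as follows: a model's global quality is scored as its mean pairwise similarity to the other models in the ensemble. The distance-based S-score on pre-aligned C-alpha coordinates used here is a simplifying stand-in for the structural superposition and TM-score/GDT-style measures the server actually uses, and the toy ensemble is simulated.

```python
import numpy as np

def pairwise_similarity(a, b, d0=3.8):
    """S-score style similarity from per-residue distances of two aligned models."""
    d = np.linalg.norm(a - b, axis=1)
    return np.mean(1.0 / (1.0 + (d / d0) ** 2))

def consensus_global_quality(models):
    """Mean pairwise similarity of each model to all others (higher is better)."""
    n = len(models)
    scores = np.zeros(n)
    for i in range(n):
        scores[i] = np.mean([pairwise_similarity(models[i], models[j])
                             for j in range(n) if j != i])
    return scores

# Toy ensemble: 20 "models" of a 50-residue chain, assumed pre-superposed.
rng = np.random.default_rng(5)
native_like = np.cumsum(rng.normal(0, 1.5, size=(50, 3)), axis=0)
models = [native_like + rng.normal(0, 0.5 + 2.0 * (i / 20), size=(50, 3))
          for i in range(20)]
q = consensus_global_quality(models)
print("best model index:", int(np.argmax(q)), "score:", round(float(q.max()), 3))
```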
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are always characterized by nonlinearity and system uncertainty; therefore, a conventional single model may be ill-suited. A soft sensor based on a local learning strategy and a variable partition ensemble method is developed for the quality prediction of nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Then, multiple local GPR models are developed, one for each local input variable subset. When a new test sample arrives, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine these local GPR models into the final prediction result. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
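A sketch of the variable-partition ensemble idea is given below: several local GPR models trained on different input-variable subsets and combined per test sample. The combination weight here is each GPR's predictive precision rather than the paper's Bayesian posterior, and the batch-process data, subset count, and kernel are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(6)
n, d = 200, 6
X = rng.normal(size=(n, d))                        # process variables (simulated)
y = np.sin(X[:, 0]) + 0.5 * X[:, 2] ** 2 + 0.1 * rng.normal(size=n)  # quality variable

# Variable partition ensemble: each local model sees a different variable subset.
subsets = [rng.choice(d, size=3, replace=False) for _ in range(5)]
models = []
for cols in subsets:
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X[:150, cols], y[:150])
    models.append((cols, gpr))

def ensemble_predict(x_new):
    """Precision-weighted combination of the local GPR predictions."""
    mus, sds = [], []
    for cols, gpr in models:
        mu, sd = gpr.predict(x_new[None, cols], return_std=True)
        mus.append(mu[0]); sds.append(sd[0])
    w = 1.0 / np.square(sds)
    return float(np.sum(w * np.array(mus)) / np.sum(w))

pred = [ensemble_predict(x) for x in X[150:]]
print("test RMSE:", float(np.sqrt(np.mean((np.array(pred) - y[150:]) ** 2))))
```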
Nazemi, S Majid; Kalajahi, S Mehrdad Hosseini; Cooper, David M L; Kontulainen, Saija A; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D
2017-07-05
Previously, a finite element (FE) model of the proximal tibia was developed and validated against experimentally measured local subchondral stiffness. This model offered modest predictions of stiffness (R2 = 0.77, normalized root mean squared error (RMSE%) = 16.6%). Trabecular bone, though, was modeled with isotropic material properties despite its orthotropic anisotropy. The objective of this study was to identify the anisotropic FE modeling approach which best predicted (with the largest explained variance and the least error) local subchondral bone stiffness at the proximal tibia. Local stiffness was measured at the subchondral surface of 13 medial/lateral tibial compartments using in situ macro indentation testing. An FE model of each specimen was generated assuming uniform anisotropy with 14 different combinations of cortical- and tibial-specific density-modulus relationships taken from the literature. Two FE models of each specimen were also generated which accounted for the spatial variation of trabecular bone anisotropy directly from clinical CT images using the grey-level structure tensor and Cowin's fabric-elasticity equations. Stiffness was calculated using FE and compared to measured stiffness in terms of R2 and RMSE%. The uniform anisotropic FE model explained 53-74% of the measured stiffness variance, with RMSE% ranging from 12.4 to 245.3%. The models which accounted for spatial variation of trabecular bone anisotropy predicted 76-79% of the variance in stiffness, with RMSE% of 11.2-11.5%. Of the 16 finite element models evaluated in this study, the combination of Snyder and Schneider (for cortical bone) and Cowin's fabric-elasticity equations (for trabecular bone) best predicted local subchondral bone stiffness. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cross-scale assessment of potential habitat shifts in a rapidly changing climate
Jarnevich, Catherine S.; Holcombe, Tracy R.; Bella, Elizabeth S.; Carlson, Matthew L.; Graziano, Gino; Lamb, Melinda; Seefeldt, Steven S.; Morisette, Jeffrey T.
2014-01-01
We assessed the ability of climatic, environmental, and anthropogenic variables to predict areas of high risk for plant invasion and consider the relative importance and contribution of these predictor variables by considering two spatial scales in a region of rapidly changing climate. We created predictive distribution models, using Maxent, for three highly invasive plant species (Canada thistle, white sweetclover, and reed canarygrass) in Alaska at both a regional scale and a local scale. Regional scale models encompassed southern coastal Alaska and were developed from topographic and climatic data at a 2 km (1.2 mi) spatial resolution. Models were applied to future climate (2030). Local scale models were spatially nested within the regional area; these models incorporated physiographic and anthropogenic variables at a 30 m (98.4 ft) resolution. Regional and local models performed well (AUC values > 0.7), with the exception of one species at each spatial scale. Regional models predict an increase in area of suitable habitat for all species by 2030, with a general shift to higher elevation areas; however, the distribution of each species was driven by different climate and topographical variables. In contrast, local models indicate that distance to rights-of-way and elevation are associated with habitat suitability for all three species at this spatial level. Combining results from regional models, capturing long-term distribution, and local models, capturing near-term establishment and distribution, offers a new and effective tool for highlighting at-risk areas and provides insight into how variables acting at different scales contribute to suitability predictions. The combination also provides easy comparison, highlighting agreement between the two scales, where long-term distribution factors predict suitability while near-term factors do not, and vice versa.
2014-01-01
Background Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387
Cao, Renzhi; Wang, Zheng; Cheng, Jianlin
2014-04-15
Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.
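The local-quality component described above can be sketched as follows: each residue's predicted error is the average Euclidean distance of that residue to the corresponding residue in a few top-ranked models. Coordinates are assumed pre-superposed, the toy structures are simulated, and in practice the top-ranked set would come from the global consensus score.

```python
import numpy as np

def local_quality(model, top_models):
    """Per-residue error estimate: mean distance to the same residue in top models."""
    diffs = np.stack([np.linalg.norm(model - ref, axis=1) for ref in top_models])
    return diffs.mean(axis=0)                       # one value per residue

# Toy data: a 60-residue model compared against 5 pre-superposed top-ranked models.
rng = np.random.default_rng(7)
reference = np.cumsum(rng.normal(0, 1.5, size=(60, 3)), axis=0)
top_models = [reference + rng.normal(0, 0.4, size=(60, 3)) for _ in range(5)]
model = reference + rng.normal(0, 1.0, size=(60, 3))
model[40:] += rng.normal(0, 4.0, size=(20, 3))      # a badly modelled C-terminal region

err = local_quality(model, top_models)
print("mean predicted error, residues 1-40 vs 41-60:",
      round(float(err[:40].mean()), 2), round(float(err[40:].mean()), 2))
```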
NASA Astrophysics Data System (ADS)
Garcia-Mozo, H.; Orlandi, F.; Galan, C.; Fornaciari, M.; Romano, B.; Ruiz, L.; Diaz de La Guardia, C.; Trigo, M. M.; Chuine, I.
2009-03-01
Phenological data are sensitive indicators of how plants are adapted to local climate and how they respond to climatic change. Modeling flowering phenology allows us to identify the meteorological variables determining the reproductive cycle. The phenology of temperate woody plants is assumed to be locally adapted to climate. Nevertheless, recent research shows that local adaptation may not be an important constraint in predicting phenological responses. We analyzed variations in flowering dates of Olea europaea L. at different sites in Spain and Italy, testing for a genetic differentiation of flowering phenology among olive varieties to estimate whether local modeling is necessary for olive. We built models for the onset and peak flowering dates at different sites in Andalusia and Puglia. Process-based phenological models using temperature as the input variable and photoperiod as the threshold date to start temperature accumulation were developed to predict both dates. Our results confirm and update previous results that indicated an advance in olive onset dates. The results indicate that both internal and external validity were higher in the models that used the photoperiod as an indicator of when to start accumulating temperature. The use of the unified model for modeling the start and peak dates in the different localities provides standardized results for the comparative study. The use of regional models grouping localities by variety and climate similarity indicates that local adaptation would not be an important factor in predicting olive phenological responses in the face of the global temperature increase.
2013-01-01
Background The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Methods and findings Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Conclusions Reference ESR values can be established with geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships. PMID:23497145
Yang, Qingsheng; Mwenda, Kevin M; Ge, Miao
2013-03-12
The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Reference ESR values can be established with geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships.
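A sketch of the feed-forward ANN regression described is shown below, with scikit-learn's MLPRegressor standing in for the authors' network; the simulated geographical factors (altitude, sunshine hours, relative humidity, temperature, precipitation) and the toy relationship to ESR are placeholders for the Chinese hospital and geographical data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(8)
n = 400
# Columns: altitude (m), sunshine (h/yr), humidity (%), temperature (C), precipitation (mm).
X = np.column_stack([rng.uniform(0, 4000, n), rng.uniform(1000, 3000, n),
                     rng.uniform(30, 90, n), rng.uniform(-5, 25, n),
                     rng.uniform(100, 2000, n)])
# Placeholder relationship standing in for hospital-measured local normal ESR values.
y = 12 - 0.001 * X[:, 0] + 0.05 * X[:, 3] + 0.02 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                                 random_state=0))
ann.fit(X_tr, y_tr)
print("test R^2:", round(ann.score(X_te, y_te), 3))

# Predict the reference ESR for an unseen location from its geographical factors.
print("predicted reference ESR:", ann.predict([[1500, 2200, 60, 12, 800]])[0])
```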
Bedoya, David; Manolakos, Elias S; Novotny, Vladimir
2011-03-01
Indices of Biological Integrity (IBI) are considered valid indicators of the overall health of a water body because the biological community is an endpoint within natural systems. However, prediction of biological integrity using information from multi-parameter environmental observations is a challenging problem due to the hierarchical organization of the natural environment, the existence of nonlinear inter-dependencies among variables, as well as natural stochasticity and measurement noise. We present a method for predicting the Fish Index of Biological Integrity (IBI) using multiple environmental observations at the state scale in Ohio. Instream (chemical and physical quality) and offstream parameters (regional and local upstream land uses, stream fragmentation, and point source density and intensity) are used for this purpose. The IBI predictions are obtained using the environmental site-similarity concept and following a simple to implement leave-one-out cross validation approach. An IBI prediction for a sampling site is calculated by averaging the observed IBI scores of observations clustered in the most similar branch of a dendrogram (a hierarchical clustering tree of environmental observations) built using the rest of the observations. The standardized Euclidean distance is used to assess dissimilarity between observations. The constructed predictive model was able to explain 61% of the IBI variability statewide. Stream fragmentation and regional land use explained 60% of the variability; the remaining 1% was explained by instream habitat quality. Metrics related to local land use, water quality, and point source density and intensity did not improve the predictive model at the state scale. The impact of local environmental conditions was evaluated by comparing local characteristics between well- and mispredicted sites. Significant differences in local land use patterns and upstream fragmentation density explained some of the model's over-predictions. Local land use conditions explained some of the model's IBI under-predictions at the state scale, since none of the variables within this group were included in the best final predictive model. Under-predicted sites also had higher levels of downstream fragmentation. The proposed variable-ranking and predictive modeling methodology is very well suited for the analysis of hierarchical environments, such as natural fresh water systems, with many cross-correlated environmental variables. It is computationally efficient, can be fully automated, does not make any preconceived assumptions about the variables' interdependency structure (such as linearity), and is able to rank variables in a database and generate IBI predictions using only non-parametric, easy-to-implement hierarchical clustering. Copyright © 2011 Elsevier Ltd. All rights reserved.
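A sketch of the leave-one-out, dendrogram-based prediction described: standardize the environmental variables, cluster the remaining sites hierarchically, and predict the held-out site's IBI as the mean score of the cluster containing its most similar site. The simulated variables, Ward linkage, and cluster count are illustrative assumptions standing in for the study's standardized-Euclidean dendrogram and branch-selection rule.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import cdist

rng = np.random.default_rng(9)
n = 120
env = rng.normal(size=(n, 5))        # standardized environmental variables (toy)
ibi = 30 + 8 * env[:, 0] - 5 * env[:, 1] + rng.normal(0, 3, n)   # toy IBI scores

def loo_predict(env, ibi, n_clusters=10):
    preds = np.empty(len(ibi))
    for i in range(len(ibi)):
        mask = np.arange(len(ibi)) != i
        E, Y = env[mask], ibi[mask]
        Z = linkage(E, method="ward")                 # dendrogram of the remaining sites
        labels = fcluster(Z, t=n_clusters, criterion="maxclust")
        # assign the held-out site to the cluster of its most similar site
        nearest = np.argmin(cdist(env[i][None, :], E)[0])
        preds[i] = Y[labels == labels[nearest]].mean()
    return preds

pred = loo_predict(env, ibi)
r2 = 1 - np.sum((ibi - pred) ** 2) / np.sum((ibi - ibi.mean()) ** 2)
print("leave-one-out explained variance:", round(float(r2), 2))
```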
Distributed Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.
2014-01-01
Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability.
NASA Technical Reports Server (NTRS)
Dilley, Arthur D.; McClinton, Charles R. (Technical Monitor)
2001-01-01
Results from a study to assess the accuracy of turbulent heating and skin friction prediction techniques for hypersonic applications are presented. The study uses the original and a modified Baldwin-Lomax turbulence model with a space marching code. Grid-converged turbulent predictions using the wall damping formulation (original model) and the local damping formulation (modified model) are compared with experimental data for several flat plates. The wall damping and local damping results are similar for hot wall conditions, but differ significantly for cold walls, i.e., Tw/Tt < 0.3, with the wall damping heating and skin friction 10-30% above the local damping results. Furthermore, the local damping predictions have reasonable or good agreement with the experimental heating data for all cases. The impact of the two formulations on the van Driest damping function and the turbulent eddy viscosity distribution for a cold wall case indicates the importance of including temperature gradient effects. Grid requirements for accurate turbulent heating predictions are also studied. These results indicate that a cell Reynolds number of 1 is required for grid-converged heating predictions, but coarser grids with a y+ less than 2 are adequate for design of hypersonic vehicles. Based on the results of this study, it is recommended that the local damping formulation be used with the Baldwin-Lomax and Cebeci-Smith turbulence models in design and analysis of Hyper-X and future hypersonic vehicles.
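The distinction at issue can be sketched numerically: the van Driest damping factor D = 1 - exp(-y+/A+) evaluated with wall-based versus local fluid properties in y+, inside the Baldwin-Lomax inner-layer eddy viscosity. The boundary-layer profiles below are crude illustrations, not a flow solution, and the constants are the commonly quoted defaults.

```python
import numpy as np

A_PLUS, KAPPA = 26.0, 0.4

def y_plus(y, rho, mu, tau_w):
    """y+ built from the given density/viscosity and the wall shear stress."""
    return y * np.sqrt(rho * tau_w) / mu

def van_driest_damping(yp):
    return 1.0 - np.exp(-yp / A_PLUS)

def inner_eddy_viscosity(y, rho_local, vorticity, D):
    """Baldwin-Lomax inner layer: mu_t = rho * (kappa * y * D)^2 * |omega|."""
    return rho_local * (KAPPA * y * D) ** 2 * np.abs(vorticity)

# Crude cold-wall illustration: temperature rises away from the wall, so local
# density drops and viscosity rises relative to the wall values.
y = np.linspace(1e-6, 5e-3, 200)              # wall-normal distance (m)
tau_w, vort = 50.0, 2.0e4                      # illustrative wall shear and |omega|
rho_w, mu_w = 1.2, 1.8e-5                      # wall-property values
T_ratio = 1.0 + 2.0 * np.tanh(y / 1e-3)        # toy local-to-wall temperature ratio
rho_loc, mu_loc = rho_w / T_ratio, mu_w * T_ratio ** 0.7

D_wall = van_driest_damping(y_plus(y, rho_w, mu_w, tau_w))       # wall damping
D_local = van_driest_damping(y_plus(y, rho_loc, mu_loc, tau_w))  # local damping
mut_wall = inner_eddy_viscosity(y, rho_loc, vort, D_wall)
mut_local = inner_eddy_viscosity(y, rho_loc, vort, D_local)

i = np.searchsorted(y, 2e-3)
print("eddy-viscosity ratio (wall/local damping) at y = 2 mm:",
      round(float(mut_wall[i] / mut_local[i]), 2))
```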
Computation of turbulent rotating channel flow with an algebraic Reynolds stress model
NASA Technical Reports Server (NTRS)
Warfield, M. J.; Lakshminarayana, B.
1986-01-01
An Algebraic Reynolds Stress Model has been implemented to modify the Kolmogorov-Prandtl eddy viscosity relation to produce an anisotropic turbulence model. The eddy viscosity relation becomes a function of the local turbulent production to dissipation ratio and local turbulence/rotation parameters. The model is used to predict fully-developed rotating channel flow over a diverse range of rotation numbers. In addition, predictions are obtained for a developing channel flow with high rotation. The predictions are compared with the experimental data available. Good predictions are achieved for mean velocity and wall shear stress over most of the rotation speeds tested. There is some prediction breakdown at high rotation (rotation number greater than .10) where the effects of the rotation on turbulence become quite complex. At high rotation and low Reynolds number, the laminarization on the trailing side represents a complex effect of rotation which is difficult to predict with the described models.
Woodin, Sarah A; Hilbish, Thomas J; Helmuth, Brian; Jones, Sierra J; Wethey, David S
2013-09-01
Modeling the biogeographic consequences of climate change requires confidence in model predictions under novel conditions. However, models often fail when extended to new locales, and such instances have been used as evidence of a change in physiological tolerance, that is, a fundamental niche shift. We explore an alternative explanation and propose a method for predicting the likelihood of failure based on physiological performance curves and environmental variance in the original and new environments. We define the transient event margin (TEM) as the gap between energetic performance failure, defined as CTmax, and the upper lethal limit, defined as LTmax. If TEM is large relative to environmental fluctuations, models will likely fail in new locales. If TEM is small relative to environmental fluctuations, models are likely to be robust for new locales, even when mechanism is unknown. Using temperature, we predict when biogeographic models are likely to fail and illustrate this with a case study. We suggest that failure is predictable from an understanding of how climate drives nonlethal physiological responses, but for many species such data have not been collected. Successful biogeographic forecasting thus depends on understanding when the mechanisms limiting distribution of a species will differ among geographic regions, or at different times, resulting in realized niche shifts. TEM allows prediction of the likelihood of such model failure.
Horowitz, A.J.; Elrick, K.A.; Demas, C.R.; Demcheck, D.K.
1991-01-01
Studies have demonstrated the utility of fluvial bed sediment chemical data in assessing local water-quality conditions. However, establishing local background trace element levels can be difficult. Reference to published average concentrations or the use of dated cores is often of little use in small areas of diverse local petrology, geology, land use, or hydrology. An alternative approach entails the construction of a series of sediment-trace element predictive models based on data from environmentally diverse but unaffected areas. Predicted values could provide a measure of local background concentrations, and comparison with actual measured concentrations could identify elevated trace elements and affected sites. Such a model set was developed from surface bed sediments collected nationwide in the United States. Tests of the models in a small Louisiana basin indicated that they could be used to establish local trace element background levels, but required recalibration to account for local geochemical conditions outside the range of samples used to generate the nationwide models.
Performance predictions for a parabolic localizer antenna on Runway 28R - San Francisco Airport.
DOT National Transportation Integrated Search
1973-06-01
The TSC ILS localizer model is used to predict the performance of the Texas Instruments "wide aperture" parabolic antenna as a localizer system for runway 28R at San Francisco Airport. Course derogation caused by the new American Airlines hangar is c...
Iowa calibration of MEPDG performance prediction models.
DOT National Transportation Integrated Search
2013-06-01
This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...
Francy, Donna S.; Brady, Amie M.G.; Carvin, Rebecca B.; Corsi, Steven R.; Fuller, Lori M.; Harrison, John H.; Hayhurst, Brett A.; Lant, Jeremiah; Nevers, Meredith B.; Terrio, Paul J.; Zimmerman, Tammy M.
2013-01-01
Predictive models have been used at beaches to improve the timeliness and accuracy of recreational water-quality assessments over the most common current approach to water-quality monitoring, which relies on culturing fecal-indicator bacteria such as Escherichia coli (E. coli). Beach-specific predictive models use environmental and water-quality variables that are easily and quickly measured as surrogates to estimate concentrations of fecal-indicator bacteria or to provide the probability that a State recreational water-quality standard will be exceeded. When predictive models are used for beach closure or advisory decisions, they are referred to as “nowcasts.” During the recreational seasons of 2010-12, the U.S. Geological Survey (USGS), in cooperation with 23 local and State agencies, worked to improve existing nowcasts at 4 beaches, validate predictive models at another 38 beaches, and collect data for predictive-model development at 7 beaches throughout the Great Lakes. This report summarizes efforts to collect data and develop predictive models by multiple agencies and to compile existing information on the beaches and beach-monitoring programs into one comprehensive report. Local agencies measured E. coli concentrations and variables expected to affect E. coli concentrations, such as wave height, turbidity, water temperature, and numbers of birds at the time of sampling. In addition to these field measurements, equipment was installed by the USGS or local agencies at or near several beaches to collect water-quality and meteorological measurements in near real time, including nearshore buoys, weather stations, and tributary staff gages and monitors. The USGS worked with local agencies to retrieve data from existing sources either manually or by use of tools designed specifically to compile and process data for predictive-model development. Predictive models were developed by use of linear regression and (or) partial least squares techniques for 42 beaches that had at least 2 years of data (2010-11 and sometimes earlier) and for 1 beach that had 1 year of data. For most models, software designed for model development by the U.S. Environmental Protection Agency (Virtual Beach) was used. The selected model for each beach was based on a combination of explanatory variables including, most commonly, turbidity, day of the year, change in lake level over 24 hours, wave height, wind direction and speed, and antecedent rainfall for various time periods. Forty-two predictive models were validated against data collected during an independent year (2012) and compared to the current method for assessing recreational water quality, which uses the previous day's E. coli concentration (persistence model). Goals for good predictive-model performance were responses that were at least 5 percent greater than the persistence model and overall correct responses greater than or equal to 80 percent, sensitivities (percentage of exceedances of the bathing-water standard that were correctly predicted by the model) greater than or equal to 50 percent, and specificities (percentage of nonexceedances correctly predicted by the model) greater than or equal to 85 percent. Out of 42 predictive models, 24 models yielded overall correct responses that were at least 5 percent greater than the use of the persistence model.
Predictive-model responses met the performance goals more often than the persistence-model responses in terms of overall correctness (28 versus 17 models, respectively), sensitivity (17 versus 4 models), and specificity (34 versus 25 models). Gaining knowledge of each beach and the factors that affect E. coli concentrations is important for developing good predictive models. Collection of additional years of data with a wide range of environmental conditions may also help to improve future model performance. The USGS will continue to work with local agencies in 2013 and beyond to develop and validate predictive models at beaches and improve existing nowcasts, restructuring monitoring activities to accommodate future uncertainties in funding and resources.
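A minimal sketch of this kind of beach-specific regression nowcast, together with the sensitivity/specificity accounting described above, is given below. It assumes a linear regression on log10 E. coli and a single-sample standard of 235 CFU/100 mL as the exceedance threshold; the report's models were built in Virtual Beach with beach-specific variables, so the function names, features, and threshold here are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_nowcast(X_train, ecoli_train):
    """Fit a beach-specific linear-regression nowcast on log10 E. coli concentration."""
    model = LinearRegression()
    model.fit(X_train, np.log10(ecoli_train))
    return model

def nowcast_metrics(model, X, ecoli_obs, standard=235.0):
    """Overall correctness, sensitivity, and specificity of exceedance predictions.
    standard: assumed single-sample criterion (CFU/100 mL); use the applicable State value."""
    pred_exceed = 10 ** model.predict(X) > standard
    obs_exceed = np.asarray(ecoli_obs) > standard
    correct = pred_exceed == obs_exceed
    sens = correct[obs_exceed].mean() if obs_exceed.any() else np.nan
    spec = correct[~obs_exceed].mean() if (~obs_exceed).any() else np.nan
    return correct.mean(), sens, spec
```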
A Particle Swarm Optimization-Based Approach with Local Search for Predicting Protein Folding.
Yang, Cheng-Hong; Lin, Yu-Shiun; Chuang, Li-Yeh; Chang, Hsueh-Wei
2017-10-01
The hydrophobic-polar (HP) model is commonly used for predicting protein folding structures and hydrophobic interactions. This study developed a particle swarm optimization (PSO)-based algorithm combined with local search algorithms; specifically, the high exploration PSO (HEPSO) algorithm (which can execute global search processes) was combined with three local search algorithms (hill-climbing algorithm, greedy algorithm, and Tabu table), yielding the proposed HE-L-PSO algorithm. By using 20 known protein structures, we evaluated the performance of the HE-L-PSO algorithm in predicting protein folding in the HP model. The proposed HE-L-PSO algorithm exhibited favorable performance in predicting both short and long amino acid sequences with high reproducibility and stability, compared with seven reported algorithms. The HE-L-PSO algorithm yielded optimal solutions for all predicted protein folding structures. All HE-L-PSO-predicted protein folding structures possessed a hydrophobic core that is similar to normal protein folding.
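The lattice moves and energy function of the HP model are too involved for a short excerpt, but the hybridization idea (a global PSO refined by a local search on the incumbent best) can be sketched on a simple continuous test problem. Everything below (objective, hill-climbing operator, PSO parameters) is a generic illustration and not the published HE-L-PSO algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # stand-in objective to minimize
    return float(np.sum(x ** 2))

def hill_climb(x, f, step=0.05, iters=30):
    """Simple local search: accept random perturbations that improve the objective."""
    best, fbest = x.copy(), f(x)
    for _ in range(iters):
        cand = best + rng.normal(0.0, step, size=best.shape)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
    return best, fbest

def pso_with_local_search(f, dim=10, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # global exploration step
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        # refine the current global best with a local search step
        g, _ = hill_climb(pbest[np.argmin(pbest_f)], f)
    return g, f(g)

print(pso_with_local_search(sphere))
```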
DeepLoc: prediction of protein subcellular localization using deep learning.
Almagro Armenteros, José Juan; Sønderby, Casper Kaae; Sønderby, Søren Kaae; Nielsen, Henrik; Winther, Ole
2017-11-01
The prediction of eukaryotic protein subcellular localization is a well-studied topic in bioinformatics due to its relevance in proteomics research. Many machine learning methods have been successfully applied in this task, but in most of them, predictions rely on annotation of homologues from knowledge databases. For novel proteins where no annotated homologues exist, and for predicting the effects of sequence variants, it is desirable to have methods for predicting protein properties from sequence information only. Here, we present a prediction algorithm using deep neural networks to predict protein subcellular localization relying only on sequence information. At its core, the prediction model uses a recurrent neural network that processes the entire protein sequence and an attention mechanism identifying protein regions important for the subcellular localization. The model was trained and tested on a protein dataset extracted from one of the latest UniProt releases, in which experimentally annotated proteins follow more stringent criteria than previously. We demonstrate that our model achieves a good accuracy (78% for 10 categories; 92% for membrane-bound or soluble), outperforming current state-of-the-art algorithms, including those relying on homology information. The method is available as a web server at http://www.cbs.dtu.dk/services/DeepLoc. Example code is available at https://github.com/JJAlmagro/subcellular_localization. The dataset is available at http://www.cbs.dtu.dk/services/DeepLoc/data.php. jjalma@dtu.dk. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
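A toy version of a sequence-only localization predictor in the same spirit (a bidirectional recurrent encoder with attention pooling over residue positions, followed by a classifier over compartments) might look like the PyTorch sketch below. The layer sizes, the 21-letter alphabet with padding index 0, and the single-layer LSTM are assumptions for illustration; this is not the published DeepLoc architecture.

```python
import torch
import torch.nn as nn

class SeqLocClassifier(nn.Module):
    """Illustrative bidirectional LSTM with additive attention over positions,
    loosely following the sequence-only localization idea (not the published model)."""
    def __init__(self, n_amino=21, emb=32, hidden=64, n_classes=10):
        super().__init__()
        self.embed = nn.Embedding(n_amino, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, seq):                                   # seq: (batch, length) integer-encoded residues
        h, _ = self.lstm(self.embed(seq))                     # (batch, length, 2*hidden)
        w = torch.softmax(self.attn(h).squeeze(-1), dim=1)    # attention weight per position
        context = torch.bmm(w.unsqueeze(1), h).squeeze(1)     # attention-weighted sum over positions
        return self.out(context)                              # class logits

model = SeqLocClassifier()
dummy = torch.randint(1, 21, (4, 200))    # 4 integer-encoded sequences of length 200
print(model(dummy).shape)                 # torch.Size([4, 10])
```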
APOLLO: a quality assessment service for single and multiple protein models.
Wang, Zheng; Eickholt, Jesse; Cheng, Jianlin
2011-06-15
We built a web server named APOLLO, which can evaluate the absolute global and local qualities of a single protein model using machine learning methods or the global and local qualities of a pool of models using a pair-wise comparison approach. Based on our evaluations on 107 CASP9 (Critical Assessment of Techniques for Protein Structure Prediction) targets, the predicted quality scores generated from our machine learning and pair-wise methods have an average per-target correlation of 0.671 and 0.917, respectively, with the true model quality scores. Based on our test on 92 CASP9 targets, our predicted absolute local qualities have an average difference of 2.60 Å with the actual distances to native structure. http://sysbio.rnet.missouri.edu/apollo/. Single and pair-wise global quality assessment software is also available at the site.
Predicting local field potentials with recurrent neural networks.
Kim, Louis; Harer, Jacob; Rangamani, Akshay; Moran, James; Parks, Philip D; Widge, Alik; Eskandar, Emad; Dougherty, Darin; Chin, Sang Peter
2016-08-01
We present a recurrent neural network using LSTM (Long Short-Term Memory) that is capable of modeling and predicting local field potentials. We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs for 1, 10, and 100 milliseconds forward in time. Our results show that prediction using LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.
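A minimal sketch of this kind of forecaster, assuming an LSTM that maps a window of past multi-channel samples to the signal a fixed horizon ahead, is shown below; the channel count, window length, and hidden size are placeholders rather than the settings used in the study.

```python
import torch
import torch.nn as nn

class LFPForecaster(nn.Module):
    """Sketch of an LSTM that maps a window of multi-channel samples to the
    signal h steps ahead (channel count and sizes are illustrative)."""
    def __init__(self, n_channels=8, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_channels)

    def forward(self, x):              # x: (batch, window, channels)
        h, _ = self.lstm(x)
        return self.head(h[:, -1])     # predict all channels at t + horizon

model = LFPForecaster()
window = torch.randn(16, 200, 8)       # 16 windows of 200 past samples, 8 channels
print(model(window).shape)             # torch.Size([16, 8])
```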
NASA Astrophysics Data System (ADS)
Daminelli, Simone; Thomas, Josephine Maria; Durán, Claudio; Vittorio Cannistraci, Carlo
2015-11-01
Bipartite networks are powerful descriptions of complex systems characterized by two different classes of nodes and connections allowed only across but not within the two classes. Unveiling physical principles, building theories and suggesting physical models to predict bipartite links such as product-consumer connections in recommendation systems or drug-target interactions in molecular networks can provide priceless information to improve e-commerce or to accelerate pharmaceutical research. The prediction of nonobserved connections starting from those already present in the topology of a network is known as the link-prediction problem. It represents an important subject both in many-body interaction theory in physics and in new algorithms for applied tools in computer science. The rationale is that the existing connectivity structure of a network can suggest where new connections can appear with higher likelihood in an evolving network, or where nonobserved connections are missing in a partially known network. Surprisingly, current complex network theory presents a theoretical bottleneck: a general framework for local-based link prediction directly in the bipartite domain is missing. Here, we overcome this theoretical obstacle and present a formal definition of the common neighbour index and the local-community-paradigm (LCP) for bipartite networks. As a consequence, we are able to introduce the first node-neighbourhood-based and LCP-based models for topological link prediction that utilize the bipartite domain. We performed link prediction evaluations in several networks of different size and of disparate origin, including technological, social and biological systems. Our models significantly improve topological prediction in many bipartite networks because they exploit local physical driving forces that participate in the formation and organization of many real-world bipartite networks. Furthermore, we present a local-based formalism that allows neighbourhood-based link prediction to be implemented intuitively and entirely in the bipartite domain.
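The exact bipartite common-neighbour and LCP definitions are given in the paper; as a simple stand-in, the sketch below scores candidate cross-class links by counting length-3 paths between the two endpoints, which is one neighbourhood-based score that stays entirely in the bipartite domain. It is an illustration of the idea only, not the authors' indices.

```python
import numpy as np

def l3_scores(B):
    """Score all non-observed cross-class links of a bipartite network by the
    number of length-3 paths between the two endpoints (illustrative
    neighbourhood-based score; not the paper's exact CN/LCP formulation).
    B: binary biadjacency matrix, rows = class A nodes, cols = class B nodes."""
    B = np.asarray(B, dtype=float)
    paths3 = B @ B.T @ B                           # (A x B) matrix of length-3 path counts
    return np.where(B == 0, paths3, -np.inf)       # rank only missing links

B = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(l3_scores(B))
```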
NASA Astrophysics Data System (ADS)
Zhang, Ying; Moges, Semu; Block, Paul
2018-01-01
Prediction of seasonal precipitation can provide actionable information to guide management of various sectoral activities. For instance, it is often translated into hydrological forecasts for better water resources management. However, many studies assume homogeneity in precipitation across an entire study region, which may prove ineffective for operational and local-level decisions, particularly for locations with high spatial variability. This study proposes advancing local-level seasonal precipitation predictions by first conditioning on regional-level predictions, as defined through objective cluster analysis, for western Ethiopia. To our knowledge, this is the first study predicting seasonal precipitation at high resolution in this region, where lives and livelihoods are vulnerable to precipitation variability given the high reliance on rain-fed agriculture and limited water resources infrastructure. The combination of objective cluster analysis, spatially high-resolution prediction of seasonal precipitation, and a modeling structure spanning statistical and dynamical approaches makes clear advances in prediction skill and resolution, as compared with previous studies. The statistical model improves versus the non-clustered case or dynamical models for a number of specific clusters in northwestern Ethiopia, with clusters having regional average correlation and ranked probability skill score (RPSS) values of up to 0.5 and 33 %, respectively. The general skill (after bias correction) of the two best-performing dynamical models over the entire study region is superior to that of the statistical models, although the dynamical models issue predictions at a lower resolution and the raw predictions require bias correction to guarantee comparable skills.
Contaminant dispersal in bounded turbulent shear flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, J.M.; Bernard, P.S.; Chiang, K.F.
The dispersion of smoke downstream of a line source at the wall and at y+ = 30 in a turbulent boundary layer has been predicted with a non-local model of the scalar fluxes ūc and v̄c. The predicted plume from the wall source has been compared to high Schmidt number experimental measurements using a combination of hot-wire anemometry to obtain velocity component data synchronously with concentration data obtained optically. The predicted plumes from the source at y+ = 30 and at the wall also have been compared to a low Schmidt number direct numerical simulation. Near the source, the non-local flux models give considerably better predictions than models which account solely for mean gradient transport. At a sufficient distance downstream the gradient models give reasonably good predictions.
NASA Astrophysics Data System (ADS)
Cavalcanti, Eric G.; Wiseman, Howard M.
2012-10-01
The 1964 theorem of John Bell shows that no model that reproduces the predictions of quantum mechanics can simultaneously satisfy the assumptions of locality and determinism. On the other hand, the assumptions of signal locality plus predictability are also sufficient to derive Bell inequalities. This simple theorem, previously noted but published only relatively recently by Masanes, Acin and Gisin, has fundamental implications not entirely appreciated. Firstly, nothing can be concluded about the ontological assumptions of locality or determinism independently of each other—it is possible to reproduce quantum mechanics with deterministic models that violate locality as well as indeterministic models that satisfy locality. On the other hand, the operational assumption of signal locality is an empirically testable (and well-tested) consequence of relativity. Thus Bell inequality violations imply that we can trust that some events are fundamentally unpredictable, even if we cannot trust that they are indeterministic. This result grounds the quantum-mechanical prohibition of arbitrarily accurate predictions on the assumption of no superluminal signalling, regardless of any postulates of quantum mechanics. It also sheds a new light on an early stage of the historical debate between Einstein and Bohr.
Analytical model for local scour prediction around hydrokinetic turbine foundations
NASA Astrophysics Data System (ADS)
Musa, M.; Heisel, M.; Hill, C.; Guala, M.
2017-12-01
Marine and hydrokinetic renewable energy is an emerging sustainable and secure technology which produces clean energy by harnessing water currents from mostly tidal and fluvial waterways. Hydrokinetic turbines are typically anchored at the bottom of the channel, which can be erodible or non-erodible. Recent experiments demonstrated the interactions between operating turbines and an erodible surface with sediment transport, resulting in a remarkable localized erosion-deposition pattern significantly larger than those observed around static in-river structures such as bridge piers. Predicting local scour geometry at the base of hydrokinetic devices is extremely important for foundation design, installation, operation, and maintenance (IO&M), and long-term structural integrity. An analytical modeling framework is proposed applying the phenomenological theory of turbulence to the flow structures that promote the scouring process at the base of a turbine. The evolution of scour is directly linked to device operating conditions through the turbine drag force, which is inferred to locally dictate the energy dissipation rate in the scour region. The predictive model is validated using experimental data obtained at the University of Minnesota's St. Anthony Falls Laboratory (SAFL), covering two sediment mobility regimes (clear water and live bed), different turbine designs, hydraulic parameters, grain size distribution and bedform types. The model is applied to a potential prototype-scale deployment in the lower Mississippi River, demonstrating its practical relevance and endorsing the feasibility of hydrokinetic energy power plants in large sandy rivers. Multi-turbine deployments are further studied experimentally by monitoring both local and non-local geomorphic effects introduced by a twelve-turbine staggered array model installed in a wide channel at SAFL. Local scour behind each turbine is well captured by the theoretical predictive model. However, multi-turbine configurations introduce subtle large-scale effects that deepen local scour within the first two rows of the array and develop spatially as a two-dimensional oscillation of the mean bed downstream of the entire array.
The Business Case for Automated Software Engineering
NASA Technical Reports Server (NTRS)
Menzies, Tim; Elrawas, Oussama; Hihn, Jairus M.; Feather, Martin S.; Madachy, Ray; Boehm, Barry
2007-01-01
Adoption of advanced automated SE (ASE) tools would be more favored if a business case could be made that these tools are more valuable than alternate methods. In theory, software prediction models can be used to make that case. In practice, this is complicated by the 'local tuning' problem. Normally, predictors for software effort, defects, and threats use local data to tune their predictions. Such local tuning data is often unavailable. This paper shows that assessing the relative merits of different SE methods need not require precise local tunings. STAR 1 is a simulated annealer plus a Bayesian post-processor that explores the space of possible local tunings within software prediction models. STAR 1 ranks project decisions by their effects on effort, defects, and threats. In experiments with NASA systems, STAR 1 found one project where ASE tools were essential for minimizing effort, defects, and threats, and another project where ASE tools were merely optional.
NASA Technical Reports Server (NTRS)
Schuecker, Clara; Davila, Carlos G.; Rose, Cheryl A.
2010-01-01
Five models for matrix damage in fiber reinforced laminates are evaluated for matrix-dominated loading conditions under plane stress and are compared both qualitatively and quantitatively. The emphasis of this study is on a comparison of the response of embedded plies subjected to a homogeneous stress state. Three of the models are specifically designed for modeling the non-linear response due to distributed matrix cracking under homogeneous loading, and also account for non-linear (shear) behavior prior to the onset of cracking. The remaining two models are localized damage models intended for predicting local failure at stress concentrations. The modeling approaches of distributed vs. localized cracking as well as the different formulations of damage initiation and damage progression are compared and discussed.
Protein (multi-)location prediction: utilizing interdependencies via a generative model
Shatkay, Hagit
2015-01-01
Motivation: Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein’s function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. Results: We introduce a probabilistic generative model for protein localization, and develop a system based on it—which we call MDLoc—that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. Availability and implementation: MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. Contact: shatkay@udel.edu. PMID:26072505
Protein (multi-)location prediction: utilizing interdependencies via a generative model.
Simha, Ramanuja; Briesemeister, Sebastian; Kohlbacher, Oliver; Shatkay, Hagit
2015-06-15
Proteins are responsible for a multitude of vital tasks in all living organisms. Given that a protein's function and role are strongly related to its subcellular location, protein location prediction is an important research area. While proteins move from one location to another and can localize to multiple locations, most existing location prediction systems assign only a single location per protein. A few recent systems attempt to predict multiple locations for proteins, however, their performance leaves much room for improvement. Moreover, such systems do not capture dependencies among locations and usually consider locations as independent. We hypothesize that a multi-location predictor that captures location inter-dependencies can improve location predictions for proteins. We introduce a probabilistic generative model for protein localization, and develop a system based on it-which we call MDLoc-that utilizes inter-dependencies among locations to predict multiple locations for proteins. The model captures location inter-dependencies using Bayesian networks and represents dependency between features and locations using a mixture model. We use iterative processes for learning model parameters and for estimating protein locations. We evaluate our classifier MDLoc, on a dataset of single- and multi-localized proteins derived from the DBMLoc dataset, which is the most comprehensive protein multi-localization dataset currently available. Our results, obtained by using MDLoc, significantly improve upon results obtained by an initial simpler classifier, as well as on results reported by other top systems. MDLoc is available at: http://www.eecis.udel.edu/∼compbio/mdloc. © The Author 2015. Published by Oxford University Press.
Predictive Multiple Model Switching Control with the Self-Organizing Map
NASA Technical Reports Server (NTRS)
Motter, Mark A.
2000-01-01
A predictive, multiple model control strategy is developed by extension of self-organizing map (SOM) local dynamic modeling of nonlinear autonomous systems to a control framework. Multiple SOMs collectively model the global response of a nonautonomous system to a finite set of representative prototype controls. Each SOM provides a codebook representation of the dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the global minimization of a similarity metric. The SOM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme that selects the best available model for the applied control. SOM based linear models are used to predict the response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal.
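A stripped-down sketch of SOM-based local modeling with model switching is given below: a small one-dimensional SOM is trained on state vectors, a local linear model is fit to the samples won by each node, and prediction switches to the model of the best-matching unit. The paper's scheme uses multiple SOMs organized over prototype control sequences, so the code below (node count, learning schedule, single-SOM structure) is a simplified, assumption-laden illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, n_nodes=16, iters=2000, lr0=0.5, sigma0=3.0):
    """1-D SOM: returns codebook vectors organized along a line of nodes."""
    W = data[rng.choice(len(data), n_nodes)].astype(float)
    idx = np.arange(n_nodes)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))        # best-matching unit
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 1e-3
        h = np.exp(-((idx - bmu) ** 2) / (2 * sigma ** 2))    # neighborhood function
        W += lr * h[:, None] * (x - W)
    return W

def fit_local_models(states, targets, W):
    """Least-squares linear model per SOM node, fit on the samples that node wins."""
    bmus = np.argmin(np.linalg.norm(states[:, None, :] - W[None], axis=2), axis=1)
    models = {}
    for k in range(len(W)):
        Xk, yk = states[bmus == k], targets[bmus == k]
        if len(Xk) > states.shape[1]:                          # enough samples to fit
            A = np.hstack([Xk, np.ones((len(Xk), 1))])
            models[k], *_ = np.linalg.lstsq(A, yk, rcond=None)
    return models

def predict(x, W, models):
    """Switch to the local model of the best-matching unit for the current state."""
    bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    coef = models.get(bmu)
    return np.nan if coef is None else float(np.append(x, 1.0) @ coef)
```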
EUV local CDU healing performance and modeling capability towards 5nm node
NASA Astrophysics Data System (ADS)
Jee, Tae Kwon; Timoshkov, Vadim; Choi, Peter; Rio, David; Tsai, Yu-Cheng; Yaegashi, Hidetami; Koike, Kyohei; Fonseca, Carlos; Schoofs, Stijn
2017-10-01
Both local variability and optical proximity correction (OPC) errors are major contributors to the edge placement error (EPE) budget, which is closely related to the device yield. The post-litho contact hole healing will be demonstrated to meet after-etch local variability specifications using a low-dose, 30 mJ/cm2 dose-to-size, positive tone developed (PTD) resist with relevant throughput in high volume manufacturing (HVM). The total local variability of the 5 nm node (N5) contact holes will be characterized in terms of local CD uniformity (LCDU), local placement error (LPE), and contact edge roughness (CER) using a statistical methodology. The CD healing process has complex etch proximity effects, so it is challenging for the OPC prediction accuracy to meet EPE requirements for N5. Thus, the prediction accuracy of an after-etch model will be investigated and discussed using the ASML Tachyon OPC model.
Genetically informed ecological niche models improve climate change predictions.
Ikeda, Dana H; Max, Tamara L; Allan, Gerard J; Lau, Matthew K; Shuster, Stephen M; Whitham, Thomas G
2017-01-01
We examined the hypothesis that ecological niche models (ENMs) more accurately predict species distributions when they incorporate information on population genetic structure, and concomitantly, local adaptation. Local adaptation is common in species that span a range of environmental gradients (e.g., soils and climate). Moreover, common garden studies have demonstrated a covariance between neutral markers and functional traits associated with a species' ability to adapt to environmental change. We therefore predicted that genetically distinct populations would respond differently to climate change, resulting in predicted distributions with little overlap. To test whether genetic information improves our ability to predict a species' niche space, we created genetically informed ecological niche models (gENMs) using Populus fremontii (Salicaceae), a widespread tree species in which prior common garden experiments demonstrate strong evidence for local adaptation. Four major findings emerged: (i) gENMs predicted population occurrences with up to 12-fold greater accuracy than models without genetic information; (ii) tests of niche similarity revealed that three ecotypes, identified on the basis of neutral genetic markers and locally adapted populations, are associated with differences in climate; (iii) our forecasts indicate that ongoing climate change will likely shift these ecotypes further apart in geographic space, resulting in greater niche divergence; (iv) ecotypes that currently exhibit the largest geographic distribution and niche breadth appear to be buffered the most from climate change. As diverse agents of selection shape genetic variability and structure within species, we argue that gENMs will lead to more accurate predictions of species distributions under climate change. © 2016 John Wiley & Sons Ltd.
Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H
2017-07-01
Extracranial robotic radiotherapy employs external markers and a correlation model to trace the tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. All the existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict the future values. Unlike these existing methods, the pLRF-ELM performs prediction by modeling the higher-level features obtained by mapping the raw respiratory motion into the random feature space of ELM, instead of directly modeling the raw respiratory motion. The developed method is evaluated using the dataset acquired from 31 patients for two horizons in line with the latencies of treatment systems such as CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods. Results further highlight that the abstracted higher-level features are suitable to approximate the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
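The core ELM idea, i.e. a fixed random hidden layer that maps the raw motion window into a higher-level feature space in which only the output weights are solved, can be sketched as below. The published method uses local receptive fields rather than a plain random projection, so this is a generic ELM baseline with made-up window, horizon, and regularization settings.

```python
import numpy as np

rng = np.random.default_rng(2)

class BasicELM:
    """Basic extreme learning machine: a fixed random hidden layer maps the raw
    input window into a feature space; only the output weights are solved for.
    (Sketch of the general idea; the published pLRF-ELM uses local receptive
    fields rather than a plain random projection.)"""
    def __init__(self, n_hidden=200, reg=1e-3):
        self.n_hidden, self.reg = n_hidden, reg

    def fit(self, X, y):
        d = X.shape[1]
        self.W = rng.normal(size=(d, self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # random feature space
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)          # ridge solution for output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# usage sketch: predict a respiratory-like trace h samples ahead from the last p samples
p, h = 20, 5
trace = np.sin(np.linspace(0, 60, 2000)) + 0.05 * rng.normal(size=2000)
X = np.array([trace[i:i + p] for i in range(len(trace) - p - h)])
y = trace[p + h:]
model = BasicELM().fit(X[:1500], y[:1500])
print(np.mean((model.predict(X[1500:]) - y[1500:]) ** 2))   # held-out mean squared error
```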
A range-based predictive localization algorithm for WSID networks
NASA Astrophysics Data System (ADS)
Liu, Yuan; Chen, Junjie; Li, Gang
2017-11-01
Most studies on localization algorithms are conducted on the sensor networks with densely distributed nodes. However, the non-localizable problems are prone to occur in the network with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for the wireless sensor networks syncretizing the RFID (WSID) networks. The Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test algorithm. In addition, collaborative localization schemes are introduced to locate the target in the non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for the network with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.
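As a small illustration of the Gaussian-mixture trajectory-prediction component (leaving out the RSSI/APIT and collaborative-localization parts), the sketch below fits a mixture over pairs of consecutive 2-D displacements and predicts the next position from the conditional mean of the most responsible component. The component count and the pairing of steps are assumptions, not the RPLA specification.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_step_model(track, n_components=3):
    """Fit a GMM on pairs of consecutive 2-D displacements (prev step, next step)."""
    steps = np.diff(np.asarray(track, dtype=float), axis=0)   # (T-1, 2)
    pairs = np.hstack([steps[:-1], steps[1:]])                # (T-2, 4)
    return GaussianMixture(n_components=n_components, covariance_type='full',
                           random_state=0).fit(pairs)

def predict_next_position(gmm, track):
    """Next position = current position + conditional mean of the next step
    given the last step, using the most responsible mixture component."""
    track = np.asarray(track, dtype=float)
    last_step = track[-1] - track[-2]
    # responsibility of each component for the observed last step (marginal over next step)
    resp = []
    for k in range(gmm.n_components):
        mu, cov = gmm.means_[k][:2], gmm.covariances_[k][:2, :2]
        diff = last_step - mu
        resp.append(gmm.weights_[k] *
                    np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) /
                    np.sqrt(np.linalg.det(2 * np.pi * cov)))
    k = int(np.argmax(resp))
    mu, cov = gmm.means_[k], gmm.covariances_[k]
    # conditional Gaussian mean: mu2 + C21 C11^-1 (x1 - mu1)
    next_step = mu[2:] + cov[2:, :2] @ np.linalg.solve(cov[:2, :2], last_step - mu[:2])
    return track[-1] + next_step
```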
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-22
... Commission, adopts a point-to-point predictive model for determining the ability of individual locations to... predictive model for reliably and presumptively determining the ability of individual locations, through the... adopted a point-to-point predictive model for determining the ability of individual locations to receive...
A cluster expansion model for predicting activation barrier of atomic processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rehman, Tafizur; Jaipal, M.; Chatterjee, Abhijit, E-mail: achatter@iitk.ac.in
2013-06-15
We introduce a procedure based on cluster expansion models for predicting the activation barrier of atomic processes encountered while studying the dynamics of a material system using the kinetic Monte Carlo (KMC) method. Starting with an interatomic potential description, a mathematical derivation is presented to show that the local environment dependence of the activation barrier can be captured using cluster interaction models. Next, we develop a systematic procedure for training the cluster interaction model on-the-fly, which involves: (i) obtaining activation barriers for a handful of local environments using nudged elastic band (NEB) calculations, (ii) identifying the local environment by analyzing the NEB results, and (iii) estimating the cluster interaction model parameters from the activation barrier data. Once a cluster expansion model has been trained, it is used to predict activation barriers without requiring any additional NEB calculations. Numerical studies are performed to validate the cluster expansion model by studying hop processes in Ag/Ag(100). We show that the use of the cluster expansion model with KMC enables efficient generation of an accurate process rate catalog.
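The fitting step can be illustrated with a toy linear cluster expansion: each local environment is mapped to a vector of cluster counts, and the effective cluster interactions are obtained by least squares against NEB-computed barriers. The feature set (neighbor-shell counts and one pair term) and the numbers below are invented for illustration; a real implementation would enumerate symmetry-distinct clusters as described in the paper.

```python
import numpy as np

def cluster_counts(env):
    """Map a local environment to cluster-expansion features.
    Here 'env' is a dict of occupation counts per neighbor shell (illustrative);
    a real implementation would enumerate symmetry-distinct clusters."""
    return np.array([1.0,                                     # empty cluster (constant term)
                     env.get('nn', 0),                        # occupied first-neighbor sites
                     env.get('nnn', 0),                       # occupied second-neighbor sites
                     env.get('nn', 0) * env.get('nnn', 0)])   # a simple pair cluster

def fit_cluster_expansion(envs, barriers):
    """Least-squares fit of effective cluster interactions to NEB barriers."""
    X = np.vstack([cluster_counts(e) for e in envs])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(barriers, dtype=float), rcond=None)
    return coeffs

def predict_barrier(coeffs, env):
    return float(cluster_counts(env) @ coeffs)

# usage sketch with made-up NEB barriers (eV)
envs = [{'nn': 1, 'nnn': 0}, {'nn': 2, 'nnn': 1}, {'nn': 3, 'nnn': 2}, {'nn': 0, 'nnn': 1}]
barriers = [0.45, 0.52, 0.61, 0.40]
coeffs = fit_cluster_expansion(envs, barriers)
print(predict_barrier(coeffs, {'nn': 2, 'nnn': 2}))
```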
Predicting plankton net community production in the Atlantic Ocean
NASA Astrophysics Data System (ADS)
Serret, Pablo; Robinson, Carol; Fernández, Emilio; Teira, Eva; Tilstone, Gavin; Pérez, Valesca
2009-07-01
We present, test and implement two contrasting models to predict euphotic zone net community production (NCP), which are based on 14C primary production (PO 14CP) to NCP relationships over two latitudinal (ca. 30°S-45°N) transects traversing highly productive and oligotrophic provinces of the Atlantic Ocean (NADR, CNRY, BENG, NAST-E, ETRA and SATL, Longhurst et al., 1995 [An estimation of global primary production in the ocean from satellite radiometer data. Journal of Plankton Research 17, 1245-1271]). The two models include similar ranges of PO 14CP and community structure, but differ in the relative influence of allochthonous organic matter in the oligotrophic provinces. Both models were used to predict NCP from PO 14CP measurements obtained during 11 local and three seasonal studies in the Atlantic, Pacific and Indian Oceans, and from satellite-derived estimates of PO 14CP. Comparison of these NCP predictions with concurrent in situ measurements and geochemical estimates of NCP showed that geographic and annual patterns of NCP can only be predicted when the relative trophic importance of local vs. distant processes is similar in both modeled and predicted ecosystems. The system-dependent ability of our models to predict NCP seasonality suggests that trophic-level dynamics are stronger than differences in hydrodynamic regime, taxonomic composition and phytoplankton growth. The regional differences in the predictive power of both models confirm the existence of biogeographic differences in the scale of trophic dynamics, which impede the use of a single generalized equation to estimate global marine plankton NCP. This paper shows the potential of a systematic empirical approach to predict plankton NCP from local and satellite-derived P estimates.
Weather Research and Forecasting Model Sensitivity Comparisons for Warm Season Convective Initiation
NASA Technical Reports Server (NTRS)
Watson, Leela R.
2007-01-01
This report describes the work done by the Applied Meteorology Unit (AMU) in assessing the success of different model configurations in predicting warm season convection over East-Central Florida. The Weather Research and Forecasting Environmental Modeling System (WRF EMS) software allows users to choose between two dynamical cores - the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). There are also data assimilation analysis packages available for the initialization of the WRF model - the Local Analysis and Prediction System (LAPS) and the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS). Besides model core and initialization options, the WRF model can be run with one- or two-way nesting. Having a series of initialization options and WRF cores, as well as many options within each core, creates challenges for local forecasters, such as determining which configuration options are best to address specific forecast concerns. This project assessed three available model initializations to determine which configuration best predicts warm season convective initiation in East-Central Florida. The project also examined the use of one- and two-way nesting in predicting warm season convection.
Real-time assessments of water quality: expanding nowcasting throughout the Great Lakes
2013-01-01
Nowcasts are systems that inform the public of current bacterial water-quality conditions at beaches on the basis of predictive models. During 2010–12, the U.S. Geological Survey (USGS) worked with 23 local and State agencies to improve existing operational beach nowcast systems at 4 beaches and expand the use of predictive models in nowcasts at an additional 45 beaches throughout the Great Lakes. The predictive models were specific to each beach, and the best model for each beach was based on a unique combination of environmental and water-quality explanatory variables. The variables used most often in models to predict Escherichia coli (E. coli) concentrations or the probability of exceeding a State recreational water-quality standard included turbidity, day of the year, wave height, wind direction and speed, antecedent rainfall for various time periods, and change in lake level over 24 hours. During validation of 42 beach models during 2012, the models performed better than the current method to assess recreational water quality (previous day's E. coli concentration). The USGS will continue to work with local agencies to improve nowcast predictions, enable technology transfer of predictive model development procedures, and implement more operational systems during 2013 and beyond.
Gogol-Prokurat, Melanie
2011-01-01
If species distribution models (SDMs) can rank habitat suitability at a local scale, they may be a valuable conservation planning tool for rare, patchily distributed species. This study assessed the ability of Maxent, an SDM reported to be appropriate for modeling rare species, to rank habitat suitability at a local scale for four edaphic endemic rare plants of gabbroic soils in El Dorado County, California, and examined the effects of grain size, spatial extent, and fine-grain environmental predictors on local-scale model accuracy. Models were developed using species occurrence data mapped on public lands and were evaluated using an independent data set of presence and absence locations on surrounding lands, mimicking a typical conservation-planning scenario that prioritizes potential habitat on unsurveyed lands surrounding known occurrences. Maxent produced models that were successful at discriminating between suitable and unsuitable habitat at the local scale for all four species, and predicted habitat suitability values were proportional to likelihood of occurrence or population abundance for three of four species. Unfortunately, models with the best discrimination (i.e., AUC) were not always the most useful for ranking habitat suitability. The use of independent test data showed metrics that were valuable for evaluating which variables and model choices (e.g., grain, extent) to use in guiding habitat prioritization for conservation of these species. A goodness-of-fit test was used to determine whether habitat suitability values ranked habitat suitability on a continuous scale. If they did not, a minimum acceptable error predicted area criterion was used to determine the threshold for classifying habitat as suitable or unsuitable. I found a trade-off between model extent and the use of fine-grain environmental variables: goodness of fit was improved at larger extents, and fine-grain environmental variables improved local-scale accuracy, but fine-grain variables were not available at large extents. No single model met all habitat prioritization criteria, and the best models were overlaid to identify consensus areas of high suitability. Although the four species modeled here co-occur and are treated together for conservation planning, model accuracy and predicted suitable areas varied among species.
The local heat transfer mathematical model between vibrated fluidized beds and horizontal tubes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Xuejun; College of Biology and Chemical Engineering, Panzhihua University, Panzhihua 617000; Ye, Shichao
2008-05-15
A dimensionless mathematical model is proposed to predict the local heat transfer coefficients between vibrated fluidized beds and immersed horizontal tubes; the effects of the gas film thickness and the contact time of particle packets are taken into account. Experiments using glass beads (average diameter d_p = 1.83 mm) were conducted in a two-dimensional vibrated fluidized bed (240 mm x 80 mm). The local heat transfer behavior between the vibrated fluidized bed and the horizontal tube surface has been investigated. The results show that the theoretical predictions are in good agreement with experimental data, so the model is able to predict the local heat transfer coefficients between vibrated fluidized beds and immersed horizontal tubes reasonably well, with errors within ±15%. The results can provide a reference for future design and research on vibrated fluidized beds with immersed horizontal tubes.
Huang, Yingxiang; Lee, Junghye; Wang, Shuang; Sun, Jimeng; Liu, Hongfang; Jiang, Xiaoqian
2018-05-16
Data sharing has been a big challenge in biomedical informatics because of privacy concerns. Contextual embedding models have demonstrated a very strong representative capability to describe medical concepts (and their context), and they have shown promise as an alternative way to support deep-learning applications without the need to disclose original data. However, contextual embedding models acquired from individual hospitals cannot be directly combined because their embedding spaces are different, and naive pooling renders combined embeddings useless. The aim of this study was to present a novel approach to address these issues and to promote sharing representation without sharing data. Without sacrificing privacy, we also aimed to build a global model from representations learned from local private data and synchronize information from multiple sources. We propose a methodology that harmonizes different local contextual embeddings into a global model. We used Word2Vec to generate contextual embeddings from each source and Procrustes to fuse different vector models into one common space by using a list of corresponding pairs as anchor points. We performed prediction analysis with harmonized embeddings. We used sequential medical events extracted from the Medical Information Mart for Intensive Care III database to evaluate the proposed methodology in predicting the next likely diagnosis of a new patient using either structured data or unstructured data. Under different experimental scenarios, we confirmed that the global model built from harmonized local models achieves a more accurate prediction than local models and global models built from naive pooling. Such aggregation of local models using our unique harmonization can serve as the proxy for a global model, combining information from a wide range of institutions and information sources. It allows information unique to a certain hospital to become available to other sites, increasing the fluidity of information flow in health care. ©Yingxiang Huang, Junghye Lee, Shuang Wang, Jimeng Sun, Hongfang Liu, Xiaoqian Jiang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 16.05.2018.
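A compact sketch of the harmonization step, assuming gensim Word2Vec for the site-level embeddings and an orthogonal Procrustes alignment over a list of shared anchor concepts, is given below. The toy corpora, anchor list, and the choice to map site B into site A's space are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
from gensim.models import Word2Vec
from scipy.linalg import orthogonal_procrustes

# two toy "hospitals", each a list of patient event sequences (codes are illustrative)
corpus_a = [["dm2", "metformin", "hba1c"], ["htn", "lisinopril", "bp_check"]] * 50
corpus_b = [["dm2", "insulin", "hba1c"], ["htn", "amlodipine", "bp_check"]] * 50

emb_a = Word2Vec(corpus_a, vector_size=32, window=5, min_count=1, seed=0).wv
emb_b = Word2Vec(corpus_b, vector_size=32, window=5, min_count=1, seed=0).wv

# anchor concepts present at both sites define the correspondence
anchors = ["dm2", "hba1c", "htn", "bp_check"]
A = np.vstack([emb_b[w] for w in anchors])   # source space (site B)
B = np.vstack([emb_a[w] for w in anchors])   # target/"global" space (site A)

R, _ = orthogonal_procrustes(A, B)           # rotation aligning site B's space onto site A's

def to_global(word):
    """Map a site-B embedding into the harmonized (site-A) space."""
    return emb_b[word] @ R

# after alignment, site-B-only concepts become comparable with site-A vectors
print(float(np.dot(to_global("insulin"), emb_a["metformin"])))
```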
Analytic prediction of unconfined boundary layer flashback limits in premixed hydrogen-air flames
NASA Astrophysics Data System (ADS)
Hoferichter, Vera; Hirsch, Christoph; Sattelmayer, Thomas
2017-05-01
Flame flashback is a major challenge in premixed combustion. Hence, the prediction of the minimum flow velocity to prevent boundary layer flashback is of high technical interest. This paper presents an analytic approach to predicting boundary layer flashback limits for channel and tube burners. The model reflects the experimentally observed flashback mechanism and consists of a local and global analysis. Based on the local analysis, the flow velocity at flashback initiation is obtained depending on flame angle and local turbulent burning velocity. The local turbulent burning velocity is calculated in accordance with a predictive model for boundary layer flashback limits of duct-confined flames presented by the authors in an earlier publication. This ensures consistency of both models. The flame angle of the stable flame near flashback conditions can be obtained by various methods. In this study, an approach based on global mass conservation is applied and is validated using Mie-scattering images from a channel burner test rig at ambient conditions. The predicted flashback limits are compared to experimental results and to literature data from preheated tube burner experiments. Finally, a method for including the effect of burner exit temperature is demonstrated and used to explain the discrepancies in flashback limits obtained from different burner configurations reported in the literature.
NASA Astrophysics Data System (ADS)
Graham, Wendy; Destouni, Georgia; Demmy, George; Foussereau, Xavier
1998-07-01
The methodology developed in Destouni and Graham [Destouni, G., Graham, W.D., 1997. The influence of observation method on local concentration statistics in the subsurface. Water Resour. Res. 33 (4) 663-676.] for predicting locally measured concentration statistics for solute transport in heterogeneous porous media under saturated flow conditions is applied to the prediction of conservative nonreactive solute transport in the vadose zone where observations are obtained by soil coring. Exact analytical solutions are developed for both the mean and variance of solute concentrations measured in discrete soil cores using a simplified physical model for vadose-zone flow and solute transport. Theoretical results show that while the ensemble mean concentration is relatively insensitive to the length-scale of the measurement, predictions of the concentration variance are significantly impacted by the sampling interval. Results also show that accounting for vertical heterogeneity in the soil profile results in significantly less spreading in the mean and variance of the measured solute breakthrough curves, indicating that it is important to account for vertical heterogeneity even for relatively small travel distances. Model predictions for both the mean and variance of locally measured solute concentration, based on independently estimated model parameters, agree well with data from a field tracer test conducted in Manatee County, Florida.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mackay, D.; Di Guardo, A.; Paterson, S.
Evaluation of chemical fate in the environment has been suggested to be best accomplished using a five-stage process in which a sequence of increasingly site-specific multimedia mass balance models is applied. This approach is illustrated for chlorobenzene and linear alkylbenzene sulfonates (LAS). The first two stages involve classifying the chemical and quantifying the emissions into each environmental compartment. In the third stage, the characteristics of the chemical are determined using the evaluative equilibrium criterion model, which is capable of treating a variety of chemicals, including those that are involatile and insoluble in water. This evaluation is conducted in three steps using levels 1, 2, and 3 versions of the model, which introduce increasing complexity and more realistic representations of the environment. In the fourth stage, ChemCAN, which is a level 3 model for specific regions of Canada, is used to predict the chemical's fate in southern Ontario. The final stage is to apply local environmental models to predict environmental exposure concentrations. For chlorobenzene, the local model was the SoilFug model, which predicts the fate of agrochemicals, and for LAS the WW-TREAT, GRiDS, and ROUT models were used to predict the fate of LAS in a sewage treatment plant and in riverine receiving waters. It is concluded that this systematic approach provides a comprehensive assessment of chemical fate, revealing the broad characteristics of chemical behavior and quantifying the likely local and regional exposure levels.
The development of local calibration factors for implementing the highway safety manual in Maryland.
DOT National Transportation Integrated Search
2014-03-01
The goal of the study was to determine local calibration factors (LCFs) to adjust predicted motor vehicle traffic crashes for the Maryland-specific application of the Highway Safety Manual (HSM). Since HSM predictive models were developed using d...
NASA Technical Reports Server (NTRS)
Lee, Jong-Won; Harris, Charles E.
1990-01-01
A mathematical model based on the Euler-Bernoulli beam theory is proposed for predicting the effective Young's moduli of piecewise isotropic composite laminates with local ply curvatures in the main load-carrying layers. Strains in corrugated layers, in-phase layers, and out-of-phase layers are predicted for various geometries and material configurations by assuming the matrix layers act as elastic foundations with different spring constants. The effective Young's moduli measured from corrugated aluminum specimens and aluminum/epoxy specimens with in-phase and out-of-phase wavy patterns coincide very well with the model predictions. Moire fringe analyses of an in-phase specimen and an out-of-phase specimen are also presented, confirming the main assumption of the model related to the elastic constraint due to the matrix layers. The present model is also compared with the experimental results and other models, including the microbuckling models, published in the literature. The results of the present study show that even a very small-scale local ply curvature produces a noticeable effect on the mechanical constitutive behavior of a laminated composite.
NASA Astrophysics Data System (ADS)
Shaman, J.; Stieglitz, M.; Zebiak, S.; Cane, M.; Day, J. F.
2002-12-01
We present an ensemble local hydrologic forecast derived from the seasonal forecasts of the International Research Institute for Climate Prediction (IRI). Three-month seasonal forecasts were used to resample historical meteorological conditions and generate ensemble forcing datasets for a TOPMODEL-based hydrology model. Eleven retrospective forecasts were run at a Florida and a New York site. Forecast skill was assessed for mean area modeled water table depth (WTD), i.e., near-surface soil wetness conditions, and compared with WTD simulated with observed data. Hydrology model forecast skill was evident at the Florida site but not at the New York site. At the Florida site, persistence of hydrologic conditions and local skill of the IRI seasonal forecast contributed to the local hydrologic forecast skill. This forecast will permit probabilistic prediction of future hydrologic conditions. At the Florida site, we have also quantified the link between modeled WTD (i.e., drought) and the amplification and transmission of St. Louis encephalitis virus (SLEV). We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission associated with human clinical cases. We then combine the seasonal forecasts of local, modeled WTD with this empirical relationship and produce retrospective probabilistic seasonal forecasts of epidemic SLEV transmission in Florida. Epidemic SLEV transmission forecast skill is demonstrated. These findings will permit real-time forecasting of drought and resultant SLEV transmission in Florida.
Coupling of the Models of Human Physiology and Thermal Comfort
NASA Astrophysics Data System (ADS)
Pokorny, J.; Jicha, M.
2013-04-01
A coupled model of human physiology and thermal comfort was developed in Dymola/Modelica. The coupling combines a modified Tanabe model of human physiology with the thermal comfort model developed by Zhang. The Coupled model allows prediction of both local and overall thermal sensation and comfort from local boundary conditions representing ambient and personal factors. The aim of this study was to compare predictions of the Coupled model with the Fiala model predictions and experimental data. Validation data were taken from the literature, mainly from the validation manual of the software Theseus-FE [1]. In the paper, validation of the model for very light physical activities (1 met) in indoor environments with temperatures from 12 °C up to 48 °C is presented. The Coupled model predicts mean skin temperature well for cold, neutral and warm environments. However, prediction of core temperature in cold environments is inaccurate and strongly affected by ambient temperature. Evaluation of thermal comfort in warm environments is supplemented by skin wettedness prediction. The Coupled model is designed for non-uniform and transient environmental conditions; it is also suitable for simulation of thermal comfort in vehicle cabins. The use of the model is limited to very light physical activities (up to 1.2 met).
Improving of local ozone forecasting by integrated models.
Gradišar, Dejan; Grašič, Boštjan; Božnar, Marija Zlata; Mlakar, Primož; Kocijan, Juš
2016-09-01
This paper discusses the problem of forecasting the maximum ozone concentrations in urban microlocations, where reliable alerting of the local population when thresholds have been surpassed is necessary. To improve the forecast, the methodology of integrated models is proposed. The model is based on multilayer perceptron neural networks that use as inputs all available information from the QualeAria air-quality model, the WRF numerical weather prediction model and on-site measurements of meteorology and air pollution. While air-quality and meteorological models cover a large geographical 3-dimensional space, their local resolution is often not satisfactory. On the other hand, empirical methods have the advantage of good local forecasts. In this paper, integrated models are used for improved 1-day-ahead forecasting of the maximum hourly value of ozone within each day for representative locations in Slovenia. The WRF meteorological model is used for forecasting meteorological variables and the QualeAria air-quality model for gas concentrations. Their predictions, together with measurements from ground stations, are used as inputs to a neural network. The model validation results show that integrated models noticeably improve ozone forecasts and provide better alert systems.
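As a rough illustration of the integrated-model idea, the sketch below maps forecasts from the air-quality and weather models plus on-site measurements to the next day's maximum hourly ozone with a small multilayer perceptron. The file and column names are hypothetical placeholders, not the authors' dataset.

```python
# Minimal sketch of an integrated 1-day-ahead max-ozone model: an MLP that maps
# QualeAria/WRF forecasts plus on-site measurements to the next day's hourly maximum.
# Column names and the CSV file are illustrative assumptions, not the authors' data.
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

df = pd.read_csv("ozone_station.csv")          # hypothetical daily records
features = ["qualearia_o3_max", "wrf_tmax", "wrf_wind", "obs_o3_max_today", "obs_no2"]
X, y = df[features], df["obs_o3_max_tomorrow"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out days:", model.score(X_te, y_te))
```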
Predicting diameters inside bark for 10 important hardwood species
Donald E. Hilt; Everette D. Rast; Herman J. Bailey
1983-01-01
General models for predicting DIB/DOB ratios up the stem, applicable over wide geographic areas, have been developed for 10 important hardwood species. Results indicate that the ratios either decrease or remain constant up the stem. Methods for adjusting the general models to local conditions are presented. The prediction models can be used in conjunction with optical...
NASA Astrophysics Data System (ADS)
Totani, Tomonori; Takeuchi, Tsutomu T.
2002-05-01
We give an explanation for the origin of various properties observed in local infrared galaxies and make predictions for galaxy counts and cosmic background radiation (CBR) using a new model extended from that for optical/near-infrared galaxies. Important new characteristics of this study are that (1) mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies and that (2) the large-grain dust temperature Tdust is calculated based on a physical consideration for energy balance rather than by using the empirical relation between Tdust and total infrared luminosity LIR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, LIR-Tdust correlation, and infrared luminosity function are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. Then we make predictions for faint infrared counts (in 15, 60, 90, 170, 450, and 850 μm) and CBR using this model. We found results considerably different from those of most previous works based on the empirical LIR-Tdust relation; especially, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K), as often seen in starburst galaxies or ultraluminous infrared galaxies in the local and high-z universe. This indicates that intense starbursts of forming elliptical galaxies should have occurred at z~2-3, in contrast to the previous results that significant starbursts beyond z~1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The intergalactic optical depth of TeV gamma rays based on our model is also presented.
NASA Technical Reports Server (NTRS)
Arnold, Steven M. (Technical Monitor); Bansal, Yogesh; Pindera, Marek-Jerzy
2004-01-01
The High-Fidelity Generalized Method of Cells is a new micromechanics model for unidirectionally reinforced periodic multiphase materials that was developed to overcome the original model's shortcomings. The high-fidelity version predicts the local stress and strain fields with dramatically greater accuracy relative to the original model through the use of a better displacement field representation. Herein, we test the high-fidelity model's predictive capability in estimating the elastic moduli of periodic composites characterized by repeating unit cells obtained by rotation of an infinite square fiber array through an angle about the fiber axis. Such repeating unit cells may contain a few or many fibers, depending on the rotation angle. In order to analyze such multi-inclusion repeating unit cells efficiently, the high-fidelity micromechanics model's framework is reformulated using the local/global stiffness matrix approach. The excellent agreement with the corresponding results obtained from the standard transformation equations confirms the new model's predictive capability for periodic composites characterized by multi-inclusion repeating unit cells lacking planes of material symmetry. Comparison of the effective moduli and local stress fields with the corresponding results obtained from the original Generalized Method of Cells dramatically highlights the original model's shortcomings for certain classes of unidirectional composites.
NASA Astrophysics Data System (ADS)
Cao, Duc; Moses, Gregory; Delettrez, Jacques
2015-08-01
An implicit, non-local thermal conduction algorithm based on the algorithm developed by Schurtz, Nicolai, and Busquet (SNB) [Schurtz et al., Phys. Plasmas 7, 4238 (2000)] for non-local electron transport is presented and has been implemented in the radiation-hydrodynamics code DRACO. To study the model's effect on DRACO's predictive capability, simulations of shot 60 303 from OMEGA are completed using the iSNB model, and the computed shock speed vs. time is compared to experiment. Temperature outputs from the iSNB model are compared with the non-local transport model of Goncharov et al. [Phys. Plasmas 13, 012702 (2006)]. Effects on adiabat are also examined in a polar drive surrogate simulation. Results show that the iSNB model is not only capable of flux-limitation but also preheat prediction while remaining numerically robust and sacrificing little computational speed. Additionally, the results provide strong incentive to further modify key parameters within the SNB theory, namely, the newly introduced non-local mean free path. This research was supported by the Laboratory for Laser Energetics of the University of Rochester.
NASA Astrophysics Data System (ADS)
Yung, L. Y. Aaron; Somerville, Rachel S.
2017-06-01
The well-established Santa Cruz semi-analytic galaxy formation framework has been shown to be quite successful at explaining observations in the local Universe, as well as making predictions for low-redshift observations. Recently, metallicity-based gas partitioning and H2-based star formation recipes have been implemented in our model, replacing the legacy cold-gas-based recipe. We then use our revised model to explore the high-redshift Universe and make predictions up to z = 15. Although our model is only calibrated to observations from the local Universe, our predictions match remarkably well with the mid- to high-redshift observational constraints available to date, including rest-frame UV luminosity functions and the reionization history as constrained by CMB and IGM observations. We provide predictions for individual and statistical galaxy properties over a wide range of redshifts (z = 4 - 15), including objects that are too far or too faint to be detected with current facilities. Using our model predictions, we also provide forecasted luminosity functions and other observables for upcoming studies with JWST.
Sazonovas, A; Japertas, P; Didziapetris, R
2010-01-01
This study presents a new type of acute toxicity (LD(50)) prediction that enables automated assessment of the reliability of predictions (which is synonymous with the assessment of the Model Applicability Domain as defined by the Organization for Economic Cooperation and Development). Analysis involved nearly 75,000 compounds from six animal systems (acute rat toxicity after oral and intraperitoneal administration; acute mouse toxicity after oral, intraperitoneal, intravenous, and subcutaneous administration). Fragmental Partial Least Squares (PLS) with 100 bootstraps yielded baseline predictions that were automatically corrected for non-linear effects in local chemical spaces--a combination called Global, Adjusted Locally According to Similarity (GALAS) modelling methodology. Each prediction obtained in this manner is provided with a reliability index value that depends on both compound's similarity to the training set (that accounts for similar trends in LD(50) variations within multiple bootstraps) and consistency of experimental results with regard to the baseline model in the local chemical environment. The actual performance of the Reliability Index (RI) was proven by its good (and uniform) correlations with Root Mean Square Error (RMSE) in all validation sets, thus providing quantitative assessment of the Model Applicability Domain. The obtained models can be used for compound screening in the early stages of drug development and prioritization for experimental in vitro testing or later in vivo animal acute toxicity studies.
A predictive model for the tokamak density limit
Teng, Q.; Brennan, D. P.; Delgado-Aparicio, L.; ...
2016-07-28
We reproduce the Greenwald density limit in all tokamak experiments by using a phenomenologically correct model with parameters in the range of experiments. A simple model of equilibrium evolution and local power balance inside the island has been implemented to calculate the radiation-driven thermo-resistive tearing mode growth and explain the density limit. Strong destabilization of the tearing mode due to an imbalance of local Ohmic heating and radiative cooling in the island predicts the density limit within a few percent. Furthermore, we found that the density limit is a local edge limit and is weakly dependent on impurity densities. Our results are robust to a substantial variation in model parameters within the range of experiments.
Evaluation of a locally homogeneous flow model of spray combustion
NASA Technical Reports Server (NTRS)
Mao, C. P.; Szekely, G. A., Jr.; Faeth, G. M.
1980-01-01
A model of spray combustion which employs a second-order turbulence model was developed. The assumption of locally homogeneous flow is made, implying infinitely fast transport rates between the phases. Measurements to test the model were completed for a gaseous n-propane flame and an air-atomized n-pentane spray flame, burning in stagnant air at atmospheric pressure. Profiles of mean velocity and temperature, as well as velocity fluctuations and Reynolds stress, were measured in the flames. The predictions for the gas flame were in excellent agreement with the measurements. The predictions for the spray were qualitatively correct, but effects of finite-rate interphase transport were evident, resulting in an overestimation of the rate of development of the flow. Predictions of spray penetration length at high pressures, including supercritical combustion conditions, were also completed for comparison with earlier measurements. Test conditions involved a pressure-atomized n-pentane spray, burning in stagnant air at pressures of 3, 5, and 9 MPa. The comparison between predictions and measurements was fair. This is not a very sensitive test of the model, however, and further high-pressure experimental and theoretical results are needed before a satisfactory assessment of the locally homogeneous flow approximation can be made.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindsay, WD; Oncora Medical, LLC, Philadelphia, PA; Berlind, CG
Purpose: While rates of local control have been well characterized after stereotactic body radiotherapy (SBRT) for stage I non-small cell lung cancer (NSCLC), less data are available characterizing survival and normal tissue toxicities, and no validated models exist assessing these parameters after SBRT. We evaluate the reliability of various machine learning techniques when applied to radiation oncology datasets to create predictive models of mortality, tumor control, and normal tissue complications. Methods: A dataset of 204 consecutive patients with stage I non-small cell lung cancer (NSCLC) treated with stereotactic body radiotherapy (SBRT) at the University of Pennsylvania between 2009 and 2013 was used to create predictive models of tumor control, normal tissue complications, and mortality in this IRB-approved study. Nearly 200 data fields of detailed patient- and tumor-specific information, radiotherapy dosimetric measurements, and clinical outcomes data were collected. Predictive models were created for local tumor control, 1- and 3-year overall survival, and nodal failure using 60% of the data (leaving the remainder as a test set). After applying feature selection and dimensionality reduction, nonlinear support vector classification was applied to the resulting features. Models were evaluated for accuracy and area under the ROC curve on the 81-patient test set. Results: Models for common events in the dataset (such as mortality at one year) had the highest predictive power (AUC = 0.67, p < 0.05). For rare occurrences such as radiation pneumonitis and local failure (each occurring in less than 10% of patients), too few events were present to create reliable models. Conclusion: Although this study demonstrates the validity of predictive analytics using information extracted from patient medical records and can most reliably predict for survival after SBRT, larger sample sizes are needed to develop predictive models for normal tissue toxicities, and more advanced machine learning methodologies need to be considered in the future.
Adjusting the Stems Regional Forest Growth Model to Improve Local Predictions
W. Brad Smith
1983-01-01
A simple procedure using double sampling is described for adjusting growth in the STEMS regional forest growth model to compensate for subregional variations. Predictive accuracy of the STEMS model (a distance-independent, individual tree growth model for Lake States forests) was improved by using this procedure.
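The double-sampling adjustment can be illustrated with a minimal sketch: regional-model growth predictions are scaled by the ratio of observed to predicted growth on a locally remeasured subsample. The numbers below are illustrative, not STEMS output.

```python
# Minimal sketch of a double-sampling ratio adjustment: regional model predictions
# are scaled by the ratio of observed to predicted growth on a local subsample of
# remeasured plots. Arrays are illustrative placeholders.
import numpy as np

observed_local = np.array([2.1, 1.8, 2.4, 2.0])    # observed growth on remeasured plots
predicted_local = np.array([2.5, 2.2, 2.6, 2.3])   # regional-model predictions, same plots
adjustment = observed_local.sum() / predicted_local.sum()

predicted_new_stands = np.array([1.9, 2.7, 2.2])   # regional predictions for unmeasured stands
adjusted = adjustment * predicted_new_stands
print(adjustment, adjusted)
```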
Evaluating the utility of mid-infrared spectral subspaces for predicting soil properties.
Sila, Andrew M; Shepherd, Keith D; Pokhariyal, Ganesh P
2016-04-15
We propose four methods for finding local subspaces in large spectral libraries: (a) cosine angle spectral matching; (b) hit quality index spectral matching; (c) self-organizing maps; and (d) archetypal analysis. We then evaluate prediction accuracies for global and subspace calibration models. These methods were tested on a mid-infrared spectral library containing 1907 soil samples collected from 19 different countries under the Africa Soil Information Service project. Calibration models for pH, Mehlich-3 Ca, Mehlich-3 Al, total carbon and clay soil properties were developed for the whole library and for the subspaces. Root mean square error of prediction was used to evaluate the predictive performance of the subspace and global models and was computed using a one-third-holdout validation set. The effect of pretreating the spectra with different methods was tested for the 1st and 2nd derivative Savitzky-Golay algorithm, multiplicative scatter correction, standard normal variate, and standard normal variate followed by detrending. In summary, the results show that the global models outperformed the subspace models. We therefore conclude that global models are more accurate than the local models except in a few cases. For instance, sand and clay root mean square error values from local models obtained with the archetypal analysis method were 50% poorer than those of the global models, except for subspace models obtained using multiplicative scatter corrected spectra, which were 12% better. However, the subspace approach provides novel methods for discovering data patterns that may exist in large spectral libraries.
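A minimal sketch of the evaluation workflow is given below: Savitzky-Golay first-derivative pretreatment, a PLS calibration, and root mean square error of prediction on a one-third hold-out set. The random arrays stand in for the actual spectral library and reference measurements.

```python
# Minimal sketch of the spectral workflow: Savitzky-Golay 1st-derivative pretreatment,
# a PLS calibration, and RMSEP on a one-third hold-out set. Data are placeholders.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectra = rng.normal(size=(300, 1700))          # stand-in for MIR absorbance spectra
ph = rng.normal(6.5, 0.8, size=300)             # stand-in for measured soil pH

X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, ph, test_size=1/3, random_state=0)

pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
print("RMSEP:", rmsep)
```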
NASA Astrophysics Data System (ADS)
Abani, Neerav; Reitz, Rolf D.
2010-09-01
An advanced mixing model was applied to study engine emissions and combustion with different injection strategies ranging from multiple injections, early injection and grouped-hole nozzle injection in light- and heavy-duty diesel engines. The model was implemented in the KIVA-CHEMKIN engine combustion code and simulations were conducted at different mesh resolutions. The model was compared with the standard KIVA spray model that uses the Lagrangian-Drop and Eulerian-Fluid (LDEF) approach, and a Gas Jet spray model that improves predictions of liquid sprays. A Vapor Particle Method (VPM) is introduced that accounts for sub-grid-scale mixing of fuel vapor and more accurately predicts the mixing of fuel vapor over a range of mesh resolutions. The fuel vapor is transported as particles until a certain distance from the nozzle is reached where the local jet half-width is adequately resolved by the local mesh scale. Within this distance the vapor particle is transported while releasing fuel vapor locally, as determined by a weighting factor. The VPM model more accurately predicts fuel-vapor penetrations for early-cycle injections and flame lift-off lengths for late-cycle injections. Engine combustion computations show that, compared to the standard KIVA and Gas Jet spray models, the VPM spray model improves predictions of in-cylinder pressure, heat release rate and engine emissions of NOx, CO and soot with coarse mesh resolutions. The VPM spray model is thus a good tool for efficiently investigating diesel engine combustion with practical mesh resolutions, thereby saving computer time.
MetaMQAP: a meta-server for the quality assessment of protein models.
Pawlowski, Marcin; Gajda, Michal J; Matlak, Ryszard; Bujnicki, Janusz M
2008-09-29
Computational models of protein structure are usually inaccurate and exhibit significant deviations from the true structure. The utility of models depends on the degree of these deviations. A number of predictive methods have been developed to discriminate between the globally incorrect and approximately correct models. However, only a few methods predict correctness of different parts of computational models. Several Model Quality Assessment Programs (MQAPs) have been developed to detect local inaccuracies in unrefined crystallographic models, but it is not known if they are useful for computational models, which usually exhibit different and much more severe errors. The ability to identify local errors in models was tested for eight MQAPs: VERIFY3D, PROSA, BALA, ANOLEA, PROVE, TUNE, REFINER, PROQRES on 8251 models from the CASP-5 and CASP-6 experiments, by calculating the Spearman's rank correlation coefficients between per-residue scores of these methods and local deviations between C-alpha atoms in the models vs. experimental structures. As a reference, we calculated the value of correlation between the local deviations and trivial features that can be calculated for each residue directly from the models, i.e. solvent accessibility, depth in the structure, and the number of local and non-local neighbours. We found that absolute correlations of scores returned by the MQAPs and local deviations were poor for all methods. In addition, scores of PROQRES and several other MQAPs strongly correlate with 'trivial' features. Therefore, we developed MetaMQAP, a meta-predictor based on a multivariate regression model, which uses scores of the above-mentioned methods, but in which trivial parameters are controlled. MetaMQAP predicts the absolute deviation (in Angströms) of individual C-alpha atoms between the model and the unknown true structure as well as global deviations (expressed as root mean square deviation and GDT_TS scores). Local model accuracy predicted by MetaMQAP shows an impressive correlation coefficient of 0.7 with true deviations from native structures, a significant improvement over all constituent primary MQAP scores. The global MetaMQAP score is correlated with model GDT_TS on the level of 0.89. Finally, we compared our method with the MQAPs that scored best in the 7th edition of CASP, using CASP7 server models (not included in the MetaMQAP training set) as the test data. In our benchmark, MetaMQAP is outperformed only by PCONS6 and method QA_556 - methods that require comparison of multiple alternative models and score each of them depending on its similarity to other models. MetaMQAP is however the best among methods capable of evaluating just single models. We implemented the MetaMQAP as a web server available for free use by all academic users at the URL https://genesilico.pl/toolkit/
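The benchmarking step described above reduces, for each model, to a rank correlation between per-residue quality scores and per-residue C-alpha deviations; a minimal sketch with placeholder arrays:

```python
# Minimal sketch of the benchmarking step: Spearman rank correlation between a
# per-residue MQAP score and the per-residue C-alpha deviation of a model from the
# experimental structure. Arrays are illustrative placeholders.
import numpy as np
from scipy.stats import spearmanr

mqap_score = np.array([0.9, 0.7, 0.8, 0.3, 0.2, 0.4])      # per-residue quality score
ca_deviation = np.array([0.5, 1.1, 0.8, 4.2, 6.0, 3.5])    # per-residue deviation in angstroms

rho, pvalue = spearmanr(mqap_score, ca_deviation)
print(f"Spearman rho = {rho:.2f} (p = {pvalue:.3f})")
```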
The Green's functions for peridynamic non-local diffusion.
Wang, L J; Xu, J F; Wang, J X
2016-09-01
In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
Validation of the thermophysiological model by Fiala for prediction of local skin temperatures
NASA Astrophysics Data System (ADS)
Martínez, Natividad; Psikuta, Agnes; Kuklane, Kalev; Quesada, José Ignacio Priego; de Anda, Rosa María Cibrián Ortiz; Soriano, Pedro Pérez; Palmer, Rosario Salvador; Corberán, José Miguel; Rossi, René Michel; Annaheim, Simon
2016-12-01
The most complete and realistic physiological data are derived from direct measurements during human experiments; however, such experiments present limitations such as ethical concerns and time and cost burdens. Thermophysiological models are able to predict the human thermal response in a wide range of environmental conditions, but their use is limited due to a lack of validation. The aim of this work was to validate the thermophysiological model by Fiala for the prediction of local skin temperatures against a dedicated database containing 43 different human experiments representing a wide range of conditions. The validation was conducted based on root-mean-square deviation (rmsd) and bias. The thermophysiological model by Fiala showed good precision when predicting core and mean skin temperature (rmsd 0.26 and 0.92 °C, respectively) and also local skin temperatures for most body sites (average rmsd for local skin temperatures 1.32 °C). However, an increased deviation of the predictions was observed for the forehead skin temperature (rmsd of 1.63 °C) and for the thigh during exercising exposures (rmsd of 1.41 °C). Possible reasons for the observed deviations are a lack of information on measurement circumstances (hair, head coverage interference) or an overestimation of the sweat evaporative cooling capacity for the head and thigh, respectively. This work has highlighted the importance of collecting details about the clothing worn and about how and where the sensors were attached to the skin for achieving more precise results in the simulations.
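The validation metrics are straightforward to reproduce; a minimal sketch with illustrative temperature values:

```python
# Minimal sketch of the validation metrics: root-mean-square deviation and bias
# between predicted and measured local skin temperatures. Values are illustrative.
import numpy as np

predicted = np.array([33.1, 32.4, 31.8, 34.0])   # model skin temperatures, deg C
measured = np.array([32.6, 32.9, 31.2, 33.5])    # experimental skin temperatures, deg C

rmsd = np.sqrt(np.mean((predicted - measured) ** 2))
bias = np.mean(predicted - measured)
print(f"rmsd = {rmsd:.2f} degC, bias = {bias:.2f} degC")
```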
Bedbrook, Claire N; Yang, Kevin K; Rice, Austin J; Gradinaru, Viviana; Arnold, Frances H
2017-10-01
There is growing interest in studying and engineering integral membrane proteins (MPs) that play key roles in sensing and regulating cellular response to diverse external signals. A MP must be expressed, correctly inserted and folded in a lipid bilayer, and trafficked to the proper cellular location in order to function. The sequence and structural determinants of these processes are complex and highly constrained. Here we describe a predictive, machine-learning approach that captures this complexity to facilitate successful MP engineering and design. Machine learning on carefully-chosen training sequences made by structure-guided SCHEMA recombination has enabled us to accurately predict the rare sequences in a diverse library of channelrhodopsins (ChRs) that express and localize to the plasma membrane of mammalian cells. These light-gated channel proteins of microbial origin are of interest for neuroscience applications, where expression and localization to the plasma membrane is a prerequisite for function. We trained Gaussian process (GP) classification and regression models with expression and localization data from 218 ChR chimeras chosen from a 118,098-variant library designed by SCHEMA recombination of three parent ChRs. We use these GP models to identify ChRs that express and localize well and show that our models can elucidate sequence and structure elements important for these processes. We also used the predictive models to convert a naturally occurring ChR incapable of mammalian localization into one that localizes well.
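A minimal sketch of the modelling step, assuming some numeric encoding of the chimera sequences (the random features below are placeholders for real sequence/structure encodings):

```python
# Minimal sketch: Gaussian process classification of whether a chimera localizes, and
# GP regression of a localization score, on an encoded set of ChR variants.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = rng.normal(size=(218, 50))                    # encoded chimeras (placeholder)
localizes = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder binary labels
score = X[:, 0] + 0.1 * rng.normal(size=218)      # placeholder localization intensity

clf = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=5.0)).fit(X, localizes)
reg = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=5.0), alpha=1e-2).fit(X, score)
print(clf.predict_proba(X[:3]), reg.predict(X[:3]))
```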
Effects of local and regional climatic fluctuations on dengue outbreaks in southern Taiwan.
Chuang, Ting-Wu; Chaves, Luis Fernando; Chen, Po-Jiang
2017-01-01
Southern Taiwan has been a hotspot for dengue fever transmission since 1998. During 2014 and 2015, Taiwan experienced unprecedented dengue outbreaks and the causes are poorly understood. This study aims to investigate the influence of regional and local climate conditions on the incidence of dengue fever in Taiwan, as well as to develop a climate-based model for future forecasting. Historical time-series data on dengue outbreaks in southern Taiwan from 1998 to 2015 were investigated. Local climate variables were analyzed using a distributed lag non-linear model (DLNM), and the model of best fit was used to predict dengue incidence between 2013 and 2015. The cross-wavelet coherence approach was used to evaluate the regional El Niño Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) effects on dengue incidence and local climate variables. The DLNM results highlighted the important non-linear and lag effects of minimum temperature and precipitation. Minimum temperature above 23°C or below 17°C can increase dengue incidence rate with lag effects of 10 to 15 weeks. Moderate to high precipitation can increase dengue incidence rates with a lag of 10 or 20 weeks. The model of best fit successfully predicted dengue transmission between 2013 and 2015. The prediction accuracy ranged from 0.7 to 0.9, depending on the number of weeks ahead of the prediction. ENSO and IOD were associated with nonstationary inter-annual patterns of dengue transmission. IOD had a greater impact on the seasonality of local climate conditions. Our findings suggest that dengue transmission can be affected by regional and local climatic fluctuations in southern Taiwan. The climate-based model developed in this study can provide important information for dengue early warning systems in Taiwan. Local climate conditions might be influenced by ENSO and IOD, to result in unusual dengue outbreaks.
The Hubble diagram for a system within dark energy: influence of some relevant quantities
NASA Astrophysics Data System (ADS)
Saarinen, J.; Teerikorpi, P.
2014-08-01
Aims: We study the influence of relevant quantities, including the density of dark energy (DE), on the predicted Hubble outflow around a system of galaxies. In particular, we are interested in the difference between two models: 1) the standard ΛCDM model, with the everywhere-constant DE density, and 2) the "Swiss cheese model", where the Universe is as old as in the standard model and the DE density is zero on short scales, including the environment of the system. Methods: We calculated the current predicted outflow patterns of dwarf galaxies around a Local Group-like system, using different values for the mass of the group, the local DE density, and the time of ejection of the dwarf galaxies, which are treated as test particles. These results are compared with the observed Hubble flow around the Local Group. Results: The predicted distance-velocity relations around galaxy groups are not very sensitive indicators of the DE density, owing to the observational scatter and the uncertainties caused by the mass used for the group and a range in the ejection times. In general, the Local Group outflow data agree with the local DE density being equal to the global one, if the Local Group mass is about 4 × 10^12 M⊙; a lower mass ≲ 2 × 10^12 M⊙ could suggest a zero local DE density. The dependence of the inferred DE density on the mass is a handicap in this and other common dynamical methods. This emphasizes the need to use different approaches together for constraining the local DE density.
Howell, Brett A; Chauhan, Anuj
2010-08-01
Physiologically based pharmacokinetic (PBPK) models were developed for design and optimization of liposome therapy for treatment of overdoses of tricyclic antidepressants and local anesthetics. In vitro drug-binding data for pegylated, anionic liposomes and published mechanistic equations for partition coefficients were used to develop the models. The models were proven reliable through comparisons to intravenous data. The liposomes were predicted to be highly effective at treating amitriptyline overdoses, with reductions in the area under the concentration versus time curves (AUC) of 64% for the heart and brain. Peak heart and brain drug concentrations were predicted to drop by 20%. Bupivacaine AUC and peak concentration reductions were lower at 15.4% and 17.3%, respectively, for the heart and brain. The predicted pharmacokinetic profiles following liposome administration agreed well with data from clinical studies where protein fragments were administered to patients for overdose treatment. Published data on local cardiac function were used to relate the predicted concentrations in the body to local pharmacodynamic effects in the heart. While the results offer encouragement for future liposome therapies geared toward overdose, it is imperative to point out that animal experiments and phase I clinical trials are the next steps to ensuring the efficacy of the treatment.
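The compartmental idea behind such PBPK simulations can be sketched with a toy two-compartment model extended by a liposome binding term; the rate constants below are assumed for illustration and are not the fitted values from the study.

```python
# Minimal sketch: a two-compartment drug model with an added liposome binding term,
# integrated with scipy. Parameters are assumed, not the study's PBPK values.
import numpy as np
from scipy.integrate import odeint

k12, k21, k_el, k_bind = 0.8, 0.5, 0.3, 1.5   # 1/h, assumed rate constants

def rhs(y, t):
    c_plasma, c_tissue, c_bound = y
    dc_plasma = -k12 * c_plasma + k21 * c_tissue - k_el * c_plasma - k_bind * c_plasma
    dc_tissue = k12 * c_plasma - k21 * c_tissue
    dc_bound = k_bind * c_plasma                # drug sequestered by liposomes
    return [dc_plasma, dc_tissue, dc_bound]

t = np.linspace(0, 24, 200)                     # hours
sol = odeint(rhs, y0=[10.0, 0.0, 0.0], t=t)
auc_tissue = np.trapz(sol[:, 1], t)             # AUC in the tissue compartment
print(auc_tissue)
```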
Improved global prediction of 300 nautical mile mean free air anomalies
NASA Technical Reports Server (NTRS)
Cruz, J. Y.
1982-01-01
Current procedures used for the global prediction of 300nm mean anomalies starting from known values of 1 deg by 1 deg mean anomalies yield unreasonable prediction results when applied to 300nm blocks which have a rapidly varying gravity anomaly field and which contain relatively few observed 60nm blocks. Improvement of overall 300nm anomaly prediction is first achieved by using area-weighted as opposed to unweighted averaging of the 25 generated 60nm mean anomalies inside the 300nm block. Then, improvement of prediction over rough 300nm blocks is realized through the use of fully known 1 deg by 1 deg mean elevations, taking advantage of the correlation that locally exists between 60nm mean anomalies and 60nm mean elevations inside the 300nm block. An improved prediction model which adapts itself to the roughness of the local anomaly field is found to be the model of Least Squares Collocation with systematic parameters, the systematic parameter being the slope b which is a type of Bouguer slope expressing the correlation that locally exists between 60nm mean anomalies and 60nm mean elevations.
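The area-weighted averaging step can be sketched as follows, with cosine-of-latitude weights standing in for the block areas and random numbers in place of the generated 60nm mean anomalies:

```python
# Minimal sketch of area-weighted averaging: the 25 one-degree (60 nm) mean anomalies
# in a 5x5 block are weighted by the cosine of their mid-latitudes instead of being
# averaged with equal weights. Numbers are illustrative placeholders.
import numpy as np

anomalies = np.random.default_rng(2).normal(0.0, 20.0, size=(5, 5))  # mGal, placeholder
lat_mid = np.array([42.5, 43.5, 44.5, 45.5, 46.5])                   # block mid-latitudes, deg

weights = np.cos(np.radians(lat_mid))[:, None] * np.ones((5, 5))
weighted_mean = np.sum(weights * anomalies) / np.sum(weights)
unweighted_mean = anomalies.mean()
print(weighted_mean, unweighted_mean)
```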
Nazemi, S Majid; Amini, Morteza; Kontulainen, Saija A; Milner, Jaques S; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D
2017-01-01
Quantitative computed tomography based subject-specific finite element modeling has potential to clarify the role of subchondral bone alterations in knee osteoarthritis initiation, progression, and pain. However, it is unclear what density-modulus equation(s) should be applied with subchondral cortical and subchondral trabecular bone when constructing finite element models of the tibia. Using a novel approach applying neural networks, optimization, and back-calculation against in situ experimental testing results, the objective of this study was to identify subchondral-specific equations that optimized finite element predictions of local structural stiffness at the proximal tibial subchondral surface. Thirteen proximal tibial compartments were imaged via quantitative computed tomography. Imaged bone mineral density was converted to elastic moduli using multiple density-modulus equations (93 total variations) then mapped to corresponding finite element models. For each variation, root mean squared error was calculated between finite element prediction and in situ measured stiffness at 47 indentation sites. Resulting errors were used to train an artificial neural network, which provided an unlimited number of model variations, with corresponding error, for predicting stiffness at the subchondral bone surface. Nelder-Mead optimization was used to identify optimum density-modulus equations for predicting stiffness. Finite element modeling predicted 81% of experimental stiffness variance (with 10.5% error) using optimized equations for subchondral cortical and trabecular bone differentiated with a 0.5 g/cm3 density. In comparison with published density-modulus relationships, optimized equations offered improved predictions of local subchondral structural stiffness. Further research is needed with anisotropy inclusion, a smaller voxel size and de-blurring algorithms to improve predictions.
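A minimal sketch of the back-calculation loop: Nelder-Mead searches coefficients of a power-law density-modulus relation (separately for cortical and trabecular bone, split at 0.5 g/cm3) that minimize the RMSE against measured stiffness. The surrogate function below stands in for a full finite element solve.

```python
# Minimal sketch: Nelder-Mead optimization of density-modulus coefficients
# (E = a * rho**b for cortical and trabecular bone) against measured stiffness.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
rho = rng.uniform(0.2, 1.2, size=47)                          # density at indentation sites, g/cm3
measured = 4000.0 * rho**1.6 + rng.normal(0, 150, size=47)    # placeholder measured stiffness

def fe_surrogate(rho, a_cort, b_cort, a_trab, b_trab):
    cortical = rho >= 0.5                        # 0.5 g/cm3 split from the study
    return np.where(cortical, a_cort * rho**b_cort, a_trab * rho**b_trab)  # stand-in for FE output

def rmse(params):
    pred = fe_surrogate(rho, *params)
    return np.sqrt(np.mean((pred - measured) ** 2))

result = minimize(rmse, x0=[3000, 1.5, 2000, 1.8], method="Nelder-Mead")
print(result.x, result.fun)
```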
NASA Astrophysics Data System (ADS)
Reich, Felix A.; Rickert, Wilhelm; Müller, Wolfgang H.
2018-03-01
This study investigates the implications of various electromagnetic force models in macroscopic situations. There is an ongoing academic discussion about which model is "correct," i.e., generally applicable. Often, gedankenexperiments with light waves or photons are used in order to motivate certain models. In this work, three problems with bodies at the macroscopic scale are used for computing theoretical model-dependent predictions. Two aspects are considered: total forces between bodies and local deformations. By comparing with experimental data, insight is gained regarding the applicability of the models. First, the total force between two cylindrical magnets is computed. Then a spherical magnetostriction problem is considered to show different deformation predictions. As a third example, focusing on local deformations, a droplet of silicone oil in castor oil is considered, placed in a homogeneous electric field. By using experimental data, some conclusions are drawn and further work is motivated.
Polychlorinated Biphenyl (PCB) Bioaccumulation in Fish: A Look at Michigan's Upper Peninsula
NASA Astrophysics Data System (ADS)
Sokol, E. C.; Urban, N. R.; Perlinger, J. A.; Khan, T.; Friedman, C. L.
2014-12-01
Fish consumption is an important economic, social and cultural component of Michigan's Upper Peninsula, where safe fish consumption is threatened due to polychlorinated biphenyl (PCB) contamination. Despite its predominantly rural nature, the Upper Peninsula has a history of industrial PCB use. PCB congener concentrations in fish vary 50-fold in Upper Peninsula lakes. Several factors may contribute to this high variability, including local point sources, unique watershed and lake characteristics, and food web structure. It was hypothesized that the variability in congener distributions could be used to identify factors controlling concentrations in fish, and then to use those factors to predict PCB contamination in fish from lakes that had not been monitored. Watershed and lake characteristics were acquired from several databases for 16 lakes sampled in the State's fish contaminant survey. Species congener distributions were compared using Principal Component Analysis (PCA) to distinguish between lakes with local vs. regional, atmospheric sources; six lakes were predicted to have local sources and half of those have confirmed local PCB use. For lakes without local PCB sources, PCA indicated that lake size was the primary factor influencing PCB concentrations. The EPA's bioaccumulation model, BASS, was used to predict PCB contamination in lakes without local sources as a function of food web characteristics. The model was used to evaluate the hypothesis that deep, oligotrophic lakes have longer food webs and higher PCB concentrations in top predator fish. Based on these findings, we will develop a mechanistic watershed-lake model to predict PCB concentrations in fish as a function of atmospheric PCB concentrations, lake size, and trophic state. Future atmospheric concentrations, predicted by modeling potential primary and secondary emission scenarios, will be used to predict the time horizon for safe fish consumption.
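The source-screening step amounts to a PCA of lake-by-congener profiles; a minimal sketch with a placeholder data matrix:

```python
# Minimal sketch: PCA on lake-by-congener concentration profiles (normalized to
# proportions) to separate lakes with local sources from atmospherically dominated lakes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
congeners = rng.lognormal(mean=0.0, sigma=1.0, size=(16, 30))   # 16 lakes x 30 congeners (placeholder)
profiles = congeners / congeners.sum(axis=1, keepdims=True)     # congener proportions

scores = PCA(n_components=2).fit_transform(profiles)
print(scores[:5])   # lakes separating along PC1/PC2 suggest distinct source signatures
```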
Improved model quality assessment using ProQ2.
Ray, Arjun; Lindahl, Erik; Wallner, Björn
2012-09-10
Employing methods to assess the quality of modeled protein structures is now standard practice in bioinformatics. In a broad sense, the techniques can be divided into methods relying on consensus prediction on the one hand, and single-model methods on the other. Consensus methods frequently perform very well when there is a clear consensus, but this is not always the case. In particular, they frequently fail in selecting the best possible model in the hard cases (lacking consensus) or in the easy cases where models are very similar. In contrast, single-model methods do not suffer from these drawbacks and could potentially be applied to any protein of interest to assess quality or as a scoring function for sampling-based refinement. Here, we present a new single-model method, ProQ2, based on ideas from its predecessor, ProQ. ProQ2 is a model quality assessment algorithm that uses support vector machines to predict local as well as global quality of protein models. Improved performance is obtained by combining previously used features with updated structural and predicted features. The most important contribution can be attributed to the use of profile weighting of the residue-specific features and the use of features averaged over the whole model, even though the prediction is still local. ProQ2 is significantly better than its predecessors at detecting high-quality models, improving the sum of Z-scores for the selected first-ranked models by 20% and 32% compared to the second-best single-model method in CASP8 and CASP9, respectively. The absolute quality assessment of the models at both local and global level is also improved. The Pearson's correlation between the correct and locally predicted score is improved from 0.59 to 0.70 on CASP8 and from 0.62 to 0.68 on CASP9; for the global score against the correct GDT_TS, it improves from 0.75 to 0.80 and from 0.77 to 0.80, again compared to the second-best single-model methods in CASP8 and CASP9, respectively. ProQ2 is available at http://proq2.wallnerlab.org.
Piatkowski, Pawel; Kasprzak, Joanna M; Kumar, Deepak; Magnus, Marcin; Chojnowski, Grzegorz; Bujnicki, Janusz M
2016-01-01
RNA encompasses an essential part of all known forms of life. The functions of many RNA molecules are dependent on their ability to form complex three-dimensional (3D) structures. However, experimental determination of RNA 3D structures is laborious and challenging, and therefore, the majority of known RNAs remain structurally uncharacterized. To address this problem, computational structure prediction methods were developed that either utilize information derived from known structures of other RNA molecules (by way of template-based modeling) or attempt to simulate the physical process of RNA structure formation (by way of template-free modeling). All computational methods suffer from various limitations that make theoretical models less reliable than high-resolution experimentally determined structures. This chapter provides a protocol for computational modeling of RNA 3D structure that overcomes major limitations by combining two complementary approaches: template-based modeling that is capable of predicting global architectures based on similarity to other molecules but often fails to predict local unique features, and template-free modeling that can predict the local folding, but is limited to modeling the structure of relatively small molecules. Here, we combine the use of a template-based method ModeRNA with a template-free method SimRNA. ModeRNA requires a sequence alignment of the target RNA sequence to be modeled with a template of the known structure; it generates a model that predicts the structure of a conserved core and provides a starting point for modeling of variable regions. SimRNA can be used to fold small RNAs (<80 nt) without any additional structural information, and to refold parts of models for larger RNAs that have a correctly modeled core. ModeRNA can be either downloaded, compiled and run locally or run through a web interface at http://genesilico.pl/modernaserver/ . SimRNA is currently available to download for local use as a precompiled software package at http://genesilico.pl/software/stand-alone/simrna and as a web server at http://genesilico.pl/SimRNAweb . For model optimization we use QRNAS, available at http://genesilico.pl/qrnas .
Modelling daily PM2.5 concentrations at high spatio-temporal resolution across Switzerland.
de Hoogh, Kees; Héritier, Harris; Stafoggia, Massimo; Künzli, Nino; Kloog, Itai
2018-02-01
Spatiotemporally resolved models were developed predicting daily fine particulate matter (PM2.5) concentrations across Switzerland from 2003 to 2013. Relatively sparse PM2.5 monitoring data were supplemented by imputing PM2.5 concentrations at PM10 sites, using PM2.5/PM10 ratios at co-located sites. Daily PM2.5 concentrations were first estimated at a 1 × 1 km resolution across Switzerland, using Multiangle Implementation of Atmospheric Correction (MAIAC) spectral aerosol optical depth (AOD) data in combination with spatiotemporal predictor data in a four-stage approach. Mixed effect models (1) were used to predict PM2.5 in cells with AOD but without PM2.5 measurements (2). A generalized additive mixed model with spatial smoothing was applied to generate grid cell predictions for those grid cells where AOD was missing (3). Finally, local PM2.5 predictions were estimated at each monitoring site by regressing the residuals from the 1 × 1 km estimate against local spatial and temporal variables using machine learning techniques (4) and adding them to the stage 3 global estimates. The global (1 km) and local (100 m) models explained on average 73% of the total, 71% of the spatial and 75% of the temporal variation (all cross-validated) globally, and on average 89% (total), 95% (spatial) and 88% (temporal) of the variation locally in measured PM2.5 concentrations.
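The fourth, local stage can be sketched as a machine-learning regression of site residuals on local predictors, added back to the 1 km estimate. The file and column names below are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of a stage-4 local correction: residuals of the 1 km global PM2.5
# estimate at monitoring sites are regressed on local spatial/temporal predictors and
# the predicted correction is added back on a fine grid.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

sites = pd.read_csv("pm25_sites.csv")           # hypothetical site-day records
sites["residual"] = sites["pm25_measured"] - sites["pm25_global_1km"]

local_predictors = ["local_road_length", "local_population", "elevation", "day_of_year"]
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(sites[local_predictors], sites["residual"])

grid = pd.read_csv("pm25_grid_100m.csv")        # hypothetical 100 m prediction grid
grid["pm25_local"] = grid["pm25_global_1km"] + rf.predict(grid[local_predictors])
```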
Patterns and multi-scale drivers of phytoplankton species richness in temperate peri-urban lakes.
Catherine, Arnaud; Selma, Maloufi; Mouillot, David; Troussellier, Marc; Bernard, Cécile
2016-07-15
Local species richness (SR) is a key characteristic affecting ecosystem functioning. Yet, the mechanisms regulating phytoplankton diversity in freshwater ecosystems are not fully understood, especially in peri-urban environments where anthropogenic pressures strongly impact the quality of aquatic ecosystems. To address this issue, we sampled the phytoplankton communities of 50 lakes in the Paris area (France) characterized by a large gradient of physico-chemical and catchment-scale characteristics. We used large phytoplankton datasets to describe phytoplankton diversity patterns and applied a machine-learning algorithm to test the degree to which species richness patterns are potentially controlled by environmental factors. Selected environmental factors were studied at two scales: the lake scale (e.g. nutrient concentrations, water temperature, lake depth) and the catchment scale (e.g. catchment, landscape and climate variables). Then, we used a variance partitioning approach to evaluate the interaction between lake-scale and catchment-scale variables in explaining local species richness. Finally, we analysed the residuals of predictive models to identify potential vectors of improvement of phytoplankton species richness predictive models. Lake-scale and catchment-scale drivers provided similar predictive accuracy of local species richness (R^2 = 0.458 and 0.424, respectively). Both models suggested that seasonal temperature variations and nutrient supply strongly modulate local species richness. Integrating lake- and catchment-scale predictors in a single predictive model did not provide increased predictive accuracy, therefore suggesting that the catchment-scale model probably explains observed species richness variations through the impact of catchment-scale variables on in-lake water quality characteristics. Models based on catchment characteristics, which include simple and easy-to-obtain variables, provide a meaningful way of predicting phytoplankton species richness in temperate lakes. This approach may prove useful and cost-effective for the management and conservation of aquatic ecosystems.
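The variance-partitioning step can be sketched by comparing R^2 values from lake-scale, catchment-scale, and combined regressions; the table and column names below are illustrative assumptions.

```python
# Minimal sketch of variance partitioning: R^2 explained by lake-scale predictors,
# catchment-scale predictors, and both together; the shared fraction follows from
# the three fits.
import pandas as pd
from sklearn.linear_model import LinearRegression

lakes = pd.read_csv("lakes.csv")                         # hypothetical 50-lake table
lake_vars = ["total_p", "water_temp", "depth"]
catch_vars = ["agri_fraction", "urban_fraction", "catchment_area"]

def r2(cols):
    model = LinearRegression().fit(lakes[cols], lakes["species_richness"])
    return model.score(lakes[cols], lakes["species_richness"])

r2_lake, r2_catch, r2_both = r2(lake_vars), r2(catch_vars), r2(lake_vars + catch_vars)
shared = r2_lake + r2_catch - r2_both
print(r2_lake, r2_catch, r2_both, shared)
```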
Zhou, Jiyun; Wang, Hongpeng; Zhao, Zhishan; Xu, Ruifeng; Lu, Qin
2018-05-08
Protein secondary structure is the three-dimensional form of local segments of proteins, and its prediction is an important problem in protein tertiary structure prediction. Developing computational approaches for protein secondary structure prediction is becoming increasingly urgent. We present a novel deep learning based model, referred to as CNNH_PSS, using a multi-scale CNN with highway connections. In CNNH_PSS, any two neighboring convolutional layers have a highway that delivers information from the current layer to the output of the next one, preserving local contexts. As lower layers extract local context while higher layers extract long-range interdependencies, the highways between neighboring layers allow CNNH_PSS to extract both local contexts and long-range interdependencies. We evaluate CNNH_PSS on two commonly used datasets: CB6133 and CB513. CNNH_PSS outperforms the multi-scale CNN without highway connections by at least 0.010 Q8 accuracy and also performs better than CNF, DeepCNF and SSpro8, which cannot extract long-range interdependencies, by at least 0.020 Q8 accuracy, demonstrating that both local contexts and long-range interdependencies are indeed useful for prediction. Furthermore, CNNH_PSS also performs better than GSM and DCRNN, which need an extra complex model to extract long-range interdependencies. This demonstrates that CNNH_PSS not only costs less computational resource but also achieves better predictive performance. CNNH_PSS is able to extract both local contexts and long-range interdependencies by combining a multi-scale CNN with a highway network. The evaluations on common datasets and comparisons with state-of-the-art methods indicate that CNNH_PSS is a useful and efficient tool for protein secondary structure prediction.
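A minimal sketch of the architectural idea (not the published CNNH_PSS code): 1-D convolutions whose outputs are mixed with their inputs through a highway gate, so local context from lower layers is carried forward. The channel counts and kernel sizes below are assumptions for illustration.

```python
# Minimal sketch of a multi-scale 1-D CNN with highway gates between neighboring
# layers, applied to one-hot encoded residues for Q8 secondary-structure logits.
import torch
import torch.nn as nn

class HighwayConvBlock(nn.Module):
    def __init__(self, channels, kernel_size):
        super().__init__()
        pad = kernel_size // 2
        self.conv = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.gate = nn.Conv1d(channels, channels, kernel_size, padding=pad)

    def forward(self, x):                       # x: (batch, channels, seq_len)
        h = torch.relu(self.conv(x))
        t = torch.sigmoid(self.gate(x))         # transform gate
        return t * h + (1.0 - t) * x            # highway: carry the layer input forward

model = nn.Sequential(
    nn.Conv1d(21, 64, kernel_size=1),           # 21 = one-hot amino-acid channels (assumed)
    HighwayConvBlock(64, kernel_size=3),        # local context
    HighwayConvBlock(64, kernel_size=7),        # longer-range context
    nn.Conv1d(64, 8, kernel_size=1),            # 8 = Q8 secondary-structure classes
)
logits = model(torch.randn(2, 21, 100))         # (batch, classes, residues)
print(logits.shape)
```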
Dussaillant, Francisca; Apablaza, Mauricio
2017-08-01
After a major earthquake, the assignment of scarce mental health emergency personnel to different geographic areas is crucial to the effective management of the crisis. The scarce information that is available in the aftermath of a disaster may be valuable in helping predict where the populations in most need are located. The objectives of this study were to derive algorithms to predict posttraumatic stress (PTS) symptom prevalence and local distribution after an earthquake and to test whether there are algorithms that require few input data and are still reasonably predictive. A rich database of PTS symptoms, collected after Chile's 2010 earthquake and tsunami, was used. Several model specifications for the mean and centiles of the distribution of PTS symptoms, together with posttraumatic stress disorder (PTSD) prevalence, were estimated via linear and quantile regressions. The models varied in the set of covariates included. Adjusted R2 for the most liberal specifications (in terms of the number of covariates included) ranged from 0.62 to 0.74, depending on the outcome. When only including peak ground acceleration (PGA), poverty rate, and household damage in linear and quadratic form, predictive capacity was still good (adjusted R2 from 0.59 to 0.67 were obtained). Information about local poverty, household damage, and PGA can be used as an aid to predict PTS symptom prevalence and local distribution after an earthquake. This can help improve the assignment of mental health personnel to the affected localities. Dussaillant F, Apablaza M. Predicting posttraumatic stress symptom prevalence and local distribution after an earthquake with scarce data. Prehosp Disaster Med. 2017;32(4):357-367.
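The centile models correspond to quantile regressions of the symptom score on PGA, poverty and damage with quadratic terms; a minimal sketch on simulated data (the variable names and data frame are placeholders):

```python
# Minimal sketch of a centile model: quantile regression of a PTS symptom score on
# PGA, poverty rate and household damage with quadratic terms, on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "pts": rng.normal(10, 3, 500),
    "pga": rng.uniform(0.1, 0.8, 500),
    "poverty": rng.uniform(0, 40, 500),
    "damage": rng.uniform(0, 1, 500),
})

formula = "pts ~ pga + I(pga**2) + poverty + I(poverty**2) + damage + I(damage**2)"
median_fit = smf.quantreg(formula, df).fit(q=0.5)     # repeat for other centiles
print(median_fit.params)
```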
Development of a Localized Low-Dimensional Approach to Turbulence Simulation
NASA Astrophysics Data System (ADS)
Juttijudata, Vejapong; Rempfer, Dietmar; Lumley, John
2000-11-01
Our previous study has shown that the localized low-dimensional model derived from a projection of the Navier-Stokes equations onto a set of one-dimensional scalar POD modes, with boundary conditions at y^+=40, can predict wall turbulence accurately for short times while failing to give a stable long-term solution. The structures obtained from the model and later studies suggest that our boundary conditions from DNS are not consistent with the solution from the localized model, resulting in an injection of energy at the top boundary. In the current study, we develop low-dimensional models using one-dimensional scalar POD modes derived from an explicitly filtered DNS. This model problem has exact no-slip boundary conditions at both walls while the locality of the wall layer is still retained. Furthermore, the interaction between the wall and core regions is attenuated via an explicit filter, which allows us to investigate the quality of the model without requiring complicated modeling of the top boundary conditions. The full-channel model gives reasonable wall turbulence structures as well as long-term turbulent statistics, while still having difficulty with the prediction of the mean velocity profile farther from the wall. We also consider a localized model with modified boundary conditions in the last part of our study.
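The POD step underlying such models can be sketched with an SVD of mean-subtracted snapshots; the snapshot matrix below is a random placeholder rather than channel-flow DNS data.

```python
# Minimal sketch of POD: snapshots of a scalar field are decomposed with an SVD and
# the leading spatial modes are kept for a low-dimensional model.
import numpy as np

rng = np.random.default_rng(6)
snapshots = rng.normal(size=(128, 400))        # 128 wall-normal points x 400 time samples
mean_field = snapshots.mean(axis=1, keepdims=True)

U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
n_modes = 10
pod_modes = U[:, :n_modes]                         # spatial POD modes
amplitudes = np.diag(s[:n_modes]) @ Vt[:n_modes]   # temporal coefficients
energy_captured = (s[:n_modes] ** 2).sum() / (s ** 2).sum()
print(energy_captured)
```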
Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel
2016-01-01
When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
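A minimal sketch of the normalization step: a target-modulated feature map is divided by locally pooled activity (divisive inhibition) and the maximum of the result is taken as the next locus of attention. The feature maps and target weights below are random placeholders.

```python
# Minimal sketch of a priority map with divisive normalization: target-weighted
# feature drive divided by pooled local activity, with the argmax as the next fixation.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(7)
features = rng.random((4, 64, 64))               # 4 feature channels over a retinotopic map
target_weights = np.array([1.5, 0.2, 1.0, 0.1])  # top-down, target-specific modulation

drive = np.tensordot(target_weights, features, axes=1)        # modulated local input
pooled = uniform_filter(features.sum(axis=0), size=9) + 1e-3  # local pooled activity
priority = drive / pooled                                     # divisive normalization
locus = np.unravel_index(np.argmax(priority), priority.shape)
print(locus)
```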
Capabilities of current wildfire models when simulating topographical flow
NASA Astrophysics Data System (ADS)
Kochanski, A.; Jenkins, M.; Krueger, S. K.; McDermott, R.; Mell, W.
2009-12-01
Accurate predictions of the growth, spread and suppression of wildfires rely heavily on the correct prediction of the local wind conditions and the interactions between the fire and the local ambient airflow. Resolving local flows, often strongly affected by topographical features like hills, canyons and ridges, is a prerequisite for accurate simulation and prediction of fire behavior. In this study, we present the results of high-resolution numerical simulations of the flow over a smooth hill, performed using (1) the NIST WFDS (WUI or Wildland-Urban-Interface version of the FDS or Fire Dynamics Simulator), and (2) the LES version of the NCAR Weather Research and Forecasting (WRF-LES) model. The WFDS model is in the initial stages of development for application to wind flow and fire spread over complex terrain. The focus of the talk is to assess how well simple topographical flow is represented by WRF-LES and the current version of WFDS. If sufficient progress has been made prior to the meeting, the importance of the discrepancies between the predicted and measured winds, in terms of simulated fire behavior, will be examined.
Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli.
Necciari, Thibaud; Laback, Bernhard; Savel, Sophie; Ystad, Sølvi; Balazs, Peter; Meunier, Sabine; Kronland-Martinet, Richard
2016-01-01
Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This involves having masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, a dataset provides the measured TF masking function. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. Such effects were avoided in the experiment of this study by using maximally-compact stimuli.
Development of a recursion RNG-based turbulence model
NASA Technical Reports Server (NTRS)
Zhou, YE; Vahala, George; Thangam, S.
1993-01-01
Reynolds stress closure models based on the recursion renormalization group theory are developed for the prediction of turbulent separated flows. The proposed model uses a finite wavenumber truncation scheme to account for the spectral distribution of energy. In particular, the model incorporates effects of both local and nonlocal interactions. The nonlocal interactions are shown to yield a contribution identical to that from the epsilon-renormalization group (RNG), while the local interactions introduce higher order dispersive effects. A formal analysis of the model is presented and its ability to accurately predict separated flows is analyzed from a combined theoretical and computational stand point. Turbulent flow past a backward facing step is chosen as a test case and the results obtained based on detailed computations demonstrate that the proposed recursion -RNG model with finite cut-off wavenumber can yield very good predictions for the backstep problem.
Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation
NASA Astrophysics Data System (ADS)
Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.
2016-12-01
With the growing impacts of climate change and human activities on the cycle of water resources, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining the predictions of each plausible model, and each model is assigned a weight determined by its prior weight and marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE proceeds by gradually searching the parameter space from low-likelihood to high-likelihood regions, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, we incorporate the robust and efficient DREAMzs sampling algorithm into the local sampling step of NSE. The comparison results demonstrate that the improved NSE increases the efficiency of marginal likelihood estimation significantly. However, both the improved and the original NSEs suffer from considerable instability. In addition, the heavy computational cost of the large number of model executions is mitigated by using adaptive sparse-grid surrogates.
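A self-contained toy sketch of a nested sampling estimator on a 2-D Gaussian likelihood with a uniform prior, where the constrained "local sampling" step is done by plain rejection sampling; this is the step the abstract proposes to strengthen with an MCMC sampler such as M-H or DREAMzs. The toy problem and all settings are assumptions for illustration only.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(2)

def loglike(theta):
    # Toy 2-D Gaussian likelihood centred in a unit-square uniform prior
    return -0.5 * np.sum(((theta - 0.5) / 0.1) ** 2) - np.log(2 * np.pi * 0.1 ** 2)

def nested_sampling(n_live=200, n_iter=1500):
    live = rng.random((n_live, 2))
    live_logl = np.array([loglike(t) for t in live])
    logz, log_x_prev = -np.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logl)
        log_x = -i / n_live                                 # expected prior-mass shrinkage
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))  # width of this likelihood shell
        logz = np.logaddexp(logz, live_logl[worst] + log_w)
        # Local sampling step: draw a new prior point with likelihood above the
        # current threshold (plain rejection sampling in this low-dimensional toy).
        threshold = live_logl[worst]
        while True:
            cand = rng.random(2)
            cand_logl = loglike(cand)
            if cand_logl > threshold:
                break
        live[worst], live_logl[worst] = cand, cand_logl
        log_x_prev = log_x
    # Contribution of the remaining live points
    logz = np.logaddexp(logz, logsumexp(live_logl) - np.log(n_live) + log_x_prev)
    return logz

print(nested_sampling())   # should be close to 0.0, the exact log-evidence of this toy problem
```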
Weather Research and Forecasting Model Wind Sensitivity Study at Edwards Air Force Base, CA
NASA Technical Reports Server (NTRS)
Watson, Leela R.; Bauman, William H., III; Hoeth, Brian
2009-01-01
This abstract describes work that will be done by the Applied Meteorology Unit (AMU) in assessing the success of different model configurations in predicting "wind cycling" cases at Edwards Air Force Base, CA (EAFB), in which the wind speeds and directions oscillate among towers near the EAFB runway. The Weather Research and Forecasting (WRF) model allows users to choose between two dynamical cores - the Advanced Research WRF (ARW) and the Non-hydrostatic Mesoscale Model (NMM). There are also two data assimilation analysis packages available for the initialization of the WRF model - the Local Analysis and Prediction System (LAPS) and the Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS). Having a choice of initialization options and WRF cores, as well as many options within each core, creates challenges for local forecasters, such as determining which configuration options are best to address specific forecast concerns. The goal of this project is to assess the different configurations available and determine which configuration will best predict surface wind speed and direction at EAFB.
Previous modelling of the median lethal dose (oral rat LD50) has indicated that local class-based models yield better correlations than global models. We evaluated the hypothesis that dividing the dataset by pesticidal mechanisms would improve prediction accuracy. A linear discri...
Computational Models Predict Larger Muscle Tissue Strains at Faster Sprinting Speeds
Fiorentino, Niccolo M; Rehorn, Michael R; Chumanov, Elizabeth S; Thelen, Darryl G; Blemker, Silvia S
2014-01-01
Introduction: Proximal biceps femoris musculotendon strain injury has been well established as a common injury among athletes participating in sports that require sprinting near or at maximum speed; however, little is known about the mechanisms that make this muscle tissue more susceptible to injury at faster speeds. Purpose: Quantify localized tissue strain during sprinting at a range of speeds. Methods: Biceps femoris long head (BFlh) musculotendon dimensions of 14 athletes were measured on magnetic resonance (MR) images and used to generate a finite element computational model. The model was first validated through comparison with previous dynamic MR experiments. After validation, muscle activation and muscle-tendon unit length change were derived from forward dynamic simulations of sprinting at 70%, 85% and 100% maximum speed and used as input to the computational model simulations. Simulations ran from mid-swing to foot contact. Results: The model predictions of local muscle tissue strain magnitude compared favorably with in vivo tissue strain measurements determined from dynamic MR experiments of the BFlh. For simulations of sprinting, local fiber strain was non-uniform at all speeds, with the highest muscle tissue strain where injury is often observed (proximal myotendinous junction). At faster sprinting speeds, increases were observed in fiber strain non-uniformity and peak local fiber strain (0.56, 0.67 and 0.72, for sprinting at 70%, 85% and 100% maximum speed). A histogram of local fiber strains showed that more of the BFlh reached larger local fiber strains at faster speeds. Conclusions: At faster sprinting speeds, peak local fiber strain, fiber strain non-uniformity and the amount of muscle undergoing larger strains are predicted to increase, likely contributing to the BFlh muscle’s higher injury susceptibility at faster speeds. PMID:24145724
Deep Visual Attention Prediction
NASA Astrophysics Data System (ADS)
Wang, Wenguan; Shen, Jianbing
2018-05-01
In this work, we aim to predict human eye fixation with view-free scenes based on an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have made substantial improvements in human attention prediction, there is still a need to improve CNN-based attention models by efficiently leveraging multi-scale features. Our visual attention network is proposed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency responses. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of these global and local predictions. Our model is learned in a deep-supervision manner, where supervision is fed directly into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
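A minimal PyTorch sketch in the spirit of the skip-layer idea described above: saliency maps are predicted from several convolutional stages with different receptive fields, upsampled to the input resolution, and fused; under deep supervision a loss would be attached to each per-stage map during training. The layer sizes are arbitrary and this is not the authors' network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipLayerSaliency(nn.Module):
    """Toy skip-layer saliency net: per-stage predictions fused into one map."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head1 = nn.Conv2d(16, 1, 1)   # shallow stage: local saliency response
        self.head2 = nn.Conv2d(32, 1, 1)
        self.head3 = nn.Conv2d(64, 1, 1)   # deep stage: global saliency information

    def forward(self, x):
        h, w = x.shape[-2:]
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        # Predict a map from each stage and upsample to the input resolution
        maps = [F.interpolate(head(f), size=(h, w), mode="bilinear", align_corners=False)
                for head, f in [(self.head1, f1), (self.head2, f2), (self.head3, f3)]]
        fused = torch.sigmoid(sum(maps) / len(maps))   # cooperation of global and local predictions
        return fused, maps

model = SkipLayerSaliency()
saliency, side_outputs = model(torch.rand(1, 3, 128, 128))
print(saliency.shape, len(side_outputs))
```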
Prediction of resource volumes at untested locations using simple local prediction models
Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.
2006-01-01
This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
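A compact numpy sketch of one way to combine a simple local spatial predictor with jackknife replicates and bootstrap resampling to obtain confidence bounds on a regional total. The nearest-neighbour predictor and the synthetic sites are stand-ins for illustration, not the play-specific model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_predict(train_xy, train_vol, target_xy, k=5):
    """Simple local spatial predictor: mean volume of the k nearest drilled sites."""
    d = np.linalg.norm(train_xy[None, :, :] - target_xy[:, None, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return train_vol[nearest].mean(axis=1)

# Synthetic drilled (training) and undrilled (target) sites with a spatial trend
train_xy = rng.random((80, 2))
train_vol = 5 + 3 * train_xy[:, 0] + rng.normal(0, 1, 80)
target_xy = rng.random((40, 2))

point_estimates = local_predict(train_xy, train_vol, target_xy)

# Jackknife: re-predict the targets leaving out one drilled site at a time
jack_totals = np.array([
    local_predict(np.delete(train_xy, i, axis=0), np.delete(train_vol, i), target_xy).sum()
    for i in range(len(train_vol))
])

# Bootstrap the jackknife replicates of the regional total to get confidence bounds
boot = rng.choice(jack_totals, size=(2000, len(jack_totals)), replace=True).mean(axis=1)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(point_estimates.sum(), (lo, hi))
```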
Popilski, Hen; Stepensky, David
2015-05-01
Solid tumors are characterized by complex morphology. Numerous factors relating to the composition of the cells and tumor stroma, vascularization and drainage of fluids affect the local microenvironment within a specific location inside the tumor. As a result, the intratumoral drug/drug delivery system (DDS) disposition following systemic or local administration is non-homogeneous and its complexity reflects the differences in the local microenvironment. Mathematical models can be used to analyze the intratumoral drug/DDS disposition and pharmacological effects and to assist in choice of optimal anticancer treatment strategies. The mathematical models that have been applied by different research groups to describe the intratumoral disposition of anticancer drugs/DDSs are summarized in this article. The properties of these models and of their suitability for prediction of the drug/DDS intratumoral disposition and pharmacological effects are reviewed. Currently available mathematical models appear to neglect some of the major factors that govern the drug/DDS intratumoral disposition, and apparently possess limited prediction capabilities. More sophisticated and detailed mathematical models and their extensive validation are needed for reliable prediction of different treatment scenarios and for optimization of drug treatment in the individual cancer patients.
Sherlock, M.; Brodrick, J. P.; Ridgers, C. P.
2017-08-08
Here, we compare the reduced non-local electron transport model to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high density region into a lower density gas, and a one-dimensional hohlraum ablation problem. We find that the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different from the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between the models in the coronal region.
Camproux, A C; Tufféry, P
2005-08-05
Understanding and predicting protein structures depend on the complexity and the accuracy of the models used to represent them. We have recently set up a Hidden Markov Model to optimally compress protein three-dimensional conformations into a one-dimensional series of letters of a structural alphabet. Such a model learns simultaneously the shape of representative structural letters describing the local conformation and the logic of their connections, i.e. the transition matrix between the letters. Here, we move one step further and report some evidence that such a model of protein local architecture also captures some accurate amino acid features. All the letters have specific and distinct amino acid distributions. Moreover, we show that words of amino acids can have significant propensities for some letters. Perspectives point towards the prediction of the series of letters describing the structure of a protein from its amino acid sequence.
The Green’s functions for peridynamic non-local diffusion
Wang, L. J.; Xu, J. F.
2016-01-01
In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658
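For readers who want the governing relations in symbols, here is a hedged sketch in generic notation (not necessarily the paper's) of a peridynamic non-local diffusion equation over a horizon H_x, together with the Green's-function representation of its solution for a source s, assuming homogeneous initial and volume-constraint conditions; as the non-local length shrinks to zero, the kernel integral reduces to the classical diffusion operator.

```latex
\frac{\partial u(\mathbf{x},t)}{\partial t}
  = \int_{H_{\mathbf{x}}} K(\mathbf{x}',\mathbf{x})
      \bigl[\,u(\mathbf{x}',t)-u(\mathbf{x},t)\,\bigr]\,\mathrm{d}V_{\mathbf{x}'}
    + s(\mathbf{x},t),
\qquad
u(\mathbf{x},t)
  = \int_{0}^{t}\!\!\int_{\Omega}
      G(\mathbf{x}-\mathbf{x}',\,t-t')\,s(\mathbf{x}',t')\,
      \mathrm{d}V_{\mathbf{x}'}\,\mathrm{d}t'.
```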
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
NASA Astrophysics Data System (ADS)
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: Uncertainty in HETT is relatively small for early times and increases with transit times. Uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution. Introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT; however, an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model. This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.
Michael, Edwin; Singh, Brajendra K; Mayala, Benjamin K; Smith, Morgan E; Hampton, Scott; Nabrzyski, Jaroslaw
2017-09-27
There are growing demands for predicting the prospects of achieving the global elimination of neglected tropical diseases as a result of the institution of large-scale nation-wide intervention programs by the WHO-set target year of 2020. Such predictions will be uncertain due to the impacts that spatial heterogeneity and scaling effects will have on parasite transmission processes, which will introduce significant aggregation errors into any attempt aiming to predict the outcomes of interventions at the broader spatial levels relevant to policy making. We describe a modeling platform that addresses this problem of upscaling from local settings to facilitate predictions at regional levels by the discovery and use of locality-specific transmission models, and we illustrate the utility of using this approach to evaluate the prospects for eliminating the vector-borne disease, lymphatic filariasis (LF), in sub-Saharan Africa by the WHO target year of 2020 using currently applied or newly proposed intervention strategies. METHODS AND RESULTS: We show how a computational platform that couples site-specific data discovery with model fitting and calibration can allow both learning of local LF transmission models and simulations of the impact of interventions that take a fuller account of the fine-scale heterogeneous transmission of this parasitic disease within endemic countries. We highlight how such a spatially hierarchical modeling tool that incorporates actual data regarding the roll-out of national drug treatment programs and spatial variability in infection patterns into the modeling process can produce more realistic predictions of timelines to LF elimination at coarse spatial scales, ranging from district to country to continental levels. Our results show that when locally applicable extinction thresholds are used, only three countries are likely to meet the goal of LF elimination by 2020 using currently applied mass drug treatments, and that switching to more intensive drug regimens, increasing the frequency of treatments, or switching to new triple drug regimens will be required if LF elimination is to be accelerated in Africa. The proportion of countries that would meet the goal of eliminating LF by 2020 may, however, reach up to 24/36 if the WHO 1% microfilaremia prevalence threshold is used and sequential mass drug deliveries are applied in countries. We have developed and applied a data-driven spatially hierarchical computational platform that uses the discovery of locally applicable transmission models in order to predict the prospects for eliminating the macroparasitic disease, LF, at the coarser country level in sub-Saharan Africa. We show that fine-scale spatial heterogeneity in local parasite transmission and extinction dynamics, as well as the exact nature of intervention roll-outs in countries, will impact the timelines to achieving national LF elimination on this continent.
Theoretical Prediction of Magnetism in C-doped TlBr
NASA Astrophysics Data System (ADS)
Zhou, Yuzhi; Haller, E. E.; Chrzan, D. C.
2014-05-01
We predict that C, N, and O dopants in TlBr can display large, localized magnetic moments. Density functional theory based electronic structure calculations show that the moments arise from partial filling of the crystal-field-split localized p states of the dopant atoms. A simple model is introduced to explain the magnitude of the moments.
lazar: a modular predictive toxicology framework
Maunz, Andreas; Gütlein, Martin; Rautenberg, Micha; Vorgrimmler, David; Gebele, Denis; Helma, Christoph
2013-01-01
lazar (lazy structure–activity relationships) is a modular framework for predictive toxicology. Similar to the read across procedure in toxicological risk assessment, lazar creates local QSAR (quantitative structure–activity relationship) models for each compound to be predicted. Model developers can choose between a large variety of algorithms for descriptor calculation and selection, chemical similarity indices, and model building. This paper presents a high level description of the lazar framework and discusses the performance of example classification and regression models. PMID:23761761
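A minimal sketch of the read-across idea behind such local models: for each query compound, training compounds are weighted by a chemical similarity index (a Tanimoto coefficient on binary fingerprints here) and a local prediction is built from the most similar neighbours. The fingerprints and activities below are synthetic placeholders, and this is not the lazar implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

def tanimoto(query_fp, fps):
    """Tanimoto similarity between one binary fingerprint and each row of fps."""
    inter = (fps & query_fp).sum(axis=1)
    union = (fps | query_fp).sum(axis=1)
    return inter / np.maximum(union, 1)

# Synthetic binary fingerprints (compounds x bits) and a continuous activity
fps = rng.integers(0, 2, size=(200, 64))
activity = fps[:, :8].sum(axis=1) + rng.normal(0, 0.3, 200)

def predict_local(query_fp, k=10):
    """Local model: similarity-weighted mean activity of the k most similar compounds."""
    sim = tanimoto(query_fp, fps)
    nbrs = np.argsort(sim)[::-1][:k]
    weights = sim[nbrs]
    if weights.sum() == 0:
        return activity.mean()   # no similar neighbours: fall back to the global mean
    return np.average(activity[nbrs], weights=weights)

print(predict_local(rng.integers(0, 2, size=64)))
```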
Saini, Harsh; Raicar, Gaurav; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok
2015-12-07
Protein subcellular localization is an important topic in proteomics since it is related to a protein's overall function, helps in the understanding of metabolic pathways, and in drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied for predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely is a sequence to belong to a particular subcellular location. This technique builds a statistical model based on maximum likelihood. It is able to deal effectively with high dimensionality that hinders other traditional classifiers such as Support Vector Machines or k-Nearest Neighbours without sacrificing performance. This approach has been evaluated by predicting subcellular localizations of Gram positive and Gram negative bacterial proteins. Copyright © 2015 Elsevier Ltd. All rights reserved.
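A small, self-contained sketch of linear interpolation smoothing applied to amino-acid n-grams, which is the core idea the abstract describes: per-class unigram, bigram and trigram probabilities are mixed with fixed weights and a query sequence is assigned to the class with the highest interpolated likelihood. The toy sequences and the interpolation weights are assumptions; in practice the weights would be estimated from held-out data.

```python
import math
from collections import Counter

def train_counts(seqs):
    """Unigram, bigram and trigram counts of amino acids for one location class."""
    uni, bi, tri = Counter(), Counter(), Counter()
    for s in seqs:
        uni.update(s)
        bi.update(zip(s, s[1:]))
        tri.update(zip(s, s[1:], s[2:]))
    return uni, bi, tri

def interpolated_log_prob(seq, uni, bi, tri, lambdas=(0.2, 0.3, 0.5)):
    """Linearly interpolated log-probability of a sequence under one class model."""
    l1, l2, l3 = lambdas
    total = sum(uni.values())
    logp = 0.0
    for i in range(2, len(seq)):
        w, h1, h2 = seq[i], seq[i - 1], seq[i - 2]
        p_uni = uni[w] / total
        p_bi = bi[(h1, w)] / uni[h1] if uni[h1] else 0.0
        p_tri = tri[(h2, h1, w)] / bi[(h2, h1)] if bi[(h2, h1)] else 0.0
        p = l1 * p_uni + l2 * p_bi + l3 * p_tri
        logp += math.log(p) if p > 0 else float("-inf")
    return logp

# Toy training sequences for two hypothetical locations; classify by maximum likelihood
cyto = train_counts(["MKKLLPTAAAGLLLLAA", "MKTAYIAKQRQISFVKS"])
memb = train_counts(["MLLLLLLLLLAVVAVAV", "MAVLLLLVVVVAAALLL"])
query = "MKKLAAPTLLLGAA"
scores = {"cytoplasmic": interpolated_log_prob(query, *cyto),
          "membrane": interpolated_log_prob(query, *memb)}
print(max(scores, key=scores.get), scores)
```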
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H.; O'Donnell, Cian; Sejnowski, Terrence J.; ...
2016-12-30
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’. Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. In conclusion, these findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons.
Daga, Pankaj R; Bolger, Michael B; Haworth, Ian S; Clark, Robert D; Martin, Eric J
2018-03-05
When medicinal chemists need to improve bioavailability (%F) within a chemical series during lead optimization, they synthesize new series members with systematically modified properties mainly by following experience and general rules of thumb. More quantitative models that predict %F of proposed compounds from chemical structure alone have proven elusive. Global empirical %F quantitative structure-property (QSPR) models perform poorly, and projects have too little data to train local %F QSPR models. Mechanistic oral absorption and physiologically based pharmacokinetic (PBPK) models simulate the dissolution, absorption, systemic distribution, and clearance of a drug in preclinical species and humans. Attempts to build global PBPK models based purely on calculated inputs have not achieved the <2-fold average error needed to guide lead optimization. In this work, local GastroPlus PBPK models are instead customized for individual medchem series. The key innovation was building a local QSPR for a numerically fitted effective intrinsic clearance (CL_loc). All inputs are subsequently computed from structure alone, so the models can be applied in advance of synthesis. Training CL_loc on the first 15-18 rat %F measurements gave adequate predictions, with clear improvements up to about 30 measurements, and incremental improvements beyond that.
Zhou, Hang; Yang, Yang; Shen, Hong-Bin
2017-03-15
Protein subcellular localization prediction has been an important research topic in computational biology over the last decade. Various automatic methods have been proposed to predict locations for large scale protein datasets, where statistical machine learning algorithms are widely used for model construction. A key step in these predictors is encoding the amino acid sequences into feature vectors. Many studies have shown that features extracted from biological domains, such as gene ontology and functional domains, can be very useful for improving the prediction accuracy. However, domain knowledge usually results in redundant features and high-dimensional feature spaces, which may degenerate the performance of machine learning models. In this paper, we propose a new amino acid sequence-based human protein subcellular location prediction approach Hum-mPLoc 3.0, which covers 12 human subcellular localizations. The sequences are represented by multi-view complementary features, i.e. context vocabulary annotation-based gene ontology (GO) terms, peptide-based functional domains, and residue-based statistical features. To systematically reflect the structural hierarchy of the domain knowledge bases, we propose a novel feature representation protocol denoted as HCM (Hidden Correlation Modeling), which will create more compact and discriminative feature vectors by modeling the hidden correlations between annotation terms. Experimental results on four benchmark datasets show that HCM improves prediction accuracy by 5-11% and F1 by 8-19% compared with conventional GO-based methods. A large-scale application of Hum-mPLoc 3.0 on the whole human proteome reveals protein co-localization preferences in the cell. www.csbio.sjtu.edu.cn/bioinf/Hum-mPLoc3/. hbshen@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Dynamic fuzzy modeling of storm water infiltration in urban fractured aquifers
Hong, Y.-S.; Rosen, Michael R.; Reeves, R.R.
2002-01-01
In an urban fractured-rock aquifer in the Mt. Eden area of Auckland, New Zealand, disposal of storm water is via "soakholes" drilled directly into the top of the fractured basalt rock. The dynamic response of the groundwater level to storm water infiltration shows the characteristics of a strongly time-varying system. A dynamic fuzzy modeling approach, based on multiple local models that are weighted using fuzzy membership functions, has been developed to identify and predict groundwater level fluctuations caused by storm water infiltration. The dynamic fuzzy model is initialized by a fuzzy clustering algorithm and optimized by a gradient-descent algorithm in order to effectively derive the multiple local models, each of which is a locally valid model representing the groundwater level state in response to rainfall events of different intensities. The results show that even if the number of fuzzy local models derived is small, the fuzzy modeling approach provides good prediction results despite the highly time-varying nature of this urban fractured-rock aquifer system. Further, it allows interpretable representations of the dynamic behavior of the groundwater system due to storm water infiltration.
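A toy numpy sketch of the core mechanism: several locally valid linear models blended by normalized Gaussian membership functions of the input (rainfall intensity here). The centres, widths and local coefficients are illustrative assumptions, not values calibrated to the Mt. Eden aquifer.

```python
import numpy as np

def fuzzy_model(rain, centers, widths, local_coeffs):
    """Blend local linear models g_i(rain) = a_i*rain + b_i with fuzzy weights.

    centers, widths : parameters of Gaussian membership functions over rainfall
    local_coeffs    : array of (a_i, b_i) for each local model
    Returns the predicted groundwater-level response for each rainfall value.
    """
    rain = np.atleast_1d(rain).astype(float)
    mu = np.exp(-0.5 * ((rain[:, None] - centers[None, :]) / widths[None, :]) ** 2)
    w = mu / mu.sum(axis=1, keepdims=True)               # normalized memberships
    local = local_coeffs[:, 0][None, :] * rain[:, None] + local_coeffs[:, 1][None, :]
    return (w * local).sum(axis=1)

# Three local regimes: light, moderate and heavy rainfall events (illustrative values)
centers = np.array([2.0, 10.0, 30.0])     # mm/h
widths = np.array([2.0, 5.0, 10.0])
coeffs = np.array([[0.02, 0.0], [0.08, -0.3], [0.15, -1.5]])   # (slope, intercept) per regime

print(fuzzy_model([1.0, 12.0, 35.0], centers, widths, coeffs))
```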
NASA Astrophysics Data System (ADS)
Wang, Jun; Wang, Yang; Zeng, Hui
2016-01-01
A key issue to address in synthesizing spatial data with variable-support in spatial analysis and modeling is the change-of-support problem. We present an approach for solving the change-of-support and variable-support data fusion problems. This approach is based on geostatistical inverse modeling that explicitly accounts for differences in spatial support. The inverse model is applied here to produce both the best predictions of a target support and prediction uncertainties, based on one or more measurements, while honoring measurements. Spatial data covering large geographic areas often exhibit spatial nonstationarity and can lead to computational challenge due to the large data size. We developed a local-window geostatistical inverse modeling approach to accommodate these issues of spatial nonstationarity and alleviate computational burden. We conducted experiments using synthetic and real-world raster data. Synthetic data were generated and aggregated to multiple supports and downscaled back to the original support to analyze the accuracy of spatial predictions and the correctness of prediction uncertainties. Similar experiments were conducted for real-world raster data. Real-world data with variable-support were statistically fused to produce single-support predictions and associated uncertainties. The modeling results demonstrate that geostatistical inverse modeling can produce accurate predictions and associated prediction uncertainties. It is shown that the local-window geostatistical inverse modeling approach suggested offers a practical way to solve the well-known change-of-support problem and variable-support data fusion problem in spatial analysis and modeling.
Spatiotemporal integration for tactile localization during arm movements: a probabilistic approach.
Maij, Femke; Wing, Alan M; Medendorp, W Pieter
2013-12-01
It has been shown that people make systematic errors in the localization of a brief tactile stimulus that is delivered to the index finger while they are making an arm movement. Here we modeled these spatial errors with a probabilistic approach, assuming that they follow from temporal uncertainty about the occurrence of the stimulus. In the model, this temporal uncertainty converts into a spatial likelihood about the external stimulus location, depending on arm velocity. We tested the prediction of the model that the localization errors depend on arm velocity. Participants (n = 8) were instructed to localize a tactile stimulus that was presented to their index finger while they were making either slow- or fast-targeted arm movements. Our results confirm the model's prediction that participants make larger localization errors when making faster arm movements. The model, which was used to fit the errors for both slow and fast arm movements simultaneously, accounted very well for all the characteristics of these data with temporal uncertainty in stimulus processing as the only free parameter. We conclude that spatial errors in dynamic tactile perception stem from the temporal precision with which tactile inputs are processed.
Improving Localization Accuracy: Successive Measurements Error Modeling
Abu Ali, Najah; Abu-Elkheir, Mervat
2015-01-01
Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling positioning error. We use the Yule Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
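A short numpy sketch of the p-order Gauss–Markov / Yule–Walker idea described above: autocovariances of past positions give the AR(p) coefficients, which then yield a one-step-ahead prediction. The positioning-error trace below is a synthetic AR(2) process, not one of the paper's vehicle datasets.

```python
import numpy as np

def yule_walker_predict(x, p):
    """Fit an AR(p) model to series x via the Yule-Walker equations and
    return the one-step-ahead prediction of the next value."""
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    n = len(xc)
    # Biased autocovariance estimates r[0..p]
    r = np.array([np.dot(xc[:n - k], xc[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
    phi = np.linalg.solve(R, r[1:])                                      # AR coefficients
    return x.mean() + phi @ (x[-1:-p - 1:-1] - x.mean())

# Synthetic correlated positioning-error trace (an AR(2) process)
rng = np.random.default_rng(5)
e = np.zeros(300)
for t in range(2, 300):
    e[t] = 1.6 * e[t - 1] - 0.7 * e[t - 2] + rng.normal(0, 0.1)

# Compare first- and second-order predictions with the true AR(2) recursion
print(yule_walker_predict(e, p=1), yule_walker_predict(e, p=2), 1.6 * e[-1] - 0.7 * e[-2])
```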
Holt, James B.; Zhang, Xingyou; Lu, Hua; Shah, Snehal N.; Dooley, Daniel P.; Matthews, Kevin A.; Croft, Janet B.
2017-01-01
Introduction Local health authorities need small-area estimates for prevalence of chronic diseases and health behaviors for multiple purposes. We generated city-level and census-tract–level prevalence estimates of 27 measures for the 500 largest US cities. Methods To validate the methodology, we constructed multilevel logistic regressions to predict 10 selected health indicators among adults aged 18 years or older by using 2013 Behavioral Risk Factor Surveillance System (BRFSS) data; we applied their predicted probabilities to census population data to generate city-level, neighborhood-level, and zip-code–level estimates for the city of Boston, Massachusetts. Results By comparing the predicted estimates with their corresponding direct estimates from a locally administered survey (Boston BRFSS 2010 and 2013), we found that our model-based estimates for most of the selected health indicators at the city level were close to the direct estimates from the local survey. We also found strong correlation between the model-based estimates and direct survey estimates at neighborhood and zip code levels for most indicators. Conclusion Findings suggest that our model-based estimates are reliable and valid at the city level for certain health outcomes. Local health authorities can use the neighborhood-level estimates if high quality local health survey data are not otherwise available. PMID:29049020
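A compact sketch of the core estimation step: predicted probabilities from an individual-level logistic model are applied to census counts in the same strata to give a small-area prevalence (poststratification). A plain logistic regression on synthetic data stands in for the multilevel BRFSS model, and the strata and counts are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# Synthetic individual-level survey: age group (0-3), sex (0/1), binary outcome
n = 5000
age = rng.integers(0, 4, n)
sex = rng.integers(0, 2, n)
p = 1 / (1 + np.exp(-(-2.0 + 0.5 * age + 0.3 * sex)))
y = rng.binomial(1, p)
model = LogisticRegression().fit(np.column_stack([age, sex]), y)

# Census counts for one tract by the same age x sex strata (hypothetical numbers)
strata = np.array([(a, s) for a in range(4) for s in range(2)])
counts = rng.integers(100, 600, len(strata))

# Tract-level prevalence = population-weighted average of predicted probabilities
probs = model.predict_proba(strata)[:, 1]
tract_prevalence = np.average(probs, weights=counts)
print(round(100 * tract_prevalence, 1), "%")
```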
NASA Astrophysics Data System (ADS)
Hodgson, Murray; Wareing, Andrew
2008-01-01
A combined beam-tracing and transfer-matrix model for predicting steady-state sound-pressure levels in rooms with multilayer bounding surfaces was used to compare the effect of extended- and local-reaction surfaces, and the accuracy of the local-reaction approximation. Three rooms—an office, a corridor and a workshop—with one or more multilayer test surfaces were considered. The test surfaces were a single-glass panel, a double-drywall panel, a carpeted floor, a suspended-acoustical ceiling, a double-steel panel, and glass fibre on a hard backing. Each test surface was modeled as of extended or of local reaction. Sound-pressure levels were predicted and compared to determine the significance of the surface-reaction assumption. The main conclusions were that the difference between modeling a room surface as of extended or of local reaction is not significant when the surface is a single plate or a single layer of material (solid or porous) with a hard backing. The difference is significant when the surface consists of multilayers of solid or porous material and includes a layer of fluid with a large thickness relative to the other layers. The results are partially explained by considering the surface-reflection coefficients at the first-reflection angles.
A general framework for predicting delayed responses of ecological communities to habitat loss.
Chen, Youhua; Shen, Tsung-Jen
2017-04-20
Although the biodiversity crisis at different spatial scales has been well recognised, the phenomena of extinction debt and immigration credit in a cross-scale context are, at best, unclear. Based on two community patterns, the regional species abundance distribution (SAD) and the spatial abundance distribution (SAAD), Kitzes and Harte (2015) presented a macroecological framework for predicting post-disturbance delayed extinction patterns in the entire ecological community. In this study, we expand this basic framework to predict diverse time-lagged effects of habitat destruction on local communities. Specifically, our generalisation of KH's model can address questions that could not be answered previously: (1) How many species are subject to delayed extinction in a local community when habitat is destroyed in other areas? (2) How do rare or endemic species contribute to the extinction debt or immigration credit of the local community? (3) How will species differ between two local areas? Demonstrations using two SAD models (the single-parameter lognormal and the logseries) show that the predicted patterns of debt, credit, and change in the fraction of unique species can vary, but with consistent dependence on several factors. The general framework deepens our understanding of the theoretical effects of habitat loss on community dynamics in local samples.
Puig, V; Cembrano, G; Romera, J; Quevedo, J; Aznar, B; Ramón, G; Cabot, J
2009-01-01
This paper deals with the global control of the Riera Blanca catchment in the Barcelona sewer network using a predictive optimal control approach. The catchment has been modelled using a conceptual modelling approach based on decomposing the catchment into subcatchments and representing them as virtual tanks. This conceptual modelling approach allows real-time model calibration and control of the sewer network. The global control problem of the Riera Blanca catchment is solved using an optimal/predictive control algorithm. To implement the predictive optimal control of the Riera Blanca catchment, a software tool named CORAL is used. The on-line control is simulated by interfacing CORAL with a high-fidelity simulator of sewer networks (MOUSE). CORAL interchanges readings from the limnimeters and gate commands with MOUSE as if it were connected to the real SCADA system. Finally, the global control results obtained using the predictive optimal control are presented and compared against the results obtained using the current local control system. The results obtained using the global control are very satisfactory compared to those obtained using the local control.
NASA Astrophysics Data System (ADS)
Bermúdez, María; Neal, Jeffrey C.; Bates, Paul D.; Coxon, Gemma; Freer, Jim E.; Cea, Luis; Puertas, Jerónimo
2016-04-01
Flood inundation models require appropriate boundary conditions to be specified at the limits of the domain, which commonly consist of upstream flow rate and downstream water level. These data are usually acquired from gauging stations on the river network where measured water levels are converted to discharge via a rating curve. Derived streamflow estimates are therefore subject to uncertainties in this rating curve, including extrapolating beyond the maximum observed ratings magnitude. In addition, the limited number of gauges in reach-scale studies often requires flow to be routed from the nearest upstream gauge to the boundary of the model domain. This introduces additional uncertainty, derived not only from the flow routing method used, but also from the additional lateral rainfall-runoff contributions downstream of the gauging point. Although generally assumed to have a minor impact on discharge in fluvial flood modeling, this local hydrological input may become important in a sparse gauge network or in events with significant local rainfall. In this study, a method to incorporate rating curve uncertainty and the local rainfall-runoff dynamics into the predictions of a reach-scale flood inundation model is proposed. Discharge uncertainty bounds are generated by applying a non-parametric local weighted regression approach to stage-discharge measurements for two gauging stations, while measured rainfall downstream from these locations is cascaded into a hydrological model to quantify additional inflows along the main channel. A regional simplified-physics hydraulic model is then applied to combine these inputs and generate an ensemble of discharge and water elevation time series at the boundaries of a local-scale high complexity hydraulic model. Finally, the effect of these rainfall dynamics and uncertain boundary conditions are evaluated on the local-scale model. Improvements in model performance when incorporating these processes are quantified using observed flood extent data and measured water levels from a 2007 summer flood event on the river Severn. The area of interest is a 7 km reach in which the river passes through the city of Worcester, a low water slope, subcritical reach in which backwater effects are significant. For this domain, the catchment area between flow gauging stations extends over 540 km2. Four hydrological models from the FUSE framework (Framework for Understanding Structural Errors) were set up to simulate the rainfall-runoff process over this area. At this regional scale, a 2-dimensional hydraulic model that solves the local inertial approximation of the shallow water equations was applied to route the flow, whereas the full form of these equations was solved at the local scale to predict the urban flow field. This nested approach hence allows an examination of water fluxes from the catchment to the building scale, while requiring short setup and computational times. An accurate prediction of the magnitude and timing of the flood peak was obtained with the proposed method, in spite of the unusual structure of the rain episode and the complexity of the River Severn system. The findings highlight the importance of estimating boundary condition uncertainty and local rainfall contribution for accurate prediction of river flows and inundation.
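One plausible way (not necessarily the study's exact implementation) to generate discharge uncertainty bounds with a non-parametric local weighted regression: a LOWESS fit of log-discharge against stage, with bounds taken from the spread of the residuals around the local fit. The gaugings below are synthetic.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(6)

# Synthetic stage-discharge gaugings (stage in m, discharge in m^3/s)
stage = np.sort(rng.uniform(0.5, 4.0, 120))
discharge = 8.0 * stage ** 1.7 * np.exp(rng.normal(0, 0.08, stage.size))

# Local weighted regression of log-discharge on stage (the rating curve)
fit = lowess(np.log(discharge), stage, frac=0.4, return_sorted=True)
curve_stage, curve_logq = fit[:, 0], fit[:, 1]

# Simple uncertainty bounds from the spread of the residuals around the local fit
resid = np.log(discharge) - np.interp(stage, curve_stage, curve_logq)
lo = np.exp(curve_logq + np.percentile(resid, 5))
hi = np.exp(curve_logq + np.percentile(resid, 95))
print(np.exp(curve_logq[-1]), lo[-1], hi[-1])   # rating and bounds at the highest gauged stage
```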
Long Term Mean Local Time of the Ascending Node Prediction
NASA Technical Reports Server (NTRS)
McKinley, David P.
2007-01-01
Significant error has been observed in the long term prediction of the Mean Local Time of the Ascending Node on the Aqua spacecraft. This error of approximately 90 seconds over a two year prediction is a complication in planning and timing of maneuvers for all members of the Earth Observing System Afternoon Constellation, which use Aqua's MLTAN as the reference for their inclination maneuvers. It was determined that the source of the prediction error was the lack of a solid Earth tide model in the operational force models. The Love Model of the solid Earth tide potential was used to derive analytic corrections to the inclination and right ascension of the ascending node of Aqua's Sun-synchronous orbit. Additionally, it was determined that the resonance between the Sun and orbit plane of the Sun-synchronous orbit is the primary driver of this error. The analytic corrections have been added to the operational force models for the Aqua spacecraft reducing the two-year 90-second error to less than 7 seconds.
Spreco, Armin; Eriksson, Olle; Dahlström, Örjan; Cowling, Benjamin John; Timpka, Toomas
2017-06-15
Influenza is a viral respiratory disease capable of causing epidemics that represent a threat to communities worldwide. The rapidly growing availability of electronic "big data" from diagnostic and prediagnostic sources in health care and public health settings permits the advance of a new generation of methods for local detection and prediction of winter influenza seasons and influenza pandemics. The aim of this study was to present a method for integrated detection and prediction of influenza virus activity in local settings using electronically available surveillance data and to evaluate its performance by retrospective application on authentic data from a Swedish county. An integrated detection and prediction method was formally defined based on a design rationale for influenza detection and prediction methods adapted for local surveillance. The novel method was retrospectively applied on data from the winter influenza season 2008-09 in a Swedish county (population 445,000). Outcome data represented individuals who met a clinical case definition for influenza (based on International Classification of Diseases version 10 [ICD-10] codes) from an electronic health data repository. Information from calls to a telenursing service in the county was used as syndromic data source. The novel integrated detection and prediction method is based on nonmechanistic statistical models and is designed for integration in local health information systems. The method is divided into separate modules for detection and prediction of local influenza virus activity. The function of the detection module is to alert for an upcoming period of increased load of influenza cases on local health care (using influenza-diagnosis data), whereas the function of the prediction module is to predict the timing of the activity peak (using syndromic data) and its intensity (using influenza-diagnosis data). For detection modeling, exponential regression was used based on the assumption that the beginning of a winter influenza season has an exponential growth of infected individuals. For prediction modeling, linear regression was applied on 7-day periods at a time in order to find the peak timing, whereas a derivative of a normal distribution density function was used to find the peak intensity. We found that the integrated detection and prediction method detected the 2008-09 winter influenza season on its starting day (optimal timeliness 0 days), whereas the predicted peak was estimated to occur 7 days ahead of the factual peak and the predicted peak intensity was estimated to be 26% lower than the factual intensity (6.3 compared with 8.5 influenza-diagnosis cases/100,000). Our detection and prediction method is one of the first integrated methods specifically designed for local application on influenza data electronically available for surveillance. The performance of the method in a retrospective study indicates that further prospective evaluations of the methods are justified. ©Armin Spreco, Olle Eriksson, Örjan Dahlström, Benjamin John Cowling, Toomas Timpka. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.06.2017.
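A minimal sketch of the detection idea: fit exponential growth (a linear regression on log counts) to a trailing window of influenza-diagnosis counts and alert when the estimated growth rate is credibly positive. The window length, threshold and synthetic counts are arbitrary assumptions, not the parameters of the published method.

```python
import numpy as np

def detect_onset(daily_cases, window=14, min_growth=0.05):
    """Alert when the last `window` days show significant exponential growth.

    Fits log(cases + 1) ~ a + b*t by least squares; returns (alert, growth rate b).
    """
    y = np.log(np.asarray(daily_cases[-window:], dtype=float) + 1.0)
    t = np.arange(window)
    b, a = np.polyfit(t, y, 1)
    resid = y - (a + b * t)
    se_b = np.sqrt(resid.var(ddof=2) / np.sum((t - t.mean()) ** 2))
    return (b - 2 * se_b > 0 and b > min_growth), b

# Synthetic daily influenza-diagnosis counts: flat background, then an exponential rise
rng = np.random.default_rng(7)
background = rng.poisson(2, 60)
season = rng.poisson(2 * np.exp(0.12 * np.arange(20)))
cases = np.concatenate([background, season])

print(detect_onset(cases[:60]))   # before the season starts -> (False, ~0)
print(detect_onset(cases))        # during exponential growth -> (True, ~0.12)
```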
2014-01-01
Background It is important to predict the quality of a protein structural model before its native structure is known. The method that can predict the absolute local quality of individual residues in a single protein model is rare, yet particularly needed for using, ranking and refining protein models. Results We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between native and model are higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. Conclusion SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:24776231
Cao, Renzhi; Wang, Zheng; Wang, Yiheng; Cheng, Jianlin
2014-04-28
It is important to predict the quality of a protein structural model before its native structure is known. The method that can predict the absolute local quality of individual residues in a single protein model is rare, yet particularly needed for using, ranking and refining protein models. We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between native and model are higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/.
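As a rough illustration of the kind of per-residue regression SMOQ performs, the sketch below trains a support vector regressor on toy residue features and converts the predicted deviations into a single global score. The feature encoding, the S-score-style conversion, and all data are assumptions for illustration, not the SMOQ implementation.

```python
# Illustrative sketch only: the feature encoding and the local-to-global
# conversion are assumptions, not the SMOQ implementation.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_residues = 200
# toy per-residue features: e.g. one-hot secondary structure (3), solvent
# accessibility (1), contact count (1) -> 5 features per residue
X = rng.random((n_residues, 5))
y = rng.gamma(shape=2.0, scale=1.5, size=n_residues)   # toy distance deviations (Å)

model = SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X, y)
local_dev = model.predict(X)                           # predicted per-residue deviation

# one common way to map local deviations to a single global score is an
# S-score-style average (an assumption here, not necessarily SMOQ's formula)
d0 = 3.8
global_quality = np.mean(1.0 / (1.0 + (local_dev / d0) ** 2))
print(f"mean predicted deviation: {local_dev.mean():.2f} Å, global score: {global_quality:.3f}")
```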
ADOT state-specific crash prediction models : an Arizona needs study.
DOT National Transportation Integrated Search
2016-12-01
The predictive method in the Highway Safety Manual (HSM) includes a safety performance function (SPF), : crash modification factors (CMFs), and a local calibration factor (C), if available. Two alternatives exist for : applying the HSM prediction met...
NASA Technical Reports Server (NTRS)
Kloog, Itai; Chudnovsky, Alexandra A.; Just, Allan C.; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A.; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel
2014-01-01
The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter PM2.5 for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus currently available 10 km) resolution AOD data. We developed and cross validated models to predict daily PM2.5 at a 1 × 1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003-2011, allowing us to better differentiate daily and long term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the area 1 × 1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then use generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Our model performance was excellent (mean out-of-sample R² = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R² = 0.87, R² = 0.87). In addition, our results revealed very little bias in the predicted concentrations (Slope of predictions versus withheld observations = 0.99). Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region.
Kloog, Itai; Chudnovsky, Alexandra A; Just, Allan C; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel
2014-10-01
The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus currently available 10 km) resolution AOD data. We developed and cross validated models to predict daily PM2.5 at a 1×1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003-2011, allowing us to better differentiate daily and long term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the area 1×1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then use generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Our model performance was excellent (mean out-of-sample R² = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R² = 0.87, R² = 0.87). In addition, our results revealed very little bias in the predicted concentrations (Slope of predictions versus withheld observations = 0.99). Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region.
Kloog, Itai; Chudnovsky, Alexandra A.; Just, Allan C.; Nordio, Francesco; Koutrakis, Petros; Coull, Brent A.; Lyapustin, Alexei; Wang, Yujie; Schwartz, Joel
2017-01-01
Background The use of satellite-based aerosol optical depth (AOD) to estimate fine particulate matter (PM2.5) for epidemiology studies has increased substantially over the past few years. These recent studies often report moderate predictive power, which can generate downward bias in effect estimates. In addition, AOD measurements have only moderate spatial resolution, and have substantial missing data. Methods We make use of recent advances in MODIS satellite data processing algorithms (Multi-Angle Implementation of Atmospheric Correction, MAIAC), which allow us to use 1 km (versus currently available 10 km) resolution AOD data. We developed and cross validated models to predict daily PM2.5 at a 1×1 km resolution across the northeastern USA (New England, New York and New Jersey) for the years 2003–2011, allowing us to better differentiate daily and long term exposure between urban, suburban, and rural areas. Additionally, we developed an approach that allows us to generate daily high-resolution 200 m localized predictions representing deviations from the area 1×1 km grid predictions. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We then use generalized additive mixed models with spatial smoothing to generate grid cell predictions when AOD was missing. Finally, to get 200 m localized predictions, we regressed the residuals from the final model for each monitor against the local spatial and temporal variables at each monitoring site. Results Our model performance was excellent (mean out-of-sample R² = 0.88). The spatial and temporal components of the out-of-sample results also presented very good fits to the withheld data (R² = 0.87, R² = 0.87). In addition, our results revealed very little bias in the predicted concentrations (Slope of predictions versus withheld observations = 0.99). Conclusion Our daily model results show high predictive accuracy at high spatial resolutions and will be useful in reconstructing exposure histories for epidemiological studies across this region. PMID:28966552
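The first stage of the calibration described in these records can be sketched loosely with statsmodels. In the sketch below, the day-specific random intercepts and random AOD slopes follow the description, but the variable names, the synthetic data, and the restriction of the random slope to AOD alone are assumptions; the generalized additive stage for missing AOD and the 200 m residual stage are not reproduced.

```python
# Loose sketch of the first-stage mixed-model calibration described above;
# data, variable names and model settings are invented, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, n_days = 2000, 60
df = pd.DataFrame({
    "day": rng.integers(0, n_days, n),
    "aod": rng.gamma(2.0, 0.15, n),
    "temp": rng.normal(10.0, 8.0, n),
})
# synthetic PM2.5 with a day-varying intercept and AOD slope
day_intercept = rng.normal(8.0, 2.0, n_days)
day_aod_slope = rng.normal(25.0, 5.0, n_days)
df["pm25"] = (day_intercept[df.day] + day_aod_slope[df.day] * df.aod
              + 0.1 * df.temp + rng.normal(0.0, 2.0, n))

# mixed model: fixed AOD and temperature effects, random intercept and random
# AOD slope per day (only the AOD slope is made random here, for simplicity)
model = smf.mixedlm("pm25 ~ aod + temp", df, groups=df["day"], re_formula="~aod")
result = model.fit()
print(result.params[["aod", "temp"]])
df["pm25_hat"] = result.fittedvalues
```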
Le Moullec, Y; Potier, O; Gentric, C; Leclerc, J P
2011-05-01
This paper presents an experimental and numerical study of an activated sludge channel pilot plant. Concentration profiles of oxygen, COD, NO3 and NH4 have been measured for several operating conditions. These profiles have been compared to the simulated ones with three different modelling approaches, namely a systemic approach, CFD and compartmental modelling. For these three approaches, the kinetics model was the ASM-1 model (Henze et al., 2001). The three approaches allowed a reasonable simulation of all the concentration profiles except for ammonium, for which the simulation results were far from the experimental ones. The analysis of the results showed that the role of the kinetics model is of primary importance for the prediction of activated sludge reactor performance. The fact that existing kinetics parameters in the literature have been determined by parametric optimisation using a systemic model limits the reliability of the prediction of local concentrations and of the local design of activated sludge reactors. Copyright © 2011 Elsevier Ltd. All rights reserved.
Park, Soo Hyun; Talebi, Mohammad; Amos, Ruth I J; Tyteca, Eva; Haddad, Paul R; Szucs, Roman; Pohl, Christopher A; Dolan, John W
2017-11-10
Quantitative Structure-Retention Relationships (QSRR) are used to predict retention times of compounds based only on their chemical structures encoded by molecular descriptors. The main concern in QSRR modelling is to build models with high predictive power, allowing reliable retention prediction for the unknown compounds across the chromatographic space. With the aim of enhancing the prediction power of the models, in this work, our previously proposed QSRR modelling approach called "federation of local models" is extended in ion chromatography to predict retention times of unknown ions, where a local model for each target ion (unknown) is created using only structurally similar ions from the dataset. A Tanimoto similarity (TS) score was utilised as a measure of structural similarity and training sets were developed by including ions that were similar to the target ion, as defined by a threshold value. The prediction of retention parameters (a- and b-values) in the linear solvent strength (LSS) model in ion chromatography, log k = a - b log[eluent], allows the prediction of retention times under all eluent concentrations. The QSRR models for a- and b-values were developed by a genetic algorithm-partial least squares method using the retention data of inorganic and small organic anions and larger organic cations (molecular mass up to 507) on four Thermo Fisher Scientific columns (AS20, AS19, AS11HC and CS17). The corresponding predicted retention times were calculated by fitting the predicted a- and b-values of the models into the LSS model equation. The predicted retention times were also plotted against the experimental values to evaluate the goodness of fit and the predictive power of the models. The application of a TS threshold of 0.6 was found to successfully produce predictive and reliable QSRR models (Q²ext(F2) > 0.8 and Mean Absolute Error < 0.1), and hence accurate retention time predictions with an average Mean Absolute Error of 0.2 min. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
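A minimal sketch of the "federation of local models" idea follows. It is not the published workflow: toy binary fingerprints stand in for the molecular descriptors, ordinary PLS replaces the genetic algorithm-partial least squares step, and the eluent concentration and all data are invented; only the Tanimoto threshold of 0.6 and the LSS relation log k = a - b log[eluent] come from the abstract.

```python
# Minimal sketch under stated assumptions; not the authors' implementation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprint vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

rng = np.random.default_rng(2)
target_fp = rng.integers(0, 2, 256)                       # the "unknown" target ion
flips = rng.random((100, 256)) < 0.15
dataset_fps = np.where(flips, 1 - target_fp, target_fp)   # ions broadly similar to the target
ab = rng.normal([1.5, 2.0], 0.3, (100, 2))                # toy LSS parameters (a, b) per ion

# local training set: only ions above the similarity threshold (0.6 in the paper)
sims = np.array([tanimoto(target_fp, fp) for fp in dataset_fps])
mask = sims >= 0.6
pls = PLSRegression(n_components=3).fit(dataset_fps[mask], ab[mask])
a_pred, b_pred = pls.predict(target_fp.reshape(1, -1))[0]

# retention from the LSS relation log k = a - b*log[eluent]
eluent_conc = 0.02                                        # illustrative eluent concentration (M)
log_k = a_pred - b_pred * np.log10(eluent_conc)
print(f"local model size: {mask.sum()}, predicted log k = {log_k:.2f}")
```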
Mammalian spontaneous otoacoustic emissions are amplitude-stabilized cochlear standing waves.
Shera, Christopher A
2003-07-01
Mammalian spontaneous otoacoustic emissions (SOAEs) have been suggested to arise by three different mechanisms. The local-oscillator model, dating back to the work of Thomas Gold, supposes that SOAEs arise through the local, autonomous oscillation of some cellular constituent of the organ of Corti (e.g., the "active process" underlying the cochlear amplifier). Two other models, by contrast, both suppose that SOAEs are a global collective phenomenon--cochlear standing waves created by multiple internal reflection--but differ on the nature of the proposed power source: Whereas the "passive" standing-wave model supposes that SOAEs are biological noise, passively amplified by cochlear standing-wave resonances acting as narrow-band nonlinear filters, the "active" standing-wave model supposes that standing-wave amplitudes are actively maintained by coherent wave amplification within the cochlea. Quantitative tests of key predictions that distinguish the local-oscillator and global standing-wave models are presented and shown to support the global standing-wave model. In addition to predicting the existence of multiple emissions with a characteristic minimum frequency spacing, the global standing-wave model accurately predicts the mean value of this spacing, its standard deviation, and its power-law dependence on SOAE frequency. Furthermore, the global standing-wave model accounts for the magnitude, sign, and frequency dependence of changes in SOAE frequency that result from modulations in middle-ear stiffness. Although some of these SOAE characteristics may be replicable through artful ad hoc adjustment of local-oscillator models, they all arise quite naturally in the standing-wave framework. Finally, the statistics of SOAE time waveforms demonstrate that SOAEs are coherent, amplitude-stabilized signals, as predicted by the active standing-wave model. Taken together, the results imply that SOAEs are amplitude-stabilized standing waves produced by the cochlea acting as a biological, hydromechanical analog of a laser oscillator. Contrary to recent claims, spontaneous emission of sound from the ear does not require the autonomous mechanical oscillation of its cellular constituents.
Dawadi, Mahesh B; Bhatta, Ram S; Perry, David S
2013-12-19
Two torsion-inversion tunneling models (models I and II) are reported for the CH-stretch vibrationally excited states in the G12 family of molecules. The torsion and inversion tunneling parameters, h(2v) and h(3v), respectively, are combined with low-order coupling terms involving the CH-stretch vibrations. Model I is a group theoretical treatment starting from the symmetric rotor methyl CH-stretch vibrations; model II is an internal coordinate model including the local-local CH-stretch coupling. Each model yields predicted torsion-inversion tunneling patterns of the four symmetry species, A, B, E1, and E2, in the CH-stretch excited states. Although the predicted tunneling patterns for the symmetric CH-stretch excited state are the same as for the ground state, inverted tunneling patterns are predicted for the asymmetric CH-stretches. The qualitative tunneling patterns predicted are independent of the model type and of the particular coupling terms considered. In model I, the magnitudes of the tunneling splittings in the two asymmetric CH-stretch excited states are equal to half of that in the ground state, but in model II, they differ when the tunneling rate is fast. The model predictions are compared across the series of molecules methanol, methylamine, 2-methylmalonaldehyde, and 5-methyltropolone and to the available experimental data.
NASA Astrophysics Data System (ADS)
Wayand, N. E.; Stimberis, J.; Zagrodnik, J.; Mass, C.; Lundquist, J. D.
2016-12-01
Low-level cold air from eastern Washington state often flows westward through mountain passes in the Washington Cascades, creating localized inversions and locally reducing climatological temperatures. The persistence of this inversion during a frontal passage can result in complex patterns of snow and rain that are difficult to predict. Yet, these predictions are critical to support highway avalanche control, ski resort operations, and modeling of headwater snowpack storage. In this study we used observations of precipitation phase from a disdrometer and snow depth sensors across Snoqualmie Pass, WA, to evaluate surface-air-temperature-based and mesoscale-model-based predictions of precipitation phase during the anomalously warm 2014-2015 winter. The skill of surface-based methods was greatly improved by using air temperature from a nearby higher-elevation station, which was less impacted by low-level inversions. Alternatively, we found a hybrid method that combines surface-based predictions with output from the Weather Research and Forecasting mesoscale model to have improved skill over both parent models. These results suggest that prediction of precipitation phase in mountain passes can be improved by incorporating observations or models from above the surface layer.
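The surface-based and hybrid approaches compared above can be illustrated with a toy classifier. In the sketch below, the 1 °C rain/snow threshold, the logistic ramp, and the 50/50 blend with a mesoscale-model probability are illustrative assumptions, not the calibrated relations used in the study.

```python
# Toy sketch of surface-based and hybrid precipitation-phase prediction;
# the threshold, ramp width and blending weight are assumptions.
import numpy as np

def phase_from_temperature(t_air_c, threshold_c=1.0):
    """Classic surface-based rule: snow at or below the threshold, rain above."""
    return np.where(np.asarray(t_air_c) <= threshold_c, "snow", "rain")

def hybrid_snow_probability(t_air_c, model_snow_prob, threshold_c=1.0, weight=0.5):
    """Blend a temperature-based probability with a mesoscale-model probability."""
    # a logistic ramp around the threshold stands in for a calibrated relation
    p_temp = 1.0 / (1.0 + np.exp((np.asarray(t_air_c) - threshold_c) / 0.5))
    return weight * p_temp + (1.0 - weight) * np.asarray(model_snow_prob)

print(phase_from_temperature([-2.0, 0.5, 2.5]))
print(hybrid_snow_probability([-2.0, 0.5, 2.5], [0.9, 0.6, 0.2]))
```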
A comparison of non-local electron transport models relevant to inertial confinement fusion
NASA Astrophysics Data System (ADS)
Sherlock, Mark; Brodrick, Jonathan; Ridgers, Christopher
2017-10-01
We compare the reduced non-local electron transport model developed by Schurtz et al. to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high density region into a lower density gas, and a 1-dimensional hohlraum ablation problem. We find the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different to the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between models in the coronal region. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Demekhin, E A; Kalaidin, E N; Kalliadasis, S; Vlaskin, S Yu
2010-09-01
We validate experimentally the Kapitsa-Shkadov model utilized in the theoretical studies by Demekhin [Phys. Fluids 19, 114103 (2007), doi:10.1063/1.2793148; Phys. Fluids 19, 114104 (2007), doi:10.1063/1.2793149] of surface turbulence on a thin liquid film flowing down a vertical planar wall. For water at 15 °C, surface turbulence typically occurs at an inlet Reynolds number of ≃40. Of particular interest is to assess experimentally the predictions of the model for three-dimensional nonlinear localized coherent structures, which represent elementary processes of surface turbulence. For this purpose we devise simple experiments to investigate the instabilities and transitions leading to such structures. Our experimental results are in good agreement with the theoretical predictions of the model. We also perform time-dependent computations for the formation of coherent structures and their interaction with localized structures of smaller amplitude on the surface of the film.
Nowcasting the spread of chikungunya virus in the Americas.
Johansson, Michael A; Powers, Ann M; Pesik, Nicki; Cohen, Nicole J; Staples, J Erin
2014-01-01
In December 2013, the first locally-acquired chikungunya virus (CHIKV) infections in the Americas were reported in the Caribbean. As of May 16, 55,992 cases had been reported and the outbreak was still spreading. Identification of newly affected locations is paramount to intervention activities, but challenging due to limitations of current data on the outbreak and on CHIKV transmission. We developed models to make probabilistic predictions of spread based on current data considering these limitations. Branching process models capturing travel patterns, local infection prevalence, climate dependent transmission factors, and associated uncertainty estimates were developed to predict probable locations for the arrival of CHIKV-infected travelers and for the initiation of local transmission. Many international cities and areas close to where transmission has already occurred were likely to have received infected travelers. Of the ten locations predicted to be the most likely locations for introduced CHIKV transmission in the first four months of the outbreak, eight had reported local cases by the end of April. Eight additional locations were likely to have had introduction leading to local transmission in April, but with substantial uncertainty. Branching process models can characterize the risk of CHIKV introduction and spread during the ongoing outbreak. Local transmission of CHIKV is currently likely in several Caribbean locations and possible, though uncertain, for other locations in the continental United States, Central America, and South America. This modeling framework may also be useful for other outbreaks where the risk of pathogen spread over heterogeneous transportation networks must be rapidly assessed on the basis of limited information.
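A highly simplified branching-process-style calculation in the spirit of this abstract is sketched below. The travel volumes, traveller prevalence, and climate-dependent reproduction numbers are invented, and the Poisson offspring assumption is a generic choice rather than the authors' model.

```python
# Toy branching-process sketch of introduction and local-transmission risk;
# all parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
locations = ["A", "B", "C"]
monthly_travellers = np.array([20000, 5000, 800])    # travellers from affected areas (invented)
prevalence = 0.002                                   # infection prevalence among travellers (invented)
local_R = np.array([1.4, 0.9, 0.3])                  # climate-dependent reproduction numbers (invented)

def extinction_probability(R, iters=200):
    """For a Poisson offspring distribution with mean R, the chance that a single
    imported case fails to start a sustained chain solves q = exp(R*(q-1))."""
    if R <= 1.0:
        return 1.0                                   # subcritical chains always die out
    q = 0.0
    for _ in range(iters):                           # fixed-point iteration
        q = np.exp(R * (q - 1.0))
    return q

q = np.array([extinction_probability(R) for R in local_R])

# Monte Carlo over the number of infected travellers arriving in a month
n_sims = 2000
introductions = rng.poisson(monthly_travellers * prevalence, size=(n_sims, len(locations)))
p_local = 1.0 - q ** introductions                   # P(at least one chain) per sim and location
print(dict(zip(locations, np.round(p_local.mean(axis=0), 3))))
```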
Yousefsani, Seyed Abdolmajid; Shamloo, Amir; Farahmand, Farzam
2018-04-01
A transverse-plane hyperelastic micromechanical model of brain white matter tissue was developed using the embedded element technique (EET). The model consisted of a histology-informed probabilistic distribution of axonal fibers embedded within an extracellular matrix, both described using the generalized Ogden hyperelastic material model. A correcting method, based on the strain energy density function, was formulated to resolve the stiffness redundancy problem of the EET in large deformation regime. The model was then used to predict the homogenized tissue behavior and the associated localized responses of the axonal fibers under quasi-static, transverse, large deformations. Results indicated that with a sufficiently large representative volume element (RVE) and fine mesh, the statistically randomized microstructure implemented in the RVE exhibits directional independency in transverse plane, and the model predictions for the overall and local tissue responses, characterized by the normalized strain energy density and Cauchy and von Mises stresses, are independent from the modeling parameters. Comparison of the responses of the probabilistic model with that of a simple uniform RVE revealed that only the first one is capable of representing the localized behavior of the tissue constituents. The validity test of the model predictions for the corona radiata against experimental data from the literature indicated a very close agreement. In comparison with the conventional direct meshing method, the model provided almost the same results after correcting the stiffness redundancy, however, with much less computational cost and facilitated geometrical modeling, meshing, and boundary conditions imposing. It was concluded that the EET can be used effectively for detailed probabilistic micromechanical modeling of the white matter in order to provide more accurate predictions for the axonal responses, which are of great importance when simulating the brain trauma or tumor growth. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kulinkina, Alexandra V; Walz, Yvonne; Koch, Magaly; Biritwum, Nana-Kwadwo; Utzinger, Jürg; Naumova, Elena N
2018-06-04
Schistosomiasis is a water-related neglected tropical disease. In many endemic low- and middle-income countries, insufficient surveillance and reporting lead to poor characterization of the demographic and geographic distribution of schistosomiasis cases. Hence, modeling is relied upon to predict areas of high transmission and to inform control strategies. We hypothesized that utilizing remotely sensed (RS) environmental data in combination with water, sanitation, and hygiene (WASH) variables could improve on the current predictive modeling approaches. Schistosoma haematobium prevalence data, collected from 73 rural Ghanaian schools, were used in a random forest model to investigate the predictive capacity of 15 environmental variables derived from RS data (Landsat 8, Sentinel-2, and Global Digital Elevation Model) with fine spatial resolution (10-30 m). Five methods of variable extraction were tested to determine the spatial linkage between school-based prevalence and the environmental conditions of potential transmission sites, including applying the models to known human water contact locations. Lastly, measures of local water access and groundwater quality were incorporated into RS-based models to assess the relative importance of environmental and WASH variables. Predictive models based on environmental characterization of specific locations where people contact surface water bodies offered some improvement as compared to the traditional approach based on environmental characterization of locations where prevalence is measured. A water index (MNDWI) and topographic variables (elevation and slope) were important environmental risk factors, while overall, groundwater iron concentration predominated in the combined model that included WASH variables. The study helps to understand localized drivers of schistosomiasis transmission. Specifically, unsatisfactory water quality in boreholes perpetuates reliance on surface water bodies, indirectly increasing schistosomiasis risk and resulting in rapid reinfection (up to 40% prevalence six months following preventive chemotherapy). Considering WASH-related risk factors in schistosomiasis prediction can help shift the focus of control strategies from treating symptoms to reducing exposure.
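The combined remote-sensing and WASH random-forest modelling can be sketched as follows. The feature names (e.g. mndwi, borehole_iron) and the synthetic school-level data are placeholders chosen to echo the variables highlighted in the abstract, not the study's dataset or tuning.

```python
# Sketch of a combined remote-sensing + WASH random-forest model; synthetic
# placeholder data, not the study's dataset.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n_schools = 73
X = pd.DataFrame({
    "mndwi": rng.normal(0, 0.2, n_schools),          # water index at contact sites
    "elevation": rng.normal(220, 60, n_schools),
    "slope": rng.gamma(2, 1.5, n_schools),
    "borehole_iron": rng.gamma(2, 0.4, n_schools),   # WASH: groundwater iron (mg/L)
    "water_access": rng.random(n_schools),           # WASH: improved-source coverage
})
# synthetic prevalence loosely driven by the water index and groundwater iron
y = np.clip(0.2 + 0.5 * X.mndwi + 0.15 * X.borehole_iron
            + rng.normal(0, 0.05, n_schools), 0, 1)

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
print("OOB R^2:", round(rf.oob_score_, 3))
print(pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False))
```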
NASA Astrophysics Data System (ADS)
Xie, G.; Thompson, D. J.; Jones, C. J. C.
2006-06-01
Modern railway vehicles are often constructed from double walled aluminium extrusions, which give a stiff, light construction. However, the acoustic performance of such panels is less satisfactory, with the airborne sound transmission being considerably worse than the mass law for the equivalent simple panel. To compensate for this, vehicle manufacturers are forced to add treatments such as damping layers, absorptive layers and floating floors. Moreover, a model for extruded panels that is both simple and reliable is required to assist in the early stages of design. A statistical energy analysis (SEA) model to predict the vibroacoustic behaviour of aluminium extrusions is presented here. An extruded panel is represented by a single global mode subsystem and three subsystems representing local modes of the various strips which occur for frequencies typically above 500 Hz. An approximate model for the modal density of extruded panels is developed and this is verified using an FE model. The coupling between global and local modes is approximated with the coupling between a travelling global wave and uncorrelated local waves. This model enables the response difference across the panels to be predicted. For the coupling with air, the average radiation efficiency of a baffled extruded panel is modelled in terms of the contributions from global and local modes. Experimental studies of a sample extruded panel have also been carried out. The vibration of an extruded panel under mechanical excitation is measured for various force positions and the vibration distribution over the panel is obtained in detail. The radiation efficiencies of a free extruded panel have also been measured. The complete SEA model of a panel is finally used to predict the response of the extruded panel under mechanical and acoustic excitations. Especially for mechanical excitation, the proposed SEA model gives a good prediction compared with the measurement results.
Reis, H; Papadopoulos, M G; Grzybowski, A
2006-09-21
This is the second part of a study to elucidate the local field effects on the nonlinear optical properties of p-nitroaniline (pNA) in three solvents of different multipolar character, that is, cyclohexane (CH), 1,4-dioxane (DI), and tetrahydrofuran (THF), employing a discrete description of the solutions. By the use of liquid structure information from molecular dynamics simulations and molecular properties computed by high-level ab initio methods, the local field and local field gradients on p-nitroaniline and the solvent molecules are computed in quadrupolar approximation. To validate the simulations and the induction model, static and dynamic (non)linear properties of the pure solvents are also computed. With the exception of the static dielectric constant of pure THF, a good agreement between computed and experimental refractive indices, dielectric constants, and third harmonic generation signals is obtained for the solvents. For the solutions, it is found that multipole moments up to two orders higher than quadrupole have a negligible influence on the local fields on pNA, if a simple distribution model is employed for the electric properties of pNA. Quadrupole effects are found to be nonnegligible in all three solvents but are especially pronounced in the 1,4-dioxane solvent, in which the local fields are similar to those in THF, although the dielectric constant of DI is 2.2 and that of the simulated THF is 5.4. The electric-field-induced second harmonic generation (EFISH) signal and the hyper-Rayleigh scattering signal of pNA in the solutions computed with the local field are in good to fair agreement with available experimental results. This confirms the effect of the "dioxane anomaly" also on nonlinear optical properties. Predictions based on an ellipsoidal Onsager model as applied by experimentalists are in very good agreement with the discrete model predictions. This is in contrast to a recent discrete reaction field calculation of pNA in 1,4-dioxane, which found that the predicted first hyperpolarizability of pNA deviated strongly from the predictions obtained using Onsager-Lorentz local field factors.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Vassilakos, Gregory J.
2015-01-01
This report summarizes initial modeling of the local response of the Bigelow Expandable Activity Module (BEAM) to micrometeorite and orbital debris (MMOD) impacts using a structural, non-linear, transient dynamic finite element code. Complementary test results for a local BEAM structure are presented for both hammer and projectile impacts. Review of these data provided guidance for the transient dynamic model development. The local model is intended to support predictions using the global BEAM model, described in a companion report. Two types of local models were developed. One mimics the simplified Soft-Goods (fabric envelope) part of the BEAM NASTRAN model delivered by the project. The second investigates through-the-thickness modeling challenges for MMOD-type impacts. Both the testing and the analysis summaries contain lessons learned and areas for future efforts.
Progress towards a more predictive model for hohlraum radiation drive and symmetry
NASA Astrophysics Data System (ADS)
Jones, O. S.; Suter, L. J.; Scott, H. A.; Barrios, M. A.; Farmer, W. A.; Hansen, S. B.; Liedahl, D. A.; Mauche, C. W.; Moore, A. S.; Rosen, M. D.; Salmonson, J. D.; Strozzi, D. J.; Thomas, C. A.; Turnbull, D. P.
2017-05-01
For several years, we have been calculating the radiation drive in laser-heated gold hohlraums using flux-limited heat transport with a limiter of 0.15, tabulated values of local thermodynamic equilibrium gold opacity, and an approximate model for not in a local thermodynamic equilibrium (NLTE) gold emissivity (DCA_2010). This model has been successful in predicting the radiation drive in vacuum hohlraums, but for gas-filled hohlraums used to drive capsule implosions, the model consistently predicts too much drive and capsule bang times earlier than measured. In this work, we introduce a new model that brings the calculated bang time into better agreement with the measured bang time. The new model employs (1) a numerical grid that is fully converged in space, energy, and time, (2) a modified approximate NLTE model that includes more physics and is in better agreement with more detailed offline emissivity models, and (3) a reduced flux limiter value of 0.03. We applied this model to gas-filled hohlraum experiments using high density carbon and plastic ablator capsules that had hohlraum He fill gas densities ranging from 0.06 to 1.6 mg/cc and hohlraum diameters of 5.75 or 6.72 mm. The new model predicts bang times to within ±100 ps for most experiments with low to intermediate fill densities (up to 0.85 mg/cc). This model predicts higher temperatures in the plasma than the old model and also predicts that at higher gas fill densities, a significant amount of inner beam laser energy escapes the hohlraum through the opposite laser entrance hole.
Quantum decay model with exact explicit analytical solution
NASA Astrophysics Data System (ADS)
Marchewka, Avi; Granot, Er'El
2009-01-01
A simple decay model is introduced. The model comprises a point potential well, which experiences an abrupt change. Due to the temporal variation, the initial quantum state can either escape from the well or stay localized as a new bound state. The model allows for an exact analytical solution while having the necessary features of a decay process. The results show that the decay is never exponential, as classical dynamics predicts. Moreover, at short times the decay has a fractional power law, which differs from perturbation quantum method predictions. At long times the decay includes oscillations with an envelope that decays algebraically. This is a model where the final state can be either continuous or localized, and that has an exact analytical solution.
Parafoveal Target Detectability Reversal Predicted by Local Luminance and Contrast Gain Control
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)
1996-01-01
This project is part of a program to develop image discrimination models for the prediction of the detectability of objects in a range of backgrounds. We wanted to see if the models could predict parafoveal object detection as well as they predict detection in foveal vision. We also wanted to make our simplified models more general by local computation of luminance and contrast gain control. A signal image (0.78 x 0.17 deg) was made by subtracting a simulated airport runway scene background image (2.7 deg square) from the same scene containing an obstructing aircraft. Signal visibility contrast thresholds were measured in a fully crossed factorial design with three factors: eccentricity (0 deg or 4 deg), background (uniform or runway scene background), and fixed-pattern white noise contrast (0%, 5%, or 10%). Three experienced observers responded to three repetitions of 60 2IFC trials in each condition and thresholds were estimated by maximum likelihood probit analysis. In the fovea the average detection contrast threshold was 4 dB lower for the runway background than for the uniform background, but in the parafovea, the average threshold was 6 dB higher for the runway background than for the uniform background. This interaction was similar across the different noise levels and for all three observers. A likely reason for the runway background giving a lower threshold in the fovea is the low luminance near the signal in that scene. In our model, the local luminance computation is controlled by a spatial spread parameter. When this parameter and a corresponding parameter for the spatial spread of contrast gain were increased for the parafoveal predictions, the model predicts the interaction of background with eccentricity.
Modeling number of claims and prediction of total claim amount
NASA Astrophysics Data System (ADS)
Acar, Aslıhan Şentürk; Karabey, Uǧur
2017-07-01
In this study we focus on annual number of claims of a private health insurance data set which belongs to a local insurance company in Turkey. In addition to Poisson model and negative binomial model, zero-inflated Poisson model and zero-inflated negative binomial model are used to model the number of claims in order to take into account excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, predictive performances of candidate models are compared by using root mean square error (RMSE) and mean absolute error (MAE) criteria.
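The comparison of count models described above maps directly onto the statsmodels count-model classes. The sketch below fits the four candidate models to synthetic claim counts with excess zeros and compares RMSE and MAE of the predicted means; the covariates and data-generating process are invented, and convergence warnings are possible on toy data.

```python
# Hedged sketch of the model comparison described above, on synthetic claim data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 3000
X = sm.add_constant(rng.normal(size=(n, 2)))             # e.g. age and plan-type covariates
lam = np.exp(0.2 + 0.4 * X[:, 1])
y = np.where(rng.random(n) < 0.4, 0, rng.poisson(lam))   # counts with excess zeros

models = {
    "Poisson": sm.Poisson(y, X),
    "NegBin": sm.NegativeBinomial(y, X),
    "ZIP": sm.ZeroInflatedPoisson(y, X),
    "ZINB": sm.ZeroInflatedNegativeBinomialP(y, X),
}
for name, model in models.items():
    res = model.fit(disp=0)
    mu = res.predict()                                   # predicted mean number of claims
    rmse = np.sqrt(np.mean((y - mu) ** 2))
    mae = np.mean(np.abs(y - mu))
    print(f"{name:7s} RMSE = {rmse:.3f}  MAE = {mae:.3f}")
```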
NASA Astrophysics Data System (ADS)
Liang, Yunyun; Liu, Sanyang; Zhang, Shengli
2017-02-01
Apoptosis is a fundamental process controlling normal tissue homeostasis by regulating a balance between cell proliferation and death. Predicting the subcellular location of apoptosis proteins is very helpful for understanding the mechanism of programmed cell death. Prediction of apoptosis protein subcellular location is still a challenging and complicated task, and existing methods are mainly based on protein primary sequences. In this paper, we propose a new position-specific scoring matrix (PSSM)-based model by using the Geary autocorrelation function and the detrended cross-correlation coefficient (DCCA coefficient). Then a 270-dimensional (270D) feature vector is constructed on three widely used datasets: ZD98, ZW225 and CL317, and a support vector machine is adopted as the classifier. The overall prediction accuracies, evaluated by the rigorous jackknife test, are significantly improved. The results show that our model offers a reliable and effective PSSM-based tool for prediction of apoptosis protein subcellular localization.
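A rough sketch of PSSM-derived Geary autocorrelation features feeding an SVM is given below. The Geary formula shown is one common formulation of the descriptor, and the lag range, toy PSSMs, and labels are assumptions; the paper's full 270D feature vector (including the DCCA coefficient) is not reproduced.

```python
# Sketch only: one common Geary autocorrelation form on toy PSSM columns,
# not the paper's exact 270D descriptor set.
import numpy as np
from sklearn.svm import SVC

def geary_autocorrelation(x, d):
    """One common form of Geary's coefficient at lag d for a 1-D profile x."""
    x = np.asarray(x, float)
    n = len(x)
    num = (n - 1) * np.sum((x[:n - d] - x[d:]) ** 2)
    den = 2.0 * (n - d) * np.sum((x - x.mean()) ** 2)
    return num / den if den else 0.0

def pssm_features(pssm, max_lag=3):
    """Concatenate Geary coefficients per PSSM column over lags 1..max_lag."""
    return np.array([geary_autocorrelation(pssm[:, j], d)
                     for j in range(pssm.shape[1]) for d in range(1, max_lag + 1)])

rng = np.random.default_rng(6)
# toy "PSSMs": 40 proteins x (sequence length 120, 20 amino-acid columns)
X = np.array([pssm_features(rng.normal(size=(120, 20))) for _ in range(40)])
y = rng.integers(0, 4, 40)                     # toy subcellular-location labels
clf = SVC(kernel="rbf", C=10).fit(X, y)
print("training accuracy:", clf.score(X, y))
```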
Thogmartin, W.E.; Knutson, M.G.
2007-01-01
Much of what is known about avian species-habitat relations has been derived from studies of birds at local scales. It is entirely unclear whether the relations observed at these scales translate to the larger landscape in a predictable linear fashion. We derived habitat models and mapped predicted abundances for three forest bird species of eastern North America using bird counts, environmental variables, and hierarchical models applied at three spatial scales. Our purpose was to understand habitat associations at multiple spatial scales and create predictive abundance maps for purposes of conservation planning at a landscape scale given the constraint that the variables used in this exercise were derived from local-level studies. Our models indicated a substantial influence of landscape context for all species, many of which were counter to reported associations at finer spatial extents. We found land cover composition provided the greatest contribution to the relative explained variance in counts for all three species; spatial structure was second in importance. No single spatial scale dominated any model, indicating that these species are responding to factors at multiple spatial scales. For purposes of conservation planning, areas of predicted high abundance should be investigated to evaluate the conservation potential of the landscape in their general vicinity. In addition, the models and spatial patterns of abundance among species suggest locations where conservation actions may benefit more than one species. © 2006 Springer Science+Business Media B.V.
Models for nearly every occasion: Part III - One box decreasing emission models.
Hewett, Paul; Ganser, Gary H
2017-11-01
New one box "well-mixed room" decreasing emission (DE) models are introduced that allow for local exhaust or local exhaust with filtered return, as well as the recirculation of a filtered (or cleaned) portion of the general room ventilation. For each control device scenario, a steady state and transient model is presented. The transient equations predict the concentration at any time t after the application of a known mass of a volatile substance to a surface, and can be used to predict the task exposure profile, the average task exposure, as well as peak and short-term exposures. The steady state equations can be used to predict the "average concentration per application" that is reached whenever the substance is repeatedly applied. Whenever the beginning and end concentrations are expected to be zero (or near zero), the steady state equations can also be used to predict the average concentration for a single task with multiple applications during the task, or even a series of such tasks. The transient equations should be used whenever these criteria cannot be met. A structured calibration procedure is proposed that utilizes a mass balance approach. Depending upon the DE model selected, one or more calibration measurements are collected. Using rearranged versions of the steady state equations, estimates of the model variables (e.g., the mass of the substance applied during each application, the local exhaust capture efficiency, and the various cleaning or filtration efficiencies) can be calculated. A new procedure is proposed for estimating the emission rate constant.
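A generic one-box, decreasing-emission room model can illustrate the kind of transient calculation described above. The sketch below is not the authors' published equations: the mass balance, the exponential emission term G(t) = alpha*M0*exp(-alpha*t), the local-exhaust capture term, and every parameter value are assumptions for illustration.

```python
# Generic one-box, decreasing-emission sketch under stated assumptions;
# not the authors' published equations, and all parameter values are invented.
import numpy as np
from scipy.integrate import solve_ivp

V = 50.0        # room volume (m^3)
Q = 1.0         # general ventilation rate (m^3/min)
eps_le = 0.6    # local exhaust capture efficiency (fraction removed at the source)
M0 = 100.0      # mass of volatile substance applied at t = 0 (mg)
alpha = 0.1     # emission rate constant (1/min), so G(t) = alpha*M0*exp(-alpha*t)

def dcdt(t, c):
    """Well-mixed room mass balance with a decreasing emission source."""
    g = alpha * M0 * np.exp(-alpha * t)          # emission rate (mg/min)
    return [((1.0 - eps_le) * g - Q * c[0]) / V]

t_eval = np.linspace(0.0, 240.0, 241)            # follow the task for 4 hours
sol = solve_ivp(dcdt, (0.0, 240.0), [0.0], t_eval=t_eval)
conc = sol.y[0]                                  # concentration (mg/m^3)
print(f"peak concentration: {conc.max():.3f} mg/m^3")
print(f"task-average concentration: {conc.mean():.3f} mg/m^3")
```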
NASA Astrophysics Data System (ADS)
Psikuta, Agnes; Mert, Emel; Annaheim, Simon; Rossi, René M.
2018-02-01
To evaluate the quality of new energy-saving and performance-supporting building and urban settings, the thermal sensation and comfort models are often used. The accuracy of these models is related to accurate prediction of the human thermo-physiological response that, in turn, is highly sensitive to the local effect of clothing. This study aimed at the development of an empirical regression model of the air gap thickness and the contact area in clothing to accurately simulate human thermal and perceptual response. The statistical model predicted reliably both parameters for 14 body regions based on the clothing ease allowances. The effect of the standard error in air gap prediction on the thermo-physiological response was lower than the differences between healthy humans. It was demonstrated that currently used assumptions and methods for determination of the air gap thickness can produce a substantial error for all global, mean, and local physiological parameters, and hence, lead to false estimation of the resultant physiological state of the human body, thermal sensation, and comfort. Thus, this model may help researchers to strive for improvement of human thermal comfort, health, productivity, safety, and overall sense of well-being with simultaneous reduction of energy consumption and costs in built environment.
Chikenji, George; Fujitsuka, Yoshimi; Takada, Shoji
2006-02-28
Predicting protein tertiary structure by folding-like simulations is one of the most stringent tests of how much we understand the principle of protein folding. Currently, the most successful method for folding-based structure prediction is the fragment assembly (FA) method. Here, we address why the FA method is so successful and its lesson for the folding problem. To do so, using the FA method, we designed a structure prediction test of "chimera proteins." In the chimera proteins, local structural preference is specific to the target sequences, whereas nonlocal interactions are only sequence-independent compaction forces. We find that these chimera proteins can find the native folds of the intact sequences with high probability indicating dominant roles of the local interactions. We further explore roles of local structural preference by exact calculation of the HP lattice model of proteins. From these results, we suggest principles of protein folding: For small proteins, compact structures that are fully compatible with local structural preference are few, one of which is the native fold. These local biases shape up the funnel-like energy landscape.
Shaping up the protein folding funnel by local interaction: Lesson from a structure prediction study
Chikenji, George; Fujitsuka, Yoshimi; Takada, Shoji
2006-01-01
Predicting protein tertiary structure by folding-like simulations is one of the most stringent tests of how much we understand the principle of protein folding. Currently, the most successful method for folding-based structure prediction is the fragment assembly (FA) method. Here, we address why the FA method is so successful and its lesson for the folding problem. To do so, using the FA method, we designed a structure prediction test of “chimera proteins.” In the chimera proteins, local structural preference is specific to the target sequences, whereas nonlocal interactions are only sequence-independent compaction forces. We find that these chimera proteins can find the native folds of the intact sequences with high probability indicating dominant roles of the local interactions. We further explore roles of local structural preference by exact calculation of the HP lattice model of proteins. From these results, we suggest principles of protein folding: For small proteins, compact structures that are fully compatible with local structural preference are few, one of which is the native fold. These local biases shape up the funnel-like energy landscape. PMID:16488978
Instrument Landing System performance prediction
DOT National Transportation Integrated Search
1974-01-01
Further achievements made in fiscal year 1973 on the development : of an Instrument Landing System (ILS) performance prediction model : are reported. These include (ILS) localizer scattering from generalized : slanted rectangular, triangular and cyli...
Geomorphically based predictive mapping of soil thickness in upland watersheds
NASA Astrophysics Data System (ADS)
Pelletier, Jon D.; Rasmussen, Craig
2009-09-01
The hydrologic response of upland watersheds is strongly controlled by soil (regolith) thickness. Despite the need to quantify soil thickness for input into hydrologic models, there is currently no widely used, geomorphically based method for doing so. In this paper we describe and illustrate a new method for predictive mapping of soil thicknesses using high-resolution topographic data, numerical modeling, and field-based calibration. The model framework works directly with input digital elevation model data to predict soil thicknesses assuming a long-term balance between soil production and erosion. Erosion rates in the model are quantified using one of three geomorphically based sediment transport models: nonlinear slope-dependent transport, nonlinear area- and slope-dependent transport, and nonlinear depth- and slope-dependent transport. The model balances soil production and erosion locally to predict a family of solutions corresponding to a range of values of two unconstrained model parameters. A small number of field-based soil thickness measurements can then be used to calibrate the local value of those unconstrained parameters, thereby constraining which solution is applicable at a particular study site. As an illustration, the model is used to predictively map soil thicknesses in two small, ˜0.1 km2, drainage basins in the Marshall Gulch watershed, a semiarid drainage basin in the Santa Catalina Mountains of Pima County, Arizona. Field observations and calibration data indicate that the nonlinear depth- and slope-dependent sediment transport model is the most appropriate transport model for this site. The resulting framework provides a generally applicable, geomorphically based tool for predictive mapping of soil thickness using high-resolution topographic data sets.
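A deliberately reduced version of the production-erosion balance can convey the core idea. The sketch below inverts a depth-dependent soil production law against a prescribed local erosion rate; the exponential production function and its parameter values are common assumptions rather than the paper's calibrated values, and the full flux-divergence and calibration machinery is not reproduced.

```python
# Deliberately simplified local production-erosion balance; the paper solves
# the full flux-divergence problem, which is not reproduced here.
import numpy as np

P0 = 2e-4     # maximum soil production rate (m/yr), illustrative
h0 = 0.5      # production e-folding depth (m), illustrative

def steady_state_thickness(erosion_rate):
    """Invert P0*exp(-h/h0) = E for h; bare bedrock where erosion outpaces production."""
    E = np.asarray(erosion_rate, float)
    h = -h0 * np.log(E / P0)
    return np.where(E >= P0, 0.0, h)

# toy hillslope: erosion increasing toward the steep lower slope
erosion = np.linspace(2e-5, 3e-4, 6)
print(np.round(steady_state_thickness(erosion), 2))
```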
A compound reconstructed prediction model for nonstationary climate processes
NASA Astrophysics Data System (ADS)
Wang, Geli; Yang, Peicai
2005-07-01
Based on the idea of climate hierarchy and the theory of state space reconstruction, a local approximation prediction model with a compound structure is built for predicting nonstationary climate processes. By means of this model and the data sets consisting of north Indian Ocean sea-surface temperature, Asian zonal circulation index and monthly mean precipitation anomaly from 37 observation stations in the Inner Mongolia area of China (IMC), a regional prediction experiment for the winter precipitation of IMC is also carried out. When using the same sign ratio R between the prediction field and the actual field to measure the prediction accuracy, an average R of 63% over 10 prediction samples is reached.
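For reference, the same-sign ratio R used above as the accuracy measure is simply the fraction of stations where the predicted and observed anomalies agree in sign; a minimal helper is sketched below with invented values.

```python
# Minimal helper for the same-sign ratio R; example values are invented.
import numpy as np

def same_sign_ratio(predicted_anomaly, observed_anomaly):
    """Fraction of stations where predicted and observed anomalies share a sign."""
    p = np.sign(predicted_anomaly)
    o = np.sign(observed_anomaly)
    return float(np.mean(p == o))

print(same_sign_ratio([0.3, -0.1, 0.2, -0.4], [0.1, 0.2, 0.5, -0.3]))  # 0.75
```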
Kumagai, Naoki H; Yamano, Hiroya
2018-01-01
Coral reefs are one of the world's most threatened ecosystems, with global and local stressors contributing to their decline. Excessive sea-surface temperatures (SSTs) can cause coral bleaching, resulting in coral death and decreases in coral cover. A SST threshold of 1 °C over the climatological maximum is widely used to predict coral bleaching. In this study, we refined thermal indices predicting coral bleaching at high-spatial resolution (1 km) by statistically optimizing thermal thresholds, as well as considering other environmental influences on bleaching such as ultraviolet (UV) radiation, water turbidity, and cooling effects. We used a coral bleaching dataset derived from the web-based monitoring system Sango Map Project, at scales appropriate for the local and regional conservation of Japanese coral reefs. We recorded coral bleaching events in the years 2004-2016 in Japan. We revealed the influence of multiple factors on the ability to predict coral bleaching, including selection of thermal indices, statistical optimization of thermal thresholds, quantification of multiple environmental influences, and use of multiple modeling methods (generalized linear models and random forests). After optimization, differences in predictive ability among thermal indices were negligible. Thermal index, UV radiation, water turbidity, and cooling effects were important predictors of the occurrence of coral bleaching. Predictions based on the best model revealed that coral reefs in Japan have experienced recent and widespread bleaching. A practical method to reduce bleaching frequency by screening UV radiation was also demonstrated in this paper.
Yamano, Hiroya
2018-01-01
Coral reefs are one of the world’s most threatened ecosystems, with global and local stressors contributing to their decline. Excessive sea-surface temperatures (SSTs) can cause coral bleaching, resulting in coral death and decreases in coral cover. A SST threshold of 1 °C over the climatological maximum is widely used to predict coral bleaching. In this study, we refined thermal indices predicting coral bleaching at high-spatial resolution (1 km) by statistically optimizing thermal thresholds, as well as considering other environmental influences on bleaching such as ultraviolet (UV) radiation, water turbidity, and cooling effects. We used a coral bleaching dataset derived from the web-based monitoring system Sango Map Project, at scales appropriate for the local and regional conservation of Japanese coral reefs. We recorded coral bleaching events in the years 2004–2016 in Japan. We revealed the influence of multiple factors on the ability to predict coral bleaching, including selection of thermal indices, statistical optimization of thermal thresholds, quantification of multiple environmental influences, and use of multiple modeling methods (generalized linear models and random forests). After optimization, differences in predictive ability among thermal indices were negligible. Thermal index, UV radiation, water turbidity, and cooling effects were important predictors of the occurrence of coral bleaching. Predictions based on the best model revealed that coral reefs in Japan have experienced recent and widespread bleaching. A practical method to reduce bleaching frequency by screening UV radiation was also demonstrated in this paper. PMID:29473007
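The statistical optimization of a thermal threshold can be sketched with a simple grid search. In the example below, the SST-anomaly and UV data are synthetic, and balanced accuracy is an assumed skill metric standing in for whatever criterion the authors optimized.

```python
# Toy sketch of statistically optimizing a bleaching threshold on SST anomaly;
# data are synthetic and the skill metric is an assumption.
import numpy as np
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(7)
n = 500
sst_anomaly = rng.normal(0.8, 0.8, n)            # °C above the climatological maximum
uv = rng.normal(0, 1, n)                         # standardized UV radiation
# synthetic bleaching outcome driven by both heat stress and UV
p = 1 / (1 + np.exp(-(2.0 * (sst_anomaly - 1.2) + 0.8 * uv)))
bleached = rng.random(n) < p

best = max(
    ((thr, balanced_accuracy_score(bleached, sst_anomaly > thr))
     for thr in np.arange(0.0, 2.01, 0.05)),
    key=lambda t: t[1],
)
print(f"optimized threshold: {best[0]:.2f} °C (balanced accuracy {best[1]:.2f})")
```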
Comprehensive analytical model for locally contacted rear surface passivated solar cells
NASA Astrophysics Data System (ADS)
Wolf, Andreas; Biro, Daniel; Nekarda, Jan; Stumpp, Stefan; Kimmerle, Achim; Mack, Sebastian; Preu, Ralf
2010-12-01
For optimum performance of solar cells featuring a locally contacted rear surface, the metallization fraction as well as the size and distribution of the local contacts are crucial, since Ohmic and recombination losses have to be balanced. In this work we present a set of equations which make it possible to calculate this trade-off without the need for numerical simulations. Our model combines established analytical and empirical equations to predict the energy conversion efficiency of a locally contacted device. For experimental verification, we fabricate devices from float zone silicon wafers of different resistivity using the laser fired contact technology for forming the local rear contacts. The detailed characterization of test structures enables the determination of important physical parameters, such as the surface recombination velocity at the contacted area and the spreading resistance of the contacts. Our analytical model reproduces the experimental results very well and correctly predicts the optimum contact spacing without the use of free fitting parameters. We use our model to estimate the optimum bulk resistivity for locally contacted devices fabricated from conventional Czochralski-grown silicon material. These calculations use literature values for the stable minority carrier lifetime to account for the bulk recombination caused by the formation of boron-oxygen complexes under carrier injection.
NASA Astrophysics Data System (ADS)
Totani, T.; Takeuchi, T. T.
2001-12-01
A new model of infrared galaxy counts and the cosmic background radiation (CBR) is developed by extending a model for optical/near-infrared galaxies. Important new characteristics of this model are that mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies, and that the big-grain dust temperature T_dust is calculated based on a physical consideration for energy balance, rather than using the empirical relation between T_dust and the total infrared luminosity L_IR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, the L_IR-T_dust correlation, and the infrared luminosity function, are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. Then we make predictions for faint infrared counts (in 15, 60, 90, 170, 450, and 850 μm) and CBR by this model. We found considerably different results from most previous works based on the empirical L_IR-T_dust relation; especially, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K). This indicates that intense starbursts of forming elliptical galaxies should have occurred at z ~ 2-3, in contrast to the previous results that significant starbursts beyond z ~ 1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The authors thank the financial support by the Japan Society for Promotion of Science.
Zhao, Meng; Ding, Baocang
2015-03-01
This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of the centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed and assigned to each local controller. The communication load between each subsystem and its neighbors is low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Global/local methods for probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.
1993-01-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined than the global model in terms of finer mesh, smaller time step, tighter tolerances, etc., and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program, with the finite element method used for the structural modeling. The results clearly indicate significant computational savings with minimal loss in accuracy.
Ziegler, G; Ridgway, G R; Dahnke, R; Gaser, C
2014-08-15
Structural imaging based on MRI is an integral component of the clinical assessment of patients with potential dementia. We here propose an individualized Gaussian process-based inference scheme for clinical decision support in healthy and pathologically aging elderly subjects using MRI. The approach aims at quantitative and transparent support for clinicians who aim to detect structural abnormalities in patients at risk of Alzheimer's disease or other types of dementia. Firstly, we introduce a generative model incorporating our knowledge about the normative decline of local and global gray matter volume across the brain in the elderly. By assuming smooth structural trajectories, the models account for the general course of age-related structural decline as well as late-life accelerated loss. Treating healthy subjects' demography and global brain parameters as informative about the variability of normal brain aging affords individualized predictions in single cases. Using Gaussian process models as a normative reference, we predict new subjects' brain scans and quantify the local gray matter abnormalities in terms of Normative Probability Maps (NPM) and global z-scores. By integrating the observed deviation from the model expectation and the predictive uncertainty, the local maps and global scores exploit the advantages of Bayesian inference for clinical decisions and provide a valuable extension of diagnostic information about pathological aging. We validate the approach on simulated data and real MRI data. We train the GP framework using 1238 healthy subjects aged 18-94 years, and predict in 415 independent test subjects diagnosed as healthy controls, Mild Cognitive Impairment and Alzheimer's disease. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
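A minimal sketch of the normative-modelling idea, assuming scikit-learn and synthetic healthy-control data: a Gaussian process regression of regional gray matter volume on age yields a predictive mean and uncertainty, and a new subject is scored by the standardized deviation from that prediction. This is not the authors' implementation (which also conditions on demographic and global brain covariates); it shows only the core z-score mechanic.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Synthetic healthy-control data: age (years) -> regional gray matter volume.
    age = rng.uniform(18, 94, 300)[:, None]
    gm = 0.8 - 0.002 * (age[:, 0] - 18) + rng.normal(0, 0.03, 300)

    # Smooth trajectory plus observation noise, echoing the normative-model idea.
    kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=1e-3)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(age, gm)

    def z_score(subject_age, subject_gm):
        """Standardized deviation of an individual from the normative prediction."""
        mu, sd = gpr.predict(np.array([[subject_age]]), return_std=True)
        return (subject_gm - mu[0]) / sd[0]

    print(f"z = {z_score(75.0, 0.60):.2f}")   # negative z -> below the normative range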
Predictive modelling of contagious deforestation in the Brazilian Amazon.
Rosa, Isabel M D; Purves, Drew; Souza, Carlos; Ewers, Robert M
2013-01-01
Tropical forests are diminishing in extent due primarily to the rapid expansion of agriculture, but the magnitude and geographical distribution of future tropical deforestation are uncertain. Here, we introduce a dynamic and spatially explicit model of deforestation that predicts the potential magnitude and spatial pattern of Amazon deforestation. Our model differs from previous models in three ways: (1) it is probabilistic and quantifies uncertainty around predictions and parameters; (2) the overall deforestation rate emerges "bottom up", as the sum of local-scale deforestation driven by local processes; and (3) deforestation is contagious, such that the local deforestation rate increases through time if adjacent locations are deforested. For the scenarios evaluated, pre- and post-PPCDAM ("Plano de Ação para Proteção e Controle do Desmatamento na Amazônia"), the parameter estimates confirmed that forests near roads and already deforested areas are significantly more likely to be deforested in the near future, and less likely in protected areas. Validation tests showed that our model correctly predicted the magnitude and spatial pattern of deforestation that accumulates over time, but that there is very high uncertainty surrounding the exact sequence in which pixels are deforested. The model predicts that under pre-PPCDAM conditions (assuming no change in parameter values due to, for example, changes in government policy), annual deforestation rates would halve by 2050 compared with 2002, although this partly reflects reliance on a static map of the road network. Consistent with other models, under the pre-PPCDAM scenario, states in the south and east of the Brazilian Amazon have a high predicted probability of losing nearly all forest outside of protected areas by 2050. This pattern is less strong in the post-PPCDAM scenario. Contagious spread along roads and through areas lacking formal protection could allow deforestation to reach the core, which currently experiences low deforestation rates due to its isolation.
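The contagion mechanism can be illustrated with a toy cellular simulation in which the probability that a forested cell is cleared grows with the number of already-deforested neighbours and with proximity to a hypothetical, static road corridor. Coefficients and grid are invented for illustration; this is not the calibrated probabilistic model of the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    N, STEPS = 100, 20
    forest = np.ones((N, N), dtype=bool)          # True = forested
    near_road = np.zeros((N, N), dtype=bool)
    near_road[:, 48:52] = True                    # hypothetical static road corridor

    BASE, B_NEIGH, B_ROAD = 0.001, 0.03, 0.02     # illustrative coefficients

    def deforested_neighbours(f):
        """Count deforested cells among the 4-neighbourhood of every cell."""
        d = ~f
        n = np.zeros_like(d, dtype=int)
        n[1:, :] += d[:-1, :]; n[:-1, :] += d[1:, :]
        n[:, 1:] += d[:, :-1]; n[:, :-1] += d[:, 1:]
        return n

    for _ in range(STEPS):
        p = BASE + B_NEIGH * deforested_neighbours(forest) + B_ROAD * near_road
        cleared = forest & (rng.random((N, N)) < np.clip(p, 0, 1))
        forest &= ~cleared                        # contagion: clearing spreads locally

    print(f"forest remaining after {STEPS} steps: {forest.mean():.1%}")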
NASA Astrophysics Data System (ADS)
Hong, Y.; Kirschbaum, D. B.; Fukuoka, H.
2011-12-01
The key to advancing the predictability of rainfall-triggered landslides is to use physically based slope-stability models that simulate the dynamic response of subsurface moisture to the spatiotemporal variability of rainfall in complex terrain. An early warning system applying such physical models has been developed to predict rainfall-induced shallow landslides over Java Island in Indonesia and in Honduras. The prototype early warning system integrates three major components: (1) a susceptibility mapping or hotspot identification component based on a land surface geospatial database (topographical information, maps of soil properties, a local landslide inventory, etc.); (2) a satellite-based precipitation monitoring system (http://trmm.gsfc.nasa.gov) and a precipitation forecasting model (i.e., Weather Research and Forecasting); and (3) a physically based, rainfall-induced landslide prediction model, SLIDE (SLope-Infiltration-Distributed Equilibrium). The system utilizes the modified physical model to calculate a Factor of Safety (FS) that accounts for the contribution of rainfall infiltration and partial saturation to the shear strength of the soil in topographically complex terrain. The system's prediction performance has been evaluated using a local landslide inventory. In Java Island, Indonesia, evaluation of the SLIDE modeling results against local news reports shows that the system successfully predicted landslides at the time of occurrence of the real landslide events. A further study of SLIDE was carried out in Honduras, where Hurricane Mitch triggered widespread landslides in 1998. Results show that, within the approximately 1,200 square kilometer study areas, hit rates reached as high as 78% and 75%, while the error indices were 35% and 49%. Despite positive model performance, the SLIDE model is limited in the early warning system by several assumptions, including the use of general parameter calibration rather than in situ tests and the neglect of geologic information. Advantages and limitations of this model will be discussed with respect to future applications of landslide assessment and prediction over large scales. In conclusion, the integration of spatially distributed remote sensing precipitation products, in situ datasets, and physical models in this prototype system enables us to further develop a regional early warning tool for forecasting storm-induced landslides.
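The factor-of-safety idea behind models such as SLIDE can be sketched with the textbook infinite-slope expression, in which a pore-pressure term driven by soil wetness erodes the frictional resistance. The formula and parameter values below are a generic stability calculation, not the modified FS equation used in the SLIDE model.

    import math

    def factor_of_safety(slope_deg, cohesion, soil_depth, wetness,
                         phi_deg=30.0, gamma_s=18.0, gamma_w=9.81):
        """Textbook infinite-slope FS with a pore-pressure (wetness) term.
        slope_deg: slope angle; cohesion: kPa; soil_depth: m;
        wetness: saturated fraction of the soil column (0..1)."""
        beta = math.radians(slope_deg)
        phi = math.radians(phi_deg)
        shear_stress = gamma_s * soil_depth * math.sin(beta) * math.cos(beta)
        normal_stress = gamma_s * soil_depth * math.cos(beta) ** 2
        pore_pressure = gamma_w * wetness * soil_depth * math.cos(beta) ** 2
        resisting = cohesion + (normal_stress - pore_pressure) * math.tan(phi)
        return resisting / shear_stress

    # FS drops toward 1 (the failure threshold) as rainfall saturates the soil column.
    for w in (0.2, 0.6, 1.0):
        print(f"wetness {w:.1f} -> FS = {factor_of_safety(35, 5.0, 1.5, w):.2f}")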
Space-Time Urban Air Pollution Forecasts
NASA Astrophysics Data System (ADS)
Russo, A.; Trigo, R. M.; Soares, A.
2012-04-01
Air pollution, like other natural phenomena, may be considered a space-time process. However, the simultaneous integration of time and space is not an easy task, due to different levels of uncertainty and data characteristics. In this work we propose a hybrid method that combines geostatistical and neural models to analyze PM10 time series recorded in the urban area of Lisbon (Portugal) for the 2002-2006 period and to produce forecasts. Geostatistical models have been widely used to characterize air pollution in urban areas, where the pollutant sources are considered diffuse, and also in industrial areas with localized emission sources. It should be stressed, however, that most geostatistical models correspond basically to an interpolation methodology (estimation, simulation) for a set of variables in a spatial or space-time domain. The temporal prediction of a pollutant usually requires knowledge of the main trends and complex patterns of the physical dispersion phenomenon. To deal with low-resolution problems and to enhance the reliability of predictions, an approach is presented here in which neural network short-term predictions at the monitoring stations act as local conditioners for a fine-grid stochastic simulation model. After the pollutant concentration is predicted for a given time period at the monitoring stations, we can use the local conditional distributions of observed values, given the predicted value for that period, to perform spatial simulations for the entire area and consequently evaluate the spatial uncertainty of the pollutant concentration. To attain this objective, we propose the use of direct sequential simulation with local distributions. With this approach one succeeds in predicting the space-time distribution of pollutant concentration in a way that accounts for the time-prediction uncertainty (reflecting the neural networks' efficiency at each local monitoring station) and the spatial uncertainty as revealed by the spatial variograms. The dataset used consists of PM10 concentrations recorded hourly by 12 monitoring stations within the Lisbon area for the period 2002-2006. In addition, meteorological data recorded at 3 monitoring stations and daily boundary layer height (BLH) values from the ECMWF (European Centre for Medium-Range Weather Forecasts) ERA-Interim reanalysis were also used. Based on the large-scale standard pressure fields from ERA40/ECMWF, prevailing circulation patterns at the regional scale were determined and used in the construction of the models. After the daily forecasts were produced, the differences between the average maps based on real observations and on predicted values were determined and the model's performance was assessed. Based on the analysis of the results, we conclude that the proposed approach is a very promising alternative for urban air quality characterization because of its good results and simplicity of application.
Solvable Hydrodynamics of Quantum Integrable Systems
NASA Astrophysics Data System (ADS)
Bulchandani, Vir B.; Vasseur, Romain; Karrasch, Christoph; Moore, Joel E.
2017-12-01
The conventional theory of hydrodynamics describes the evolution in time of chaotic many-particle systems from local to global equilibrium. In a quantum integrable system, local equilibrium is characterized by a local generalized Gibbs ensemble or equivalently a local distribution of pseudomomenta. We study time evolution from local equilibria in such models by solving a certain kinetic equation, the "Bethe-Boltzmann" equation satisfied by the local pseudomomentum density. Explicit comparison with density matrix renormalization group time evolution of a thermal expansion in the XXZ model shows that hydrodynamical predictions from smooth initial conditions can be remarkably accurate, even for small system sizes. Solutions are also obtained in the Lieb-Liniger model for free expansion into vacuum and collisions between clouds of particles, which model experiments on ultracold one-dimensional Bose gases.
Shimotohno, Akie; Sotta, Naoyuki; Sato, Takafumi; De Ruvo, Micol; Marée, Athanasius F M; Grieneisen, Verônica A; Fujiwara, Toru
2015-04-01
Boron, an essential micronutrient, is transported in roots of Arabidopsis thaliana mainly by two different types of transporters, BORs and NIPs (nodulin26-like intrinsic proteins). Both are plasma membrane localized, but have distinct transport properties and patterns of cell type-specific accumulation with different polar localizations, which are likely to affect boron distribution. Here, we used mathematical modeling and experimental determination to address boron distributions in the root. A computational model of the root is created at the cellular level, describing the boron transporters as observed experimentally. Boron is allowed to diffuse into roots, in cells and cell walls, and to be transported over plasma membranes, reflecting the properties of the different transporters. The model predicts that a region around the quiescent center has a higher concentration of soluble boron than other portions. To evaluate this prediction experimentally, we determined the boron distribution in roots using laser ablation-inductively coupled plasma-mass spectrometry. The analysis indicated that the boron concentration is highest near the tip and is lower in the more proximal region of the meristem zone, similar to the pattern of soluble boron distribution predicted by the model. Our model also predicts that upward boron flux does not continuously increase from the root tip toward the mature region, indicating that boron taken up at the root tip is not efficiently transported to shoots. This suggests that root tip-absorbed boron is probably used for local root growth, and that instead it is the more mature root regions which have a greater role in transporting boron toward the shoots. © The Author 2015. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists.
Cottrell, Gilles; Kouwaye, Bienvenue; Pierrat, Charlotte; le Port, Agnès; Bouraïma, Aziz; Fonton, Noël; Hounkonnou, Mahouton Norbert; Massougbodji, Achille; Corbel, Vincent; Garcia, André
2012-01-01
Malaria remains endemic in tropical areas, especially in Africa. For the evaluation of new tools and to further our understanding of host-parasite interactions, knowing the environmental risk of transmission, even at a very local scale, is essential. The aim of this study was to assess how malaria transmission is influenced and can be predicted by local climatic and environmental factors. As the entomological part of a cohort study of 650 newborn babies in nine villages in the Tori Bossito district of Southern Benin between June 2007 and February 2010, human landing catches were performed to assess the density of malaria vectors and transmission intensity. Climatic factors as well as household characteristics were recorded throughout the study. Statistical correlations between Anopheles density and environmental and climatic factors were tested using a three-level Poisson mixed regression model. The results showed both temporal variations in vector density (related to season and rainfall) and spatial variations at the level of both village and house. These spatial variations could be largely explained by factors associated with the house's immediate surroundings, namely soil type, vegetation index and the proximity of a watercourse. Based on these results, a predictive regression model was developed using a leave-one-out method to predict the spatiotemporal variability of malaria transmission in the nine villages. This study highlights the importance of local environmental factors in malaria transmission and describes a model to predict the transmission risk of individual children, based on environmental and behavioral characteristics.
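A stripped-down version of the count-regression stage might look like the following Poisson GLM of Anopheles catches on climatic and house-surrounding covariates (statsmodels, synthetic data). It omits the three-level random-effects structure of the study's mixed model, and the covariate names and coefficients are invented.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 400
    df = pd.DataFrame({
        "rainfall": rng.gamma(2.0, 20.0, n),                 # mm over preceding weeks
        "ndvi": rng.uniform(0.2, 0.8, n),                    # vegetation index
        "near_water": rng.integers(0, 2, n),                 # watercourse nearby
    })
    lam = np.exp(-1.0 + 0.01 * df.rainfall + 1.5 * df.ndvi + 0.8 * df.near_water)
    df["anopheles"] = rng.poisson(lam)

    # Fixed-effects Poisson GLM; the study used a three-level mixed model instead.
    model = smf.glm("anopheles ~ rainfall + ndvi + near_water",
                    data=df, family=sm.families.Poisson()).fit()
    print(model.summary().tables[1])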
Semi-supervised protein subcellular localization.
Xu, Qian; Hu, Derek Hao; Xue, Hong; Yu, Weichuan; Yang, Qiang
2009-01-30
Protein subcellular localization is concerned with predicting the location of a protein within a cell using computational methods. The location information can indicate key functionalities of proteins. Accurate prediction of the subcellular localization of a protein can aid the prediction of protein function and genome annotation, as well as the identification of drug targets. Computational methods based on machine learning, such as support vector machine approaches, have already been widely used in the prediction of protein subcellular localization. However, a major drawback of these machine learning-based approaches is that a large amount of data must be labeled in order to let the prediction system learn a classifier with good generalization ability. In real-world cases, though, it is laborious, expensive and time-consuming to experimentally determine the subcellular localization of a protein and prepare instances of labeled data. In this paper, we present an approach based on a new learning framework, semi-supervised learning, which can use far fewer labeled instances to construct a high-quality prediction model. We first construct an initial classifier using a small set of labeled examples, and then use unlabeled instances to refine the classifier for future predictions. Experimental results show that our method can effectively reduce the labeling workload by exploiting unlabeled data. Our method is shown to enhance the state-of-the-art prediction results of SVM classifiers by more than 10%.
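The semi-supervised idea can be sketched with scikit-learn's generic self-training wrapper around an SVM: a small labeled set seeds the classifier, and confidently predicted unlabeled instances are added iteratively. The data are synthetic and the wrapper is a stand-in, not the specific algorithm proposed in the paper.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.semi_supervised import SelfTrainingClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    X, y = make_classification(n_samples=1000, n_features=20, random_state=3)

    # Pretend only 5% of the localizations were determined experimentally.
    y_semi = y.copy()
    unlabeled = rng.random(len(y)) > 0.05
    y_semi[unlabeled] = -1                      # -1 marks unlabeled proteins

    base = SVC(probability=True, kernel="rbf")  # probabilities drive the self-training
    clf = SelfTrainingClassifier(base, threshold=0.9).fit(X, y_semi)

    print(f"accuracy on the held-back labels: {clf.score(X[unlabeled], y[unlabeled]):.3f}")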
Pan, Xiaoyong; Shen, Hong-Bin
2018-05-02
RNA-binding proteins (RBPs) account for 5-10% of the eukaryotic proteome and play key roles in many biological processes, e.g. gene regulation. Experimental detection of RBP binding sites is still time-intensive and costly. Instead, computational prediction of RBP binding sites using patterns learned from existing annotation knowledge is a fast alternative. From the biological point of view, the local structure context derived from local sequences is what is recognized by specific RBPs. However, in computational modeling using deep learning, to the best of our knowledge, only global representations of entire RNA sequences have been employed; the local sequence information has so far been ignored in deep model construction. In this study, we present a computational method, iDeepE, to predict RNA-protein binding sites from RNA sequences by combining global and local convolutional neural networks (CNNs). For the global CNN, we pad the RNA sequences to the same length. For the local CNN, we split an RNA sequence into multiple overlapping fixed-length subsequences, where each subsequence is a signal channel of the whole sequence. Next, we train deep CNNs for the multiple subsequences and the padded sequences to learn high-level features, respectively. Finally, the outputs from the local and global CNNs are combined to improve the prediction. iDeepE demonstrates better performance than state-of-the-art methods on two large-scale datasets derived from CLIP-seq. We also find that the local CNN runs 1.8 times faster than the global CNN, with comparable performance, when using GPUs. Our results show that iDeepE has captured experimentally verified binding motifs. https://github.com/xypan1232/iDeepE. xypan172436@gmail.com or hbshen@sjtu.edu.cn. Supplementary data are available at Bioinformatics online.
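The windowing step that feeds a local CNN can be sketched as follows: an RNA sequence is cut into overlapping fixed-length subsequences, each one-hot encoded as a separate channel. The window and stride values here are arbitrary choices, not the settings used by iDeepE.

    import numpy as np

    BASES = "ACGU"

    def one_hot(seq):
        """(len, 4) one-hot encoding; unknown bases become all-zero rows."""
        idx = {b: i for i, b in enumerate(BASES)}
        out = np.zeros((len(seq), 4), dtype=np.float32)
        for i, b in enumerate(seq):
            if b in idx:
                out[i, idx[b]] = 1.0
        return out

    def local_subsequences(seq, window=101, stride=20):
        """Overlapping fixed-length windows; short windows are padded with 'N'."""
        starts = range(0, max(len(seq) - window, 0) + 1, stride)
        subs = [seq[s:s + window].ljust(window, "N") for s in starts]
        return np.stack([one_hot(s) for s in subs])   # (n_windows, window, 4)

    channels = local_subsequences("ACGU" * 60)
    print(channels.shape)   # each window becomes one input channel for the local CNN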
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H; O'Donnell, Cian; Sejnowski, Terrence J; O'Leary, Timothy
2016-01-01
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’ (Doyle and Kiebler, 2011). Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. These findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons. DOI: http://dx.doi.org/10.7554/eLife.20556.001 PMID:28034367
Modelling of a spread of hazardous substances in a Floreon+ system
NASA Astrophysics Data System (ADS)
Ronovsky, Ales; Brzobohaty, Tomas; Kuchar, Stepan; Vojtek, David
2017-07-01
This paper is focused on a module for automated numerical modelling of the spread of hazardous substances, developed for the Floreon+ system at the request of the Fire Brigade of the Moravian-Silesian Region. The main purpose of the module is to provide more accurate predictions for smog situations, which are a frequent problem in the region. It can be operated by non-scientific users through the Floreon+ client and can be used as a short-term prediction model for the evolution of concentrations of dangerous substances (SO2, PMx) from stationary sources, such as heavy industry factories, local furnaces or highways, or as a fast prediction of the spread of hazardous substances in the event of a crash of a mobile contamination source (transport of dangerous substances) or a leakage at a local chemical factory. The process of automatic gathering of atmospheric data, the connection of the Floreon+ system to the HPC infrastructure necessary for computing such a model, and the model itself are described below.
NASA Astrophysics Data System (ADS)
Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine
2018-01-01
Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions which are based on a statistical link, identified from observations, between local weather and a set of large-scale predictors. As the physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation, for instance, and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are then used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the values of the corresponding regression coefficients can vary from one prediction day to another. The model thus allows for day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for unusual and infrequent weather configurations.
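A minimal sketch of the two-stage analog/regression idea on synthetic data: the k nearest analog days are selected by Euclidean distance in the standardized large-scale fields, and a logistic occurrence model is then fitted on those analog days only. The real model uses separate GLMs for occurrence and non-zero amounts and proper geopotential predictors; everything below is illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    n_days, n_grid = 3000, 50                     # archive length, size of predictor field
    Z = rng.normal(size=(n_days, n_grid))         # stand-in for geopotential-height fields
    covars = rng.normal(size=(n_days, 3))         # e.g. humidity, vertical velocity, ...
    occ = (rng.random(n_days) < 1 / (1 + np.exp(-covars[:, 0]))).astype(int)

    scaler = StandardScaler().fit(Z)
    Zs = scaler.transform(Z)

    def predict_occurrence(z_new, covars_new, k=200):
        """Stage 1: pick the k closest analog days in the large-scale fields.
        Stage 2: fit a logistic occurrence model on those analog days only."""
        d = np.linalg.norm(Zs - scaler.transform(z_new[None]), axis=1)
        analogs = np.argsort(d)[:k]
        glm = LogisticRegression().fit(covars[analogs], occ[analogs])
        return glm.predict_proba(covars_new[None])[0, 1]

    z_today, cov_today = rng.normal(size=n_grid), rng.normal(size=3)
    print(f"P(precipitation occurrence) = {predict_occurrence(z_today, cov_today):.2f}")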
Non-local damage rheology and size effect
NASA Astrophysics Data System (ADS)
Lyakhovsky, V.
2011-12-01
We study scaling relations controlling the onset of transiently accelerating fracturing and the transition to dynamic rupture propagation in a non-local damage rheology model. The size effect is caused principally by growth of a fracture process zone, involving stress redistribution and energy release associated with a large fracture. This implies that rupture nucleation and the transition to dynamic propagation are inherently scale-dependent processes. Linear elastic fracture mechanics (LEFM) and local damage mechanics are formulated in terms of dimensionless strain components and thus do not allow any spatial scaling to be introduced, except linear relations between fracture length and displacements. A generalization of Weibull theory provides scaling relations between stress and crack length at the onset of failure. A powerful extension of the LEFM formulation is the displacement-weakening model, which postulates that yielding is complete when the crack wall displacement exceeds some critical value, the slip-weakening distance Dc, at which the transition to kinetic friction is complete. Scaling relations controlling the transition to dynamic rupture propagation in the slip-weakening formulation are widely accepted in earthquake physics. Strong micro-crack interaction in a process zone may be accounted for by adopting either integral- or gradient-type non-local damage models. We formulate a gradient-type model with free energy depending on the scalar damage parameter and its spatial derivative. The damage-gradient term leads to structural stresses in the constitutive stress-strain relations and a damage diffusion term in the kinetic equation for damage evolution. The damage diffusion eliminates the singular localization predicted by local models. The finite width of the localization zone provides a fundamental length scale that allows numerical simulations with the model to reach the continuum limit. The diffusive term in the damage evolution gives rise to an additional damage-diffusion time scale associated with the structural length scale. The ratio between the two time scales associated with damage accumulation and diffusion, the damage diffusivity ratio, reflects the role of diffusion-controlled delocalization. We demonstrate that localized fracturing occurs when the damage diffusivity ratio is below a certain critical value, leading to a linear scaling between stress and crack length compatible with the size effect for failures at crack initiation. Subsequent quasi-static fracture growth is self-similar, with the size of the process zone increasing in proportion to the fracture length. At a certain stage, controlled by dynamic weakening, the self-similarity breaks down and the crack velocity deviates significantly from that predicted by the quasi-static regime, the size of the process zone decreases, and the rate of crack growth ceases to be controlled by the rate of damage increase. Furthermore, the crack speed approaches that predicted by the elastodynamic equation. The non-local damage rheology model predicts that the nucleation size of the dynamic fracture scales with the fault zone thickness and the stress interaction distance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, Mario, E-mail: mgsantoss@gmail.com; Freitas, Raul, E-mail: raulfreitas@portugalmail.com; Crespi, Antonio L., E-mail: aluis.crespi@gmail.com
2011-10-15
This study assesses the potential of an integrated methodology for predicting local trends in invasive exotic plant species (invasive richness) using indirect, regional information on human disturbance. The distribution of invasive plants was assessed in North Portugal using herbarium collections and local environmental, geophysical and socio-economic characteristics. The invasive richness response to anthropogenic disturbance was predicted using a dynamic model based on a sequential modeling process (stochastic dynamic methodology, StDM). The derived scenarios showed that invasive richness trends were clearly associated with ongoing socio-economic change. Simulations including scenarios of growing urbanization showed an increase in invasive richness, while simulations for municipalities with decreasing populations showed stable or decreasing levels of invasive richness. The model simulations demonstrate the interest and feasibility of using this methodology in disturbance ecology. Highlights: Socio-economic data indicate human-induced disturbances. Socio-economic development increases disturbance in ecosystems. Disturbance promotes opportunities for invasive plants. Increased opportunities promote richness of invasive plants. An increase in richness of invasive plants changes natural ecosystems.
Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme
2015-01-01
The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses was compared using a multispecies dataset. The most reliable BMP prediction models were those based on the near-infrared (NIR) spectrum rather than those based on chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate the BMP quantitatively, rapidly, cheaply and easily. Such a model could further be used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable than those of linear models. The presentation form of the biomasses to the NIR spectrometer (green-dried, silage-dried or silage-wet) did not influence the performance of the NIR prediction models. The accuracy of the BMP method should be improved to further enhance the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
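For the linear baseline mentioned in the abstract, a PLS regression of BMP on NIR spectra might be sketched as follows with scikit-learn and synthetic spectra; the better-performing local and non-linear calibrations are not reproduced here, and the spectral and BMP values are invented.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(5)
    n_samples, n_bands = 120, 700
    spectra = np.cumsum(rng.normal(size=(n_samples, n_bands)), axis=1)  # smooth NIR-like curves
    bmp = spectra[:, 150] * 0.4 - spectra[:, 520] * 0.25 + rng.normal(0, 2.0, n_samples)

    pls = PLSRegression(n_components=8)
    pred = cross_val_predict(pls, spectra, bmp, cv=10).ravel()
    rmsecv = np.sqrt(np.mean((pred - bmp) ** 2))
    print(f"RMSE of cross-validation: {rmsecv:.2f} (units of the BMP reference method)")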
Multi-Scale Computational Modeling of Two-Phased Metal Using GMC Method
NASA Technical Reports Server (NTRS)
Moghaddam, Masoud Ghorbani; Achuthan, A.; Bednacyk, B. A.; Arnold, S. M.; Pineda, E. J.
2014-01-01
A multi-scale computational model for determining the plastic behavior of two-phase CMSX-4 Ni-based superalloys is developed within a finite element analysis (FEA) framework, employing a crystal plasticity constitutive model that can capture the microstructural-scale stress field. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. First, stand-alone GMC is validated by analyzing a repeating unit cell (RUC), a two-phase sample with a 72.9% volume fraction of gamma'-precipitate in the gamma-matrix phase, and comparing the results with those predicted by finite element analysis (FEA) models incorporating the same crystal plasticity constitutive model. The global stress-strain behavior and the local field quantity distributions predicted by GMC demonstrated good agreement with FEA. Large computational savings, at the expense of some accuracy in the components of the local tensor field quantities, were obtained with GMC. Finally, the capability of the developed multi-scale model linking FEA and GMC to solve real-life-sized structures is demonstrated by analyzing an engine disc component and determining the microstructural-scale details of the field quantities.
Readmission prediction via deep contextual embedding of clinical concepts.
Xiao, Cao; Ma, Tengfei; Dieng, Adji B; Blei, David M; Wang, Fei
2018-01-01
Hospital readmissions incur substantial costs every year. Many hospital readmissions are avoidable, and excessive hospital readmissions can also be harmful to patients. Accurate prediction of hospital readmission can effectively help reduce readmission risk. However, the complex relationship between readmission and potential risk factors makes readmission prediction a difficult task. The main goal of this paper is to explore deep learning models to distill such complex relationships and make accurate predictions. We propose CONTENT, a deep model that predicts hospital readmissions by learning interpretable patient representations, capturing both local and global contexts from patient Electronic Health Records (EHR) through a hybrid Topic Recurrent Neural Network (TopicRNN) model. The experiment was conducted using the EHR of a real-world Congestive Heart Failure (CHF) cohort of 5,393 patients. The proposed model outperforms state-of-the-art methods in readmission prediction (e.g. 0.6103 ± 0.0130 vs. the second best 0.5998 ± 0.0124 in terms of ROC-AUC). The derived patient representations were further utilized for patient phenotyping. The learned phenotypes provide a more precise understanding of readmission risks. Embedding both local and global context in the patient representation not only improves prediction performance, but also brings interpretable insights for understanding readmission risks for heterogeneous chronic clinical conditions. This is the first model of its kind to integrate the power of both conventional deep neural networks and probabilistic generative models for highly interpretable deep patient representation learning. Experimental results and case studies demonstrate the improved performance and interpretability of the model.
Novel Approach for Prediction of Localized Necking in Case of Nonlinear Strain Paths
NASA Astrophysics Data System (ADS)
Drotleff, K.; Liewald, M.
2017-09-01
Rising customer expectations regarding the design complexity and weight reduction of sheet metal components, along with further reduced time to market, imply an increased demand for process validation using numerical forming simulation. Formability prediction, though, is often still based on the forming limit diagram first presented in the 1960s. Despite many drawbacks in the case of nonlinear strain paths and major research advances in recent years, the forming limit curve (FLC) is still one of the most commonly used criteria for assessing the formability of sheet metal materials. Especially when forming complex part geometries, nonlinear strain paths may occur which cannot be predicted using the conventional FLC concept. In this paper a novel approach for the calculation of FLCs for nonlinear strain paths is presented. By combining an approach for predicting the FLC from tensile test data with the IFU-FLC criterion, a model for the prediction of localized necking under nonlinear strain paths can be derived. The presented model is based purely on experimental tensile test data, making it easy to calibrate for any given material. The resulting prediction of localized necking is validated using an experimental deep drawing specimen made of AA6014 material with a sheet thickness of 1.04 mm. The results are compared to the IFU-FLC criterion based on data from pre-stretched Nakajima specimens.
NASA Astrophysics Data System (ADS)
Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; contributors, JET
2018-06-01
The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.
Can arsenic occurrence rate in bedrock aquifers be predicted?
Yang, Qiang; Jung, Hun Bok; Marvinney, Robert G.; Culbertson, Charles W.; Zheng, Yan
2012-01-01
A high percentage (31%) of groundwater samples from bedrock aquifers in the greater Augusta area, Maine was found to contain greater than 10 μg/L of arsenic. Elevated arsenic concentrations are associated with bedrock geology, and more frequently observed in samples with high pH, low dissolved oxygen, and low nitrate. These associations were quantitatively compared by statistical analysis. Stepwise logistic regression models using bedrock geology and/or water chemistry parameters are developed and tested with external data sets to explore the feasibility of predicting groundwater arsenic occurrence rates (the percentages of arsenic concentrations higher than 10 μg/L) in bedrock aquifers. Despite the under-prediction of high arsenic occurrence rates, models including groundwater geochemistry parameters predict arsenic occurrence rates better than those with bedrock geology only. Such simple models with very few parameters can be applied to obtain a preliminary arsenic risk assessment in bedrock aquifers at local to intermediate scales at other localities with similar geology.
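A sketch of the kind of logistic model described, assuming statsmodels and synthetic well data: the probability that arsenic exceeds 10 μg/L is regressed on pH, dissolved oxygen, nitrate and a bedrock-geology indicator. Variable names and coefficients are invented; the study used stepwise selection on real hydrochemical data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n = 500
    wells = pd.DataFrame({
        "pH": rng.normal(7.6, 0.6, n),
        "do_mgL": rng.gamma(2.0, 1.5, n),          # dissolved oxygen
        "nitrate": rng.gamma(1.5, 0.8, n),
        "geology": rng.integers(0, 2, n),          # 1 = arsenic-prone bedrock unit
    })
    true_logit = (-20 + 2.5 * wells.pH - 0.4 * wells.do_mgL
                  - 0.3 * wells.nitrate + 1.2 * wells.geology)
    wells["exceeds_10ugL"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

    fit = smf.logit("exceeds_10ugL ~ pH + do_mgL + nitrate + geology", data=wells).fit(disp=0)
    print(fit.params)                               # signs mirror the reported associations
    print(f"predicted occurrence rate: {fit.predict(wells).mean():.1%}")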
Rowlinson, Steve; Jia, Yunyan Andrea
2014-04-01
Existing heat stress risk management guidelines recommended by international standards are not practical for the construction industry, which needs site supervision staff to make instant managerial decisions to mitigate heat risks. The ability of the predicted heat strain (PHS) model [ISO 7933 (2004). Ergonomics of the thermal environment: analytical determination and interpretation of heat stress using calculation of the predicted heat strain. Geneva: International Organization for Standardization] to predict the maximum allowable exposure time (D_lim) has now enabled the development of localized, action-triggering, threshold-based guidelines for implementation by lay frontline staff on construction sites. This article presents a protocol for the development of two heat stress management tools by applying the PHS model to its full potential. One of the tools is developed to facilitate managerial decisions on an optimized work-rest regimen for paced work. The other tool is developed to enable workers' self-regulation during self-paced work.
Reagan, Andrew J; Dubief, Yves; Dodds, Peter Sheridan; Danforth, Christopher M
2016-01-01
A thermal convection loop is an annular chamber filled with water, heated on the bottom half and cooled on the top half. With sufficiently large heat forcing, the direction of fluid flow in the loop oscillates chaotically, dynamics analogous to the Earth's weather. As is the case for state-of-the-art weather models, we only observe the statistics over a small region of state space, making prediction difficult. To overcome this challenge, data assimilation (DA) methods, and specifically ensemble methods, use the computational model itself to estimate the uncertainty of the model and optimally combine these observations into an initial condition for predicting the future state. Here, we build and verify four distinct DA methods, and then perform a twin-model experiment with the computational fluid dynamics simulation of the loop, using the Ensemble Transform Kalman Filter (ETKF) to assimilate observations and predict flow reversals. We show that using adaptively shaped localized covariance outperforms static localized covariance with the ETKF, and allows for the use of fewer observations in predicting flow reversals. We also show that a Dynamic Mode Decomposition (DMD) of the temperature and velocity fields recovers the low-dimensional system underlying reversals, finding specific modes which together are predictive of reversal direction.
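For the ensemble update itself, the following is a compact stochastic (perturbed-observation) EnKF analysis step with a simple Gaussian distance taper on the cross covariance, which conveys the role of covariance localization; it is not the ETKF with adaptively shaped localization used in the study.

    import numpy as np

    rng = np.random.default_rng(7)

    def enkf_analysis(ensemble, obs, obs_idx, obs_err, loc_radius=5.0):
        """Perturbed-observation EnKF update of ensemble (n_members, n_state)
        given observations of the state components listed in obs_idx."""
        n_mem, n_state = ensemble.shape
        Xp = ensemble - ensemble.mean(axis=0)                  # perturbations
        Hp = Xp[:, obs_idx]                                    # observed perturbations
        P_xy = Xp.T @ Hp / (n_mem - 1)                         # cross covariance
        P_yy = Hp.T @ Hp / (n_mem - 1) + obs_err**2 * np.eye(len(obs_idx))
        # simple Gaussian distance taper instead of adaptive localization
        dist = np.abs(np.arange(n_state)[:, None] - np.asarray(obs_idx)[None, :])
        P_xy *= np.exp(-0.5 * (dist / loc_radius) ** 2)
        K = P_xy @ np.linalg.inv(P_yy)                         # Kalman gain
        perturbed = obs + rng.normal(0, obs_err, (n_mem, len(obs_idx)))
        return ensemble + (perturbed - ensemble[:, obs_idx]) @ K.T

    # toy example: 40-variable state, 20 members, observations at every 4th point
    truth = np.sin(np.linspace(0, 4 * np.pi, 40))
    ens = truth + rng.normal(0, 0.5, (20, 40))
    obs_idx = np.arange(0, 40, 4)
    obs = truth[obs_idx] + rng.normal(0, 0.1, obs_idx.size)
    ens_a = enkf_analysis(ens, obs, obs_idx, obs_err=0.1)
    print(f"prior RMSE {np.sqrt(((ens.mean(0) - truth) ** 2).mean()):.3f}, "
          f"posterior RMSE {np.sqrt(((ens_a.mean(0) - truth) ** 2).mean()):.3f}")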
Outbreaks of Tularemia in a Boreal Forest Region Depends on Mosquito Prevalence
Rydén, Patrik; Björk, Rafael; Schäfer, Martina L.; Lundström, Jan O.; Petersén, Bodil; Lindblom, Anders; Forsman, Mats; Sjöstedt, Anders
2012-01-01
Background. We aimed to evaluate the potential association of mosquito prevalence in a boreal forest area with transmission of the bacterial disease tularemia to humans, and model the annual variation of disease using local weather data. Methods. A prediction model for mosquito abundance was built using weather and mosquito catch data. Then a negative binomial regression model based on the predicted mosquito abundance and local weather data was built to predict annual numbers of humans contracting tularemia in Dalarna County, Sweden. Results. Three hundred seventy humans were diagnosed with tularemia between 1981 and 2007, 94% of them during 7 summer outbreaks. Disease transmission was concentrated along rivers in the area. The predicted mosquito abundance was correlated (0.41, P < .05) with the annual number of human cases. The predicted mosquito peaks consistently preceded the median onset time of human tularemia (temporal correlation, 0.76; P < .05). Our final predictive model included 5 environmental variables and identified 6 of the 7 outbreaks. Conclusions. This work suggests that a high prevalence of mosquitoes in late summer is a prerequisite for outbreaks of tularemia in a tularemia-endemic boreal forest area of Sweden and that environmental variables can be used as risk indicators. PMID:22124130
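The second-stage count model can be sketched as a negative binomial GLM of annual human cases on a predicted mosquito-abundance index and summer weather covariates (statsmodels, synthetic data); the dispersion, covariates and coefficients are invented and do not reproduce the study's fitted model.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(8)
    years = pd.DataFrame({
        "mosq_index": rng.gamma(3.0, 1.0, 27),     # predicted late-summer mosquito abundance
        "summer_temp": rng.normal(16.0, 1.5, 27),  # mean summer temperature, deg C
        "summer_rain": rng.gamma(4.0, 20.0, 27),   # summer rainfall, mm
    })
    mu = np.exp(-1.0 + 0.6 * years.mosq_index + 0.1 * (years.summer_temp - 16))
    years["cases"] = rng.negative_binomial(n=2, p=2 / (2 + mu))

    nb = smf.glm("cases ~ mosq_index + summer_temp + summer_rain",
                 data=years, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
    print(nb.summary().tables[1])                  # the mosquito index should dominate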
Predictions of Poisson's ratio in cross-ply laminates containing matrix cracks and delaminations
NASA Technical Reports Server (NTRS)
Harris, Charles E.; Allen, David H.; Nottorf, Eric W.
1989-01-01
A damage-dependent constitutive model for laminated composites has been developed for the combined damage modes of matrix cracks and delaminations. The model is based on the concept of continuum damage mechanics and uses second-order tensor-valued internal state variables to represent each mode of damage. The internal state variables are defined as the local volume average of the relative crack face displacements. Since the local volume for delaminations is specified at the laminate level, the constitutive model takes the form of laminate analysis equations modified by the internal state variables. Model implementation is demonstrated for the laminate engineering modulus E(x) and Poisson's ratio nu(xy) of quasi-isotropic and cross-ply laminates. The model predictions are in close agreement with experimental results obtained for graphite/epoxy laminates.
Downscaler Model for predicting daily air pollution
This model combines daily ozone and particulate matter monitoring and modeling data from across the U.S. to provide improved fine-scale estimates of air quality in communities and other specific locales.
Linking Local Scale Ecosystem Science to Regional Scale Management
NASA Astrophysics Data System (ADS)
Shope, C. L.; Tenhunen, J.; Peiffer, S.
2012-04-01
Ecosystem management with respect to sufficient water yield, a quality water supply, habitat and biodiversity conservation, and climate change effects requires substantial observational data at a range of scales. Complex interactions of local physical processes often vary over space and time, particularly in locations with extreme meteorological conditions. Modifications to local conditions (i.e., agricultural land use changes, nutrient additions, landscape management, water usage) can further affect regional ecosystem services. The international, inter-disciplinary TERRECO research group is intensively investigating a variety of local processes, parameters, and conditions to link complex physical, economic, and social interactions at the regional scale. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. The data are used to parameterize a suite of models describing local- to landscape-level water, sediment, nutrient, and monetary relationships. We focus on using the agricultural and hydrological SWAT model to synthesize the experimental field data and local-scale models throughout the catchment. The approach of our study was to describe local scientific processes, link potential interrelationships between different processes, and predict environmentally efficient management efforts. The Haean catchment case study shows how research can be structured to provide cross-disciplinary scientific linkages describing complex ecosystems and landscapes that can be used for regional management evaluations and predictions.
Freedman, Adam H; Buermann, Wolfgang; Lebreton, Matthew; Chirio, Laurent; Smith, Thomas B
2009-02-01
We used a species-distribution modeling approach, ground-based climate data sets, and newly available remote-sensing data on vegetation from the MODIS and Quick Scatterometer sensors to investigate the combined effects of human-caused habitat alterations and climate on potential invasions of rainforest by 3 savanna snake species in Cameroon, Central Africa: the night adder (Causus maculatus), olympic lined snake (Dromophis lineatus), and African house snake (Lamprophis fuliginosus). Models with contemporary climate variables and localities from native savanna habitats showed that the current climate in undisturbed rainforest was unsuitable for any of the snake species due to high precipitation. Limited availability of thermally suitable nest sites and mismatches between important life-history events and prey availability are a likely explanation for the predicted exclusion from undisturbed rainforest. Models with only MODIS-derived vegetation variables and savanna localities predicted invasion in disturbed areas within the rainforest zone, which suggests that human removal of forest cover creates suitable microhabitats that facilitate invasions into rainforest. Models with a combination of contemporary climate, MODIS- and Quick Scatterometer-derived vegetation variables, and forest and savanna localities predicted extensive invasion into rainforest caused by rainforest loss. In contrast, a projection of the present-day species-climate envelope on future climate suggested a reduction in invasion potential within the rainforest zone as a consequence of predicted increases in precipitation. These results emphasize that the combined responses of deforestation and climate change will likely be complex in tropical rainforest systems.
Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu
2015-07-23
The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with subsided land caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority (LCMCS) method to establish TN prediction models by considering the relationship between spectral reflectance (measured by an ASD FieldSpec 3 spectroradiometer) and TN, based on spectral reflectance curves of soil samples collected from subsided land delineated by synthetic aperture radar interferometry (InSAR) technology. Based on the 1655 selected effective bands of the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log(1/R)]'; correlation coefficients significant at p < 0.01), the optimal LCMCS model was selected as the final model. It produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) when compared with models built by the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive performance of the LCMCS model was optimal in Cangzhou, Renqiu and Fengfeng District. Results indicate that the LCMCS method has great potential to monitor TN in subsided lands caused by the extraction of natural resources including groundwater, oil and coal.
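As an illustration of the kind of spectral preprocessing and regression this abstract compares against, the sketch below fits a plain PLS baseline to a [log(1/R)]' transform of synthetic reflectance spectra and reports RMSEV and MREV. It is not the authors' LCMCS code; the array sizes, band-selection threshold, and data are assumptions.

```python
# Hedged sketch (not the authors' LCMCS code): a PLS baseline for predicting soil
# total nitrogen from reflectance spectra, using the [log(1/R)]' transform the
# abstract mentions. Array shapes and the band-selection threshold are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 2001                     # hypothetical spectra
R = rng.uniform(0.05, 0.6, (n_samples, n_bands))   # stand-in reflectance values
tn = rng.uniform(0.5, 2.5, n_samples)              # stand-in total-nitrogen values (g/kg)

# First derivative of log(1/R) along the wavelength axis.
spec = np.gradient(np.log(1.0 / R), axis=1)

# Keep bands whose absolute correlation with TN exceeds a chosen threshold
# (the paper reports 1655 "effective" bands; the 0.2 cutoff here is illustrative).
corr = np.array([abs(np.corrcoef(spec[:, j], tn)[0, 1]) for j in range(n_bands)])
X = spec[:, corr > 0.2]

X_tr, X_te, y_tr, y_te = train_test_split(X, tn, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=min(8, X_tr.shape[1]))
pls.fit(X_tr, y_tr)
pred = pls.predict(X_te).ravel()
rmsev = np.sqrt(np.mean((pred - y_te) ** 2))
mrev = np.mean(np.abs(pred - y_te) / y_te) * 100
print(f"RMSEV={rmsev:.3f}, MREV={mrev:.1f}%")
```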
Ghouzam, Yassine; Postic, Guillaume; Guerin, Pierre-Edouard; de Brevern, Alexandre G; Gelly, Jean-Christophe
2016-06-20
Protein structure prediction based on comparative modeling is the most efficient way to produce structural models when it can be performed. ORION is a dedicated webserver based on a new strategy that performs this task. The identification by ORION of suitable templates is performed using an original profile-profile approach that combines sequence and structure evolution information. Structure evolution information is encoded into profiles using structural features, such as solvent accessibility and local conformation -with Protein Blocks-, which give an accurate description of the local protein structure. ORION has recently been improved, increasing by 5% the quality of its results. The ORION web server accepts a single protein sequence as input and searches homologous protein structures within minutes. Various databases such as PDB, SCOP and HOMSTRAD can be mined to find an appropriate structural template. For the modeling step, a protein 3D structure can be directly obtained from the selected template by MODELLER and displayed with global and local quality model estimation measures. The sequence and the predicted structure of 4 examples from the CAMEO server and a recent CASP11 target from the 'Hard' category (T0818-D1) are shown as pertinent examples. Our web server is accessible at http://www.dsimb.inserm.fr/ORION/.
Hällfors, Maria Helena; Liao, Jishan; Dzurisin, Jason D. K.; Grundel, Ralph; Hyvärinen, Marko; Towle, Kevin; Wu, Grace C.; Hellmann, Jessica J.
2016-01-01
Species distribution models (SDMs) have been criticized for involving assumptions that ignore or categorize many ecologically relevant factors such as dispersal ability and biotic interactions. Another potential source of model error is the assumption that species are ecologically uniform in their climatic tolerances across their range. Typically, SDMs treat a species as a single entity, although populations of many species differ due to local adaptation or other genetic differentiation. Not taking local adaptation into account may lead to incorrect range predictions and therefore misplaced conservation efforts. A constraint, however, is that we often do not know the degree to which populations are locally adapted. Lacking experimental evidence, we can still evaluate niche differentiation within a species' range to promote better conservation decisions. We explore possible conservation implications of making type I or type II errors in this context. For each of two species, we construct three separate MaxEnt models, one treating the species as a single population and two treating its disjunct populations separately. PCA analyses and response curves indicate different climate characteristics in the current environments of the populations. Model projections into future climates indicate minimal overlap between areas predicted to be climatically suitable by the whole-species versus the population-based models. We present a workflow for addressing uncertainty surrounding local adaptation in SDM application and illustrate the value of conducting population-based models to compare with whole-species models. These comparisons might result in more cautious management actions when alternative range outcomes are considered.
NASA Technical Reports Server (NTRS)
Hochhalter, Jake D.; Littlewood, David J.; Christ, Robert J., Jr.; Veilleux, M. G.; Bozek, J. E.; Ingraffea, A. R.; Maniatty, Antionette M.
2010-01-01
The objective of this paper is to develop further a framework for computationally modeling microstructurally small fatigue crack growth in AA 7075-T651 [1]. The focus is on the nucleation event, when a crack extends from within a second-phase particle into a surrounding grain, since this has been observed to be an initiating mechanism for fatigue crack growth in this alloy. It is hypothesized that nucleation can be predicted by computing a non-local nucleation metric near the crack front. The hypothesis is tested by employing a combination of experimentation and finite element modeling in which various slip-based and energy-based nucleation metrics are tested for validity, where each metric is derived from a continuum crystal plasticity formulation. To investigate each metric, a non-local procedure is developed for the calculation of nucleation metrics in the neighborhood of a crack front. Initially, an idealized baseline model consisting of a single grain containing a semi-ellipsoidal surface particle is studied to investigate the dependence of each nucleation metric on lattice orientation, number of load cycles, and non-local regularization method. This is followed by a comparison of experimental observations and computational results for microstructural models constructed by replicating the observed microstructural geometry near second-phase particles in fatigue specimens. It is found that orientation strongly influences the direction of slip localization and, as a result, influences the nucleation mechanism. Also, the baseline models, replication models, and past experimental observation consistently suggest that a set of particular grain orientations is most likely to nucleate fatigue cracks. It is found that a continuum crystal plasticity model and a non-local nucleation metric can be used to predict the nucleation event in AA 7075-T651. However, nucleation metric threshold values that correspond to various nucleation governing mechanisms must be calibrated.
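The non-local step described above can be illustrated by a weighted average of a pointwise metric over integration points near a crack-front location. The sketch below is a generic distance-weighted average, not the paper's calibrated procedure; the kernel, radius, and field values are assumptions.

```python
# Hedged sketch of a non-local regularization step of the kind described above:
# a pointwise nucleation metric (e.g., accumulated slip) evaluated at integration
# points is averaged with Gaussian-type weights over a neighborhood of a
# crack-front point. Radius, kernel, and field values are illustrative assumptions.
import numpy as np

def nonlocal_metric(points, local_metric, front_point, radius):
    """Distance-weighted average of a local metric near a crack-front point."""
    d = np.linalg.norm(points - front_point, axis=1)
    in_range = d <= radius
    if not np.any(in_range):
        return 0.0
    w = np.exp(-(d[in_range] / radius) ** 2)      # Gaussian-type weights
    return float(np.sum(w * local_metric[in_range]) / np.sum(w))

# Toy usage: integration-point coordinates and a slip-based metric field.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, (5000, 3))           # microns, hypothetical
slip = rng.gamma(2.0, 0.01, 5000)                 # accumulated plastic slip, hypothetical
crack_front = np.array([5.0, 5.0, 5.0])
print(nonlocal_metric(pts, slip, crack_front, radius=1.0))
```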
Localized Density/Drag Prediction for Improved Onboard Orbit Propagation
2009-09-01
Stastny, Nathan B.; Chavez, Frank R.; Lin, Chin; Lovell, T. Alan; Robert A. ...
The Zeldovich & Adhesion approximations and applications to the local universe
NASA Astrophysics Data System (ADS)
Hidding, Johan; van de Weygaert, Rien; Shandarin, Sergei
2016-10-01
The Zeldovich approximation (ZA) predicts the formation of a web of singularities. While these singularities may only exist in the most formal interpretation of the ZA, they provide a powerful tool for the analysis of initial conditions. We present a novel method to find the skeleton of the resulting cosmic web based on singularities in the primordial deformation tensor and its higher order derivatives. We show that the A3 lines predict the formation of filaments in a two-dimensional model. We continue with applications of the adhesion model to visualise structures in the local (z < 0.03) universe.
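A minimal sketch of the Zeldovich mapping underlying this analysis is given below: particles are displaced as x(q) = q + D Ψ(q), and shell crossing (the singularities referred to above) occurs where the determinant of the deformation tensor 1 + D ∂Ψ/∂q vanishes. The grid size, toy power spectrum, and growth factor are assumptions, and the A3-line classification itself is not reproduced.

```python
# Hedged 2-D sketch of the Zeldovich approximation: particles move ballistically,
# x(q) = q + D * Psi(q), with Psi = -grad(phi) from a Gaussian random potential.
# Shell crossing (caustic formation) occurs where det(I + D * dPsi/dq) <= 0.
# Grid size, toy power spectrum and growth factor are illustrative assumptions.
import numpy as np

N, L = 256, 100.0                      # grid cells, box size (arbitrary units)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                         # avoid division by zero at k = 0

rng = np.random.default_rng(2)
delta_k = np.fft.fft2(rng.normal(size=(N, N))) / np.sqrt(k2)   # toy P(k) ~ 1/k^2
phi_k = -delta_k / k2                  # Poisson equation in Fourier space
psi_x = np.real(np.fft.ifft2(-1j * kx * phi_k))   # Psi = -grad(phi)
psi_y = np.real(np.fft.ifft2(-1j * ky * phi_k))

D = 1.5                                # linear growth factor (illustrative)
# Deformation tensor dPsi_i/dq_j via spectral derivatives.
dpx_dx = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(psi_x)))
dpx_dy = np.real(np.fft.ifft2(1j * ky * np.fft.fft2(psi_x)))
dpy_dx = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(psi_y)))
dpy_dy = np.real(np.fft.ifft2(1j * ky * np.fft.fft2(psi_y)))
jac = (1 + D * dpx_dx) * (1 + D * dpy_dy) - (D * dpx_dy) * (D * dpy_dx)
print("fraction of cells past shell crossing:", np.mean(jac <= 0.0))
```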
Bogani, Giorgio; Cromi, Antonella; Serati, Maurizio; Uccella, Stefano; Donato, Violante Di; Casarin, Jvan; Naro, Edoardo Di; Ghezzi, Fabio
2017-06-01
To identify factors predicting for recurrence in vulvar cancer patients undergoing surgical treatment. We retrospectively evaluated data of consecutive patients with squamous cell vulvar cancer treated between January 1, 1990 and December 31, 2013. Basic descriptive statistics and multivariable analysis were used to design predicting models influencing outcomes. Five-year disease-free survival (DFS) and overall survival (OS) were analyzed using the Cox model. The study included 101 patients affected by vulvar cancer: 64 (63%) stage I, 12 (12%) stage II, 20 (20%) stage III, and 5 (5%) stage IV. After a mean (SD) follow-up of 37.6 (22.1) months, 21 (21%) recurrences occurred. Local, regional, and distant failures were recorded in 14 (14%), 6 (6%), and 3 (3%) patients, respectively. Five-year DFS and OS were 77% and 82%, respectively. At multivariate analysis only stromal invasion >2 mm (hazard ratio: 4.9 [95% confidence interval, 1.17-21.1]; P=0.04) and extracapsular lymph node involvement (hazard ratio: 9.0 (95% confidence interval, 1.17-69.5); P=0.03) correlated with worse DFS, although no factor independently correlated with OS. Looking at factors influencing local and regional failure, we observed that stromal invasion >2 mm was the only factor predicting for local recurrence, whereas lymph node extracapsular involvement predicted for regional recurrence. Stromal invasion >2 mm and lymph node extracapsular spread are the most important factors predicting for local and regional failure, respectively. Studies evaluating the effectiveness of adjuvant treatment in high-risk patients are warranted.
Critical excitation spectrum of a quantum chain with a local three-spin coupling.
McCabe, John F; Wydro, Tomasz
2011-09-01
Using the phenomenological renormalization group (PRG), we evaluate the low-energy excitation spectrum along the critical line of a quantum spin chain having a local interaction between three Ising spins and longitudinal and transverse magnetic fields, i.e., a Turban model. The low-energy excitation spectrum found with the PRG agrees with the spectrum predicted for the (D4, A4) conformal minimal model under a nontrivial correspondence between translations at the critical line and discrete lattice translations. Under this correspondence, the measurements confirm a prediction that the critical line of this quantum spin chain and the critical point of the two-dimensional three-state Potts model are in the same universality class.
Protein subcellular localization prediction using artificial intelligence technology.
Nair, Rajesh; Rost, Burkhard
2008-01-01
Proteins perform many important tasks in living organisms, such as catalysis of biochemical reactions, transport of nutrients, and recognition and transmission of signals. The plethora of aspects of the role of any particular protein is referred to as its "function." One aspect of protein function that has been the target of intensive research by computational biologists is its subcellular localization. Proteins must be localized in the same subcellular compartment to cooperate toward a common physiological function. Aberrant subcellular localization of proteins can result in several diseases, including kidney stones, cancer, and Alzheimer's disease. To date, sequence homology remains the most widely used method for inferring the function of a protein. However, the application of advanced artificial intelligence (AI)-based techniques in recent years has resulted in significant improvements in our ability to predict the subcellular localization of a protein. The prediction accuracy has risen steadily over the years, in large part due to the application of AI-based methods such as hidden Markov models (HMMs), neural networks (NNs), and support vector machines (SVMs), although the availability of larger experimental datasets has also played a role. Automatic methods that mine textual information from the biological literature and molecular biology databases have considerably sped up the process of annotation for proteins for which some information regarding function is available in the literature. State-of-the-art methods based on NNs and HMMs can predict the presence of N-terminal sorting signals extremely accurately. Ab initio methods that predict subcellular localization for any protein sequence using only the native amino acid sequence and features predicted from the native sequence have shown the most remarkable improvements. The prediction accuracy of these methods has increased by over 30% in the past decade. The accuracy of these methods is now on par with high-throughput methods for predicting localization, and they are beginning to play an important role in directing experimental research. In this chapter, we review some of the most important methods for the prediction of subcellular localization.
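As a toy illustration of the ab initio approach reviewed above, the sketch below builds amino-acid composition features and trains an SVM classifier. The sequences and localization labels are invented, and real predictors use far richer features (sorting signals, evolutionary profiles, text-mined annotations).

```python
# Hedged sketch of an ab initio predictor of the kind reviewed above: amino-acid
# composition as the only feature set, fed to an SVM. Sequences and labels are
# made up; this is not any published tool's pipeline.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each standard amino acid in the sequence."""
    seq = seq.upper()
    return [seq.count(a) / max(len(seq), 1) for a in AA]

train_seqs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",          # hypothetical examples
              "MLRTSSLFTRRVQPSLFRNILRLQST",
              "MSSHEGGKKKALKQPKKQAKEMDEEEKAFKQKQKEEQKK"]
train_labels = ["cytoplasm", "mitochondrion", "nucleus"]     # hypothetical labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit([composition(s) for s in train_seqs], train_labels)
print(clf.predict([composition("MKKLLPTAAAGLLLLAAQPAMA")]))  # hypothetical query
```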
SELF-BLM: Prediction of drug-target interactions via self-training SVM.
Keum, Jongsoo; Nam, Hojung
2017-01-01
Predicting drug-target interactions is important for the development of novel drugs and the repositioning of drugs. To predict such interactions, there are a number of methods based on drug and target protein similarity. Although these methods, such as the bipartite local model (BLM), show promise, they often categorize unknown interactions as negative interactions. Therefore, these methods are not ideal for finding potential drug-target interactions that have not yet been validated as positive interactions. Thus, here we propose a method that integrates machine learning techniques, such as self-training support vector machine (SVM) and BLM, to develop a self-training bipartite local model (SELF-BLM) that facilitates the identification of potential interactions. The method first categorizes unlabeled interactions and negative interactions among unknown interactions using a clustering method. Then, using the BLM method and self-training SVM, the unlabeled interactions are self-trained and final local classification models are constructed. When applied to four classes of proteins that include enzymes, G-protein coupled receptors (GPCRs), ion channels, and nuclear receptors, SELF-BLM showed the best performance for predicting not only known interactions but also potential interactions in three protein classes compared with other related studies. The implemented software and supporting data are available at https://github.com/GIST-CSBL/SELF-BLM.
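The self-training component can be illustrated with a minimal per-target loop of the kind SELF-BLM builds on: train an SVM on labeled interactions, move confidently scored unlabeled drugs into the labeled set, and retrain. The feature matrix, confidence threshold, and iteration count below are assumptions, not the published settings.

```python
# Hedged sketch of a per-target self-training SVM: +1 known interaction, -1 putative
# negative, unlabeled drugs are pseudo-labeled when scored confidently. Thresholds,
# features and iteration count are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

def self_training_svm(X, y, unlabeled_idx, threshold=1.0, max_iter=5):
    labeled = np.setdiff1d(np.arange(len(y)), unlabeled_idx)
    unlabeled = np.array(unlabeled_idx, dtype=int)
    y = y.copy()
    for _ in range(max_iter):
        clf = SVC(kernel="rbf").fit(X[labeled], y[labeled])
        if len(unlabeled) == 0:
            break
        scores = clf.decision_function(X[unlabeled])
        confident = np.abs(scores) >= threshold       # far from the margin
        if not confident.any():
            break
        y[unlabeled[confident]] = np.sign(scores[confident])   # pseudo-labels
        labeled = np.concatenate([labeled, unlabeled[confident]])
        unlabeled = unlabeled[~confident]
    return clf

# Toy usage: 2-D drug descriptors for a single target protein.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(1, 0.3, (20, 2)), rng.normal(-1, 0.3, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)
model = self_training_svm(X, y, unlabeled_idx=list(range(10, 30)))
print(model.predict([[0.9, 1.1], [-1.2, -0.8]]))
```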
NASA Astrophysics Data System (ADS)
Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.
2011-12-01
A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by relying on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite only using spatially integrated data (discharge), the spatially distributed effect of the improved model structure can also be expected to improve the spatially distributed predictions. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data only. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
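A minimal sketch of the step-2 updating described above is given below: a Metropolis sampler updates one parameter of a toy rainfall-runoff model against observed discharge, with an AR(1) model for the residual errors. The hydrological model, priors, and error parameters are stand-ins, not the study's distributed model.

```python
# Hedged sketch of Bayesian updating with an autoregressive error model: a Metropolis
# random walk on one parameter of a toy linear-reservoir model, using observed
# discharge. Priors, error parameters and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
rain = rng.gamma(2.0, 2.0, 200)                      # synthetic rainfall forcing

def simulate_discharge(k, rain):
    """Toy linear reservoir: S' = P - k*S, Q = k*S (explicit daily step)."""
    S, Q = 0.0, np.empty(len(rain))
    for t, P in enumerate(rain):
        S += P - k * S
        Q[t] = k * S
    return Q

q_obs = simulate_discharge(0.3, rain) + rng.normal(0, 0.2, 200)   # synthetic "observations"

def log_posterior(k, phi=0.6, sigma=0.2):
    if not (0.01 < k < 1.0):
        return -np.inf                                # uniform prior on (0.01, 1)
    resid = q_obs - simulate_discharge(k, rain)
    innov = resid[1:] - phi * resid[:-1]              # AR(1) innovations
    return -0.5 * np.sum(innov**2) / sigma**2

k, logp, chain = 0.5, log_posterior(0.5), []
for _ in range(3000):                                 # Metropolis random walk
    k_new = k + rng.normal(0, 0.02)
    logp_new = log_posterior(k_new)
    if np.log(rng.uniform()) < logp_new - logp:
        k, logp = k_new, logp_new
    chain.append(k)
print("posterior mean of k:", np.mean(chain[1000:]))
```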
Nandola, Naresh N.; Rivera, Daniel E.
2011-01-01
This paper presents a data-centric modeling and predictive control approach for nonlinear hybrid systems. System identification of hybrid systems represents a challenging problem because model parameters depend on the mode or operating point of the system. The proposed algorithm applies Model-on-Demand (MoD) estimation to generate a local linear approximation of the nonlinear hybrid system at each time step, using a small subset of data selected by an adaptive bandwidth selector. The appeal of the MoD approach lies in the fact that model parameters are estimated based on a current operating point; hence estimation of locations or modes governed by autonomous discrete events is achieved automatically. The local MoD model is then converted into a mixed logical dynamical (MLD) system representation which can be used directly in a model predictive control (MPC) law for hybrid systems using multiple-degree-of-freedom tuning. The effectiveness of the proposed MoD predictive control algorithm for nonlinear hybrid systems is demonstrated on a hypothetical adaptive behavioral intervention problem inspired by Fast Track, a real-life preventive intervention for improving parental function and reducing conduct disorder in at-risk children. Simulation results demonstrate that the proposed algorithm can be useful for adaptive intervention problems exhibiting both nonlinear and hybrid character. PMID:21874087
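The Model-on-Demand estimation step can be sketched as a locally weighted linear regression on the nearest stored regressor vectors, evaluated at the current operating point. The fixed neighbor count below stands in for the paper's adaptive bandwidth selector, and the system, regressors, and weights are illustrative.

```python
# Hedged sketch of Model-on-Demand local linear estimation: at a query operating
# point, fit a distance-weighted linear (ARX-type) model on the k nearest stored
# data points. A fixed k replaces the adaptive bandwidth selector of the paper.
import numpy as np

def mod_local_model(database_X, database_y, query, k=50):
    """Weighted least squares on the k nearest regressor vectors to `query`."""
    d = np.linalg.norm(database_X - query, axis=1)
    idx = np.argsort(d)[:k]
    w = (1.0 - (d[idx] / (d[idx].max() + 1e-12)) ** 3) ** 3   # tricube weights
    Xk = np.hstack([np.ones((k, 1)), database_X[idx]])        # intercept + regressors
    W = np.diag(w)
    theta = np.linalg.solve(Xk.T @ W @ Xk, Xk.T @ W @ database_y[idx])
    return theta                                              # [bias, coefficients...]

# Toy usage: regressors [y(t-1), u(t-1)] from a nonlinear system, predict y(t).
rng = np.random.default_rng(5)
u = rng.uniform(-1, 1, 2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.8 * np.tanh(y[t - 1]) + 0.4 * u[t - 1] + 0.01 * rng.normal()
X = np.column_stack([y[:-1], u[:-1]])
Y = y[1:]
theta = mod_local_model(X, Y, query=np.array([0.2, -0.5]))
print("local model parameters:", theta)
```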
Klinzing, Gerard R; Zavaliangos, Antonios
2016-08-01
This work establishes a predictive model that explicitly recognizes microstructural parameters in the description of the overall mass uptake and local gradients of moisture into tablets. Model equations were formulated based on local tablet geometry to describe the transient uptake of moisture. An analytical solution to a simplified set of model equations was solved to predict the overall mass uptake and moisture gradients with the tablets. The analytical solution takes into account individual diffusion mechanisms in different scales of porosity and diffusion into the solid phase. The time constant of mass uptake was found to be a function of several key material properties, such as tablet relative density, pore tortuosity, and equilibrium moisture content of the material. The predictions of the model are in excellent agreement with experimental results for microcrystalline cellulose tablets without the need for parameter fitting. The model presented provides a new method to analyze the transient uptake of moisture into hydrophilic materials with the knowledge of only a few fundamental material and microstructural parameters. In addition, the model allows for quick and insightful predictions of moisture diffusion for a variety of practical applications including pharmaceutical tablets, porous polymer systems, or cementitious materials. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
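The role of the time constant can be illustrated with a generic first-order uptake curve; the paper's analytical solution is not reproduced here, and the time-constant expression below is an assumed, illustrative scaling in relative density, tortuosity, and equilibrium moisture content.

```python
# Hedged sketch only: a generic first-order mass-uptake curve, M(t)/M_eq = 1 - exp(-t/tau),
# with an ASSUMED illustrative time-constant scaling (not the paper's expression).
import numpy as np

def uptake_fraction(t, tau):
    """First-order approximation of fractional mass uptake."""
    return 1.0 - np.exp(-t / tau)

def time_constant(rel_density, tortuosity, m_eq, d_eff=2.6e-10, half_thickness=2e-3):
    """Illustrative scaling (assumption): tau grows with tortuosity, solid fraction
    and equilibrium moisture capacity, and with the square of the diffusion length.
    d_eff is an assumed pore-space vapor diffusivity (m^2/s); lengths in metres."""
    porosity = 1.0 - rel_density
    return tortuosity * (rel_density * m_eq / max(porosity, 1e-6)) * half_thickness**2 / d_eff

tau = time_constant(rel_density=0.85, tortuosity=2.0, m_eq=0.05)
t = np.linspace(0, 5 * tau, 6)
print([round(float(u), 3) for u in uptake_fraction(t, tau)])
```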
Predictive model for local scour downstream of hydrokinetic turbines in erodible channels
NASA Astrophysics Data System (ADS)
Musa, Mirko; Heisel, Michael; Guala, Michele
2018-02-01
A modeling framework is derived to predict the scour induced by marine hydrokinetic turbines installed on fluvial or tidal erodible bed surfaces. Following recent advances in bridge scour formulation, the phenomenological theory of turbulence is applied to describe the flow structures that dictate the equilibrium scour depth condition at the turbine base. Using scaling arguments, we link the turbine operating conditions to the flow structures and scour depth through the drag force exerted by the device on the flow. The resulting theoretical model predicts scour depth using dimensionless parameters and considers two potential scenarios depending on the proximity of the turbine rotor to the erodible bed. The model is validated at the laboratory scale with experimental data comprising the two sediment mobility regimes (clear water and live bed), different turbine configurations, hydraulic settings, bed material compositions, and migrating bedform types. The present work provides future developers of flow energy conversion technologies with a physics-based predictive formula for local scour depth beneficial to feasibility studies and anchoring system design. A potential prototype-scale deployment in a large sandy river is also considered with our model to quantify how the expected scour depth varies as a function of the flow discharge and rotor diameter.
Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref.1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC--which allows the incorporation of complex local inelastic constitutive models--MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can and have been built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.
Continuum Model of Gas Uptake for Inhomogeneous Fluids
Ihm, Yungok; Cooper, Valentino R.; Vlcek, Lukas; ...
2017-07-20
We describe a continuum model of gas uptake for inhomogeneous fluids (CMGIF) and use it to predict fluid adsorption in porous materials directly from gas-substrate interaction energies determined by first principles calculations or accurate effective force fields. The method uses a perturbation approach to correct bulk fluid interactions for local inhomogeneities caused by gas-substrate interactions, and predicts local pressure and density of the adsorbed gas. The accuracy and limitations of the model are tested by comparison with the results of Grand Canonical Monte Carlo simulations of hydrogen uptake in metal-organic frameworks (MOFs). We show that the approach provides accurate predictions at room temperature and at low temperatures for less strongly interacting materials. As a result, the speed of the CMGIF method makes it a promising candidate for high-throughput materials discovery in connection with existing databases of nano-porous materials.
Thomson, Robin B; Alderman, Rachael L; Tuck, Geoffrey N; Hobday, Alistair J
2015-01-01
The impacts of climate change on marine species are often compounded by other stressors that make direct attribution and prediction difficult. Shy albatrosses (Thalassarche cauta) breeding on Albatross Island, Tasmania, show an unusually restricted foraging range, allowing easier discrimination between the influence of non-climate stressors (fisheries bycatch) and environmental variation. Local environmental conditions (rainfall, air temperature, and sea-surface height, an indicator of upwelling) during the vulnerable chick-rearing stage, have been correlated with breeding success of shy albatrosses. We use an age-, stage- and sex-structured population model to explore potential relationships between local environmental factors and albatross breeding success while accounting for fisheries bycatch by trawl and longline fisheries. The model uses time-series of observed breeding population counts, breeding success, adult and juvenile survival rates and a bycatch mortality observation for trawl fishing to estimate fisheries catchability, environmental influence, natural mortality rate, density dependence, and productivity. Observed at-sea distributions for adult and juvenile birds were coupled with reported fishing effort to estimate vulnerability to incidental bycatch. The inclusion of rainfall, temperature and sea-surface height as explanatory variables for annual chick mortality rate was statistically significant. Global climate models predict little change in future local average rainfall, however, increases are forecast in both temperatures and upwelling, which are predicted to have detrimental and beneficial effects, respectively, on breeding success. The model shows that mitigation of at least 50% of present bycatch is required to offset losses due to future temperature changes, even if upwelling increases substantially. Our results highlight the benefits of using an integrated modeling approach, which uses available demographic as well as environmental data within a single estimation framework, to provide future predictions. Such predictions inform the development of management options in the face of climate change.
Modeling of Dendritic Structure and Microsegregation in Solidification of Al-Rich Quaternary Alloys
NASA Astrophysics Data System (ADS)
Dai, Ting; Zhu, Mingfang; Chen, Shuanglin; Cao, Weisheng
A two-dimensional cellular automaton (CA) model is coupled with a CALPHAD tool for the simulation of dendritic growth and microsegregation in solidification of quaternary alloys. The dynamics of dendritic growth is calculated according to the difference between the local equilibrium liquidus temperature and the actual temperature, incorporating the Gibbs-Thomson effect and preferential dendritic growth orientations. Based on the local liquid compositions determined by solving the solutal transport equation in the domain, the local equilibrium liquidus temperature and the solid concentrations at the solid/liquid (SL) interface are calculated by the CALPHAD tool. The model was validated through comparison of the simulated results with the Scheil predictions for the solid composition profiles as a function of solid fraction in an Al-6wt%Cu-0.6wt%Mg-1wt%Si alloy. It is demonstrated that the model is capable of not only reproducing realistic dendrite morphologies, but also reasonably predicting microsegregation patterns in solidification of Al-rich quaternary alloys.
Scoring and staging systems using cox linear regression modeling and recursive partitioning.
Lee, J W; Um, S H; Lee, J B; Mun, J; Cho, H
2006-01-01
Scoring and staging systems are used to determine the order and class of data according to predictors. Systems used for medical data, such as the Child-Turcotte-Pugh scoring and staging systems for ordering and classifying patients with liver disease, are often derived strictly from physicians' experience and intuition. We construct objective and data-based scoring/staging systems using statistical methods. We consider Cox linear regression modeling and recursive partitioning techniques for censored survival data. In particular, to obtain a target number of stages we propose cross-validation and amalgamation algorithms. We also propose an algorithm for constructing scoring and staging systems by integrating local Cox linear regression models into recursive partitioning, so that we can retain the merits of both methods such as superior predictive accuracy, ease of use, and detection of interactions between predictors. The staging system construction algorithms are compared by cross-validation evaluation of real data. The data-based cross-validation comparison shows that Cox linear regression modeling is somewhat better than recursive partitioning when there are only continuous predictors, while recursive partitioning is better when there are significant categorical predictors. The proposed local Cox linear recursive partitioning has better predictive accuracy than Cox linear modeling and simple recursive partitioning. This study indicates that integrating local linear modeling into recursive partitioning can significantly improve prediction accuracy in constructing scoring and staging systems.
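A minimal sketch of the scoring/staging idea is shown below: fit a Cox proportional-hazards model, use its linear predictor (partial hazard) as a risk score, and cut the score into a target number of stages. It assumes the lifelines package and synthetic liver-disease-like covariates; the paper's cross-validation and amalgamation algorithms, and its recursive-partitioning hybrid, are not reproduced.

```python
# Hedged sketch: Cox regression risk score cut into stages. Data, columns and the
# quantile cut rule are illustrative assumptions; assumes the `lifelines` package.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 300
df = pd.DataFrame({
    "albumin": rng.normal(3.5, 0.5, n),
    "bilirubin": rng.lognormal(0.5, 0.6, n),
    "ascites": rng.integers(0, 2, n),
})
risk_true = -0.8 * df["albumin"] + 0.6 * np.log(df["bilirubin"]) + 0.7 * df["ascites"]
df["duration"] = rng.exponential(np.exp(-risk_true))        # synthetic survival times
df["event"] = rng.integers(0, 2, n)                          # synthetic censoring indicator

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
score = cph.predict_partial_hazard(df)                       # relative risk score
stages = pd.qcut(score, q=3, labels=["stage 1", "stage 2", "stage 3"])
print(stages.value_counts())
```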
Local Burn-Up Effects in the NBSR Fuel Element
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown N. R.; Hanson A.; Diamond, D.
2013-01-31
This study addresses the over-prediction of local power when the burn-up distribution in each half-element of the NBSR is assumed to be uniform. A single-element model was utilized to quantify the impact of axial and plate-wise burn-up on the power distribution within the NBSR fuel elements for both high-enriched uranium (HEU) and low-enriched uranium (LEU) fuel. To validate this approach, key parameters in the single-element model were compared to parameters from an equilibrium core model, including neutron energy spectrum, power distribution, and integral U-235 vector. The power distribution changes significantly when incorporating local burn-up effects and has lower power peaking relative to the uniform burn-up case. In the uniform burn-up case, the axial relative power peaking is over-predicted by as much as 59% in the HEU single-element and 46% in the LEU single-element. In the uniform burn-up case, the plate-wise power peaking is over-predicted by as much as 23% in the HEU single-element and 18% in the LEU single-element. The degree of over-prediction increases as a function of burn-up cycle, with the greatest over-prediction at the end of Cycle 8. The thermal flux peak is always in the mid-plane gap; this causes the local cumulative burn-up near the mid-plane gap to be significantly higher than the fuel element average. Uniform burn-up distribution throughout a half-element also causes a bias in fuel element reactivity worth, due primarily to the neutronic importance of the fissile inventory in the mid-plane gap region.
NASA Astrophysics Data System (ADS)
Dæhli, Lars Edvard Bryhni; Morin, David; Børvik, Tore; Hopperstad, Odd Sture
2017-10-01
Numerical unit cell models of an approximative representative volume element for a porous ductile solid are utilized to investigate differences in the mechanical response between a quadratic and a non-quadratic matrix yield surface. A Hershey equivalent stress measure with two distinct values of the yield surface exponent is employed as the matrix description. Results from the unit cell calculations are further used to calibrate a heuristic extension of the Gurson model which incorporates effects of the third deviatoric stress invariant. An assessment of the porous plasticity model reveals its ability to describe the unit cell response to some extent, however underestimating the effect of the Lode parameter for the lower triaxiality ratios imposed in this study when compared to unit cell simulations. Ductile failure predictions by means of finite element simulations using a unit cell model that resembles an imperfection band are then conducted to examine how the non-quadratic matrix yield surface influences the failure strain as compared to the quadratic matrix yield surface. Further, strain localization predictions based on bifurcation analyses and imperfection band analyses are undertaken using the calibrated porous plasticity model. These simulations are then compared to the unit cell calculations in order to elucidate the differences between the various modelling strategies. The current study reveals that strain localization analyses using an imperfection band model and a spatially discretized unit cell are in reasonable agreement, while the bifurcation analyses predict higher strain levels at localization. Imperfection band analyses are finally used to calculate failure loci for the quadratic and the non-quadratic matrix yield surface under a wide range of loading conditions. The underlying matrix yield surface is demonstrated to have a pronounced influence on the onset of strain localization.
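The matrix description referred to above is the Hershey equivalent stress; a short sketch is given below, where the exponent a = 2 recovers the quadratic (von Mises) surface and a larger exponent gives a non-quadratic one. The stress values and exponents are illustrative, not those calibrated in the study.

```python
# Hedged sketch of the Hershey equivalent stress in principal stresses:
# sigma_eq = ( 0.5 * ( |s1 - s2|^a + |s2 - s3|^a + |s3 - s1|^a ) )^(1/a)
# a = 2 recovers von Mises; a higher exponent gives the non-quadratic surface
# discussed above. The numbers below are illustrative assumptions.
def hershey_equivalent_stress(principal_stresses, a):
    s1, s2, s3 = principal_stresses
    return (0.5 * (abs(s1 - s2)**a + abs(s2 - s3)**a + abs(s3 - s1)**a))**(1.0 / a)

sigma = (300.0, 100.0, -50.0)          # MPa, hypothetical principal stresses
for a in (2, 8):                       # quadratic vs. non-quadratic exponent
    print(f"a = {a}: sigma_eq = {hershey_equivalent_stress(sigma, a):.1f} MPa")
```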
NASA Astrophysics Data System (ADS)
Williams, T. R. N.; Baxter, S.; Hartley, L.; Appleyard, P.; Koskinen, L.; Vanhanarkaus, O.; Selroos, J. O.; Munier, R.
2017-12-01
Discrete fracture network (DFN) models provide a natural analysis framework for rock conditions where flow is predominately through a series of connected discrete features. Mechanistic models to predict the structural patterns of networks are generally intractable due to inherent uncertainties (e.g. deformation history) and as such fracture characterisation typically involves empirical descriptions of fracture statistics for location, intensity, orientation, size, aperture etc. from analyses of field data. These DFN models are used to make probabilistic predictions of likely flow or solute transport conditions for a range of applications in underground resource and construction projects. However, there are many instances when the volumes in which predictions are most valuable are close to data sources. For example, in the disposal of hazardous materials such as radioactive waste, accurate predictions of flow-rates and network connectivity around disposal areas are required for long-term safety evaluation. The problem at hand is thus: how can probabilistic predictions be conditioned on local-scale measurements? This presentation demonstrates conditioning of a DFN model based on the current structural and hydraulic characterisation of the Demonstration Area at the ONKALO underground research facility. The conditioned realisations honour (to a required level of similarity) the locations, orientations and trace lengths of fractures mapped on the surfaces of the nearby ONKALO tunnels and pilot drillholes. Other data used as constraints include measurements from hydraulic injection tests performed in pilot drillholes and inflows to the subsequently reamed experimental deposition holes. Numerical simulations using this suite of conditioned DFN models provides a series of prediction-outcome exercises detailing the reliability of the DFN model to make local-scale predictions of measured geometric and hydraulic properties of the fracture system; and provides an understanding of the reduction in uncertainty in model predictions for conditioned DFN models honouring different aspects of this data.
A new approach to complete aircraft landing gear noise prediction
NASA Astrophysics Data System (ADS)
Lopes, Leonard V.
This thesis describes a new landing gear noise prediction system developed at The Pennsylvania State University, called Landing Gear Model and Acoustic Prediction code (LGMAP). LGMAP is used to predict the noise of an isolated or installed landing gear geometry. The predictions include several techniques to approximate the aeroacoustic and aerodynamic interactions of landing gear noise generation. These include (1) a method for approximating the shielding of noise caused by the landing gear geometry, (2) accounting for local flow variations due to the wing geometry, (3) the interaction of the landing gear wake with high-lift devices, and (4) a method for estimating the effect of gross landing gear design changes on local flow and acoustic radiation. The LGMAP aeroacoustic prediction system has been created to predict the noise generated by a given landing gear. The landing gear is modeled as a set of simple components that represent individual parts of the structure. Each component, ranging from large to small, is represented by a simple geometric shape and the unsteady flow on the component is modeled based on an individual characteristic length, local flow velocity, and the turbulent flow environment. A small set of universal models is developed and applied to a large range of similar components. These universal models, combined with the actual component geometry and local environment, give a unique loading spectrum and acoustic field for each component. Then, the sum of all the individual components in the complete configuration is used to model the high level of geometric complexity typical of current aircraft undercarriage designs. A line of sight shielding algorithm based on scattering by a two-dimensional cylinder approximates the effect of acoustic shielding caused by the landing gear. Using the scattering from a cylinder in two-dimensions at an observer position directly behind the cylinder, LGMAP is able to estimate the reduction in noise due to shielding by the landing gear geometry. This thesis compares predictions with data from a recent wind tunnel experiment conducted at NASA Langley Research Center, and demonstrates that including the acoustic scattering can improve the predictions by LGMAP at all observer positions. In this way, LGMAP provides more information about the actual noise propagation than simple empirical schemes. Two-dimensional FLUENT calculations of approximate wing cross-sections are used by LGMAP to compute the change in noise due to the change in local flow velocity in the vicinity of the landing gear due to circulation around the wing. By varying angle of attack and flap deflection angle in the CFD calculations, LGMAP is able to predict the noise level change due to the change in local flow velocity in the landing gear vicinity. A brief trade study is performed on the angle of attack of the wing and flap deflection angle of the flap system. It is shown that increasing the angle of attack or flap deflection angle reduces the flow velocity in the vicinity of the landing gear, and therefore the predicted noise. Predictions demonstrate the ability of the prediction system to quickly estimate the change in landing gear noise caused by a change in wing configuration. A three-dimensional immersed boundary CFD calculation of simplified landing gear geometries provides relatively quick estimates of the mean flow around the landing gear. 
The mean flow calculation provides the landing gear wake geometry for the prediction of trailing edge noise associated with the interaction of the landing gear wake with the high lift devices. Using wind tunnel experiments that relate turbulent intensity to wake size and the Ffowcs Williams and Hall trailing edge noise equation for the acoustic calculation, LGMAP is able to predict the landing gear wake generated trailing edge noise. In this manner, LGMAP includes the effect of the interaction of the landing gear's wake with the wing/flap system on the radiated noise. The final prediction technique implemented includes local flow calculations of a landing gear with various truck angles using the immersed boundary scheme. Using the mean flow calculation, LGMAP is able to predict noise changes caused by gross changes in landing gear design. Calculations of the mean flow around the landing gear show that the rear wheels of a six-wheel bogie experience significantly reduced mean flow velocity when the truck is placed in a toe-down configuration. This reduction in the mean flow results in a lower noise signature from the rear wheel. Since the noise from a six-wheel bogie at flyover observer positions is primarily composed of wheel noise, the reduced local flow velocity results in a reduced noise signature from the entire landing gear geometry. Comparisons with measurements show the accuracy of the predictions of landing gear noise levels and directivity. Airframe noise predictions for the landing gear of a complete aircraft are described including all of the above mentioned developments and prediction techniques. These show that the nose gear noise and the landing gear wake/flap interaction noise, while not significantly changing the overall shape of the radiated noise, do contribute to the overall noise from the installed landing gear.
Taylor, R Andrew; Pare, Joseph R; Venkatesh, Arjun K; Mowafi, Hani; Melnick, Edward R; Fleischman, William; Hall, M Kennedy
2016-03-01
Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data-driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case. This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in-hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi-square statistics. There were 5,278 visits among 4,676 unique patients who met criteria for sepsis. Of the 4,222 patients in the training group, 210 (5.0%) died during hospitalization, and of the 1,056 patients in the validation group, 50 (4.7%) died during hospitalization. The AUCs with 95% confidence intervals (CIs) for the different models were as follows: random forest model, 0.86 (95% CI = 0.82 to 0.90); CART model, 0.69 (95% CI = 0.62 to 0.77); logistic regression model, 0.76 (95% CI = 0.69 to 0.82); CURB-65, 0.73 (95% CI = 0.67 to 0.80); MEDS, 0.71 (95% CI = 0.63 to 0.77); and mREMS, 0.72 (95% CI = 0.65 to 0.79). The random forest model AUC was statistically different from all other models (p ≤ 0.003 for all comparisons). In this proof-of-concept study, a local big data-driven, machine learning approach outperformed existing CDRs as well as traditional analytic techniques for predicting in-hospital mortality of ED patients with sepsis. Future research should prospectively evaluate the effectiveness of this approach and whether it translates into improved clinical outcomes for high-risk sepsis patients. The methods developed serve as an example of a new model for predictive analytics in emergency care that can be automated, applied to other clinical outcomes of interest, and deployed in EHRs to enable locally relevant clinical predictions. © 2015 by the Society for Academic Emergency Medicine.
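The evaluation described above can be sketched as an 80/20 split with a random forest compared against a logistic regression baseline by AUC. The features and outcomes below are synthetic stand-ins for the EHR variables, and no clinical decision rule scores are computed.

```python
# Hedged sketch of the comparison described above: random forest vs. logistic
# regression on an 80/20 split, evaluated by AUC. Synthetic data stand in for the
# >500 EHR variables; class imbalance roughly mimics the reported ~5% mortality.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5278, n_features=500, n_informative=40,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0,
                                          stratify=y)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)

for name, model in [("random forest", rf), ("logistic regression", lr)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```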
Joint System of the National Hydrometeorology for disaster prevention
NASA Astrophysics Data System (ADS)
Lim, J.; Cho, K.; Lee, Y. S.; Jung, H. S.; Yoo, H. D.; Ryu, D.; Kwon, J.
2014-12-01
Hydrological disaster relief expenditure accounts for as much as 70 percent of total disaster-related expenditure in Korea. Since the response to and recovery from disasters are normally based on previous experience, there have been limitations in dealing with the increasingly frequent, short-duration localized heavy rainfall of the climate change era. It therefore became necessary to establish a system that can respond to a disaster in advance through the analysis and prediction of hydrometeorological information. Because a wide range of big data is essential, this cannot be done by a single agency alone. For this reason, the three hydrometeorology-related agencies cooperated to establish a pilot (trial) system for the Soemjingang basin in 2013. The three governmental agencies are the National Emergency Management Agency (NEMA), in charge of disaster prevention and public safety; the National Geographic Information Institute (NGII, under the Ministry of Land, Infrastructure and Transport), in charge of geographic data; and the Korea Meteorological Administration (KMA), in charge of weather information. The pilot system was designed to respond to disasters in advance by providing flash-flood damage prediction information to public safety officers, using high-resolution precipitation prediction data provided by the KMA and high-precision geographic data provided by the NGII. To produce high-resolution precipitation prediction data, the KMA downscaled a 25 km × 25 km global model to a 3 km × 3 km local model and runs the local model twice a day. To maximize the utility of the weather prediction information, the KMA provides predictions for the Soemjingang basin at 1-hour intervals out to 7 days, to monitor and predict not only floods but also drought. As no prediction is complete without a description of its uncertainty, work is planned to continuously improve the characterization of uncertainty in the weather predictions and their impacts. The flow chart used to produce and provide the weather prediction information will be presented in more detail at the AGU Fall Meeting.
The effect of using genealogy-based haplotypes for genomic prediction.
Edriss, Vahid; Fernando, Rohan L; Su, Guosheng; Lund, Mogens S; Guldbrandtsen, Bernt
2013-03-06
Genomic prediction uses two sources of information: linkage disequilibrium between markers and quantitative trait loci, and additive genetic relationships between individuals. One way to increase the accuracy of genomic prediction is to capture more linkage disequilibrium by regression on haplotypes instead of regression on individual markers. The aim of this study was to investigate the accuracy of genomic prediction using haplotypes based on local genealogy information. A total of 4429 Danish Holstein bulls were genotyped with the 50K SNP chip. Haplotypes were constructed using local genealogical trees. Effects of haplotype covariates were estimated with two types of prediction models: (1) assuming that effects had the same distribution for all haplotype covariates, i.e. the GBLUP method and (2) assuming that a large proportion (π) of the haplotype covariates had zero effect, i.e. a Bayesian mixture method. About 7.5 times more covariate effects were estimated when fitting haplotypes based on local genealogical trees compared to fitting individual markers. Genealogy-based haplotype clustering slightly increased the accuracy of genomic prediction and, in some cases, decreased the bias of prediction. With the Bayesian method, accuracy of prediction was less sensitive to parameter π when fitting haplotypes compared to fitting markers. Use of haplotypes based on genealogy can slightly increase the accuracy of genomic prediction. Improved methods to cluster the haplotypes constructed from local genealogy could lead to additional gains in accuracy.
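Method (1) above, in which all covariate effects share one normal distribution, is equivalent to ridge regression on the (marker or haplotype) covariates with shrinkage lambda = sigma_e^2 / sigma_covariate^2; a minimal sketch on synthetic genotypes is given below. The variances, covariate counts, and accuracy measure are illustrative.

```python
# Hedged sketch of SNP-/haplotype-BLUP (a GBLUP-equivalent model): all covariate
# effects are shrunk equally, i.e. ridge regression with lambda = sigma_e^2 / sigma_cov^2.
# Genotypes, effect sizes and variance components are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(7)
n_animals, n_cov = 1000, 2000
Z = rng.binomial(2, 0.3, (n_animals, n_cov)).astype(float)   # covariate counts (0/1/2)
Z -= Z.mean(axis=0)                                          # center columns
true_effects = rng.normal(0, 0.05, n_cov)
y = Z @ true_effects + rng.normal(0, 1.0, n_animals)         # phenotypes

sigma_e2, sigma_cov2 = 1.0, 0.05**2
lam = sigma_e2 / sigma_cov2
beta_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(n_cov), Z.T @ y)

gebv = Z @ beta_hat                                          # genomic breeding values
print("accuracy (corr with true genetic values):",
      round(float(np.corrcoef(gebv, Z @ true_effects)[0, 1]), 3))
```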
Gustafson, E.J.; Knutson, M.G.; Niemi, G.J.; Friberg, M.
2002-01-01
We constructed alternative spatial models at two scales to predict Brown-headed Cowbird (Molothrus ater) parasitism rates from land cover maps. The local-scale models tested competing hypotheses about the relationship between cowbird parasitism and distance of host nests from a forest edge (forest-nonforest boundary). The landscape models tested competing hypotheses about how landscape features (e.g., forests, agricultural fields) interact to determine rates of cowbird parasitism. The models incorporate spatial neighborhoods with a radius of 2.5 km in their formulation, reflecting the scale of the majority of cowbird commuting activity. Field data on parasitism by cowbirds (parasitism rate and number of cowbird eggs per nest) were collected at 28 sites in the Driftless Area Ecoregion of Wisconsin, Minnesota, and Iowa and were compared to the predictions of the alternative models. At the local scale, there was a significant positive relationship between cowbird parasitism and mean distance of nest sites from the forest edge. At the landscape scale, the best fitting models were the forest-dependent and forest-fragmentation-dependent models, in which more heavily forested and less fragmented landscapes had higher parasitism rates. However, much of the explanatory power of these models results from the inclusion of the local-scale relationship in these models. We found lower rates of cowbird parasitism than did most Midwestern studies, and we identified landscape patterns of cowbird parasitism that are opposite to those reported in several other studies of Midwestern songbirds. We caution that cowbird parasitism patterns can be unpredictable, depending upon ecoregional location and the spatial extent, and that our models should be tested in other ecoregions before they are applied there. Our study confirms that cowbird biology has a strong spatial component, and that improved spatial models applied at multiple spatial scales will be required to predict the effects of landscape and forest management on cowbird parasitism of forest birds.
Testing the consistency of three-point halo clustering in Fourier and configuration space
NASA Astrophysics Data System (ADS)
Hoffmann, K.; Gaztañaga, E.; Scoccimarro, R.; Crocce, M.
2018-05-01
We compare reduced three-point correlations Q of matter, haloes (as proxies for galaxies) and their cross-correlations, measured in a total simulated volume of ~100 (h^{-1} Gpc)^3, to predictions from leading order perturbation theory on a large range of scales in configuration space. Predictions for haloes are based on the non-local bias model, employing linear (b1) and non-linear (c2, g2) bias parameters, which have been constrained previously from the bispectrum in Fourier space. We also study predictions from two other bias models, one local (g2 = 0) and one in which c2 and g2 are determined by b1 via approximately universal relations. Overall, measurements and predictions agree when Q is derived for triangles with (r1 r2 r3)^{1/3} ≳ 60 h^{-1} Mpc, where r1, r2, r3 are the sizes of the triangle legs. Predictions for Q_matter, based on the linear power spectrum, show significant deviations from the measurements at the BAO scale (given our small measurement errors), which strongly decrease when adding a damping term or using the non-linear power spectrum, as expected. Predictions for Q_halo agree best with measurements at large scales when considering non-local contributions. The universal bias model works well for haloes and might therefore also be useful for tightening constraints on b1 from Q in galaxy surveys. Such constraints are independent of the amplitude of matter density fluctuations (σ8) and hence break the degeneracy between b1 and σ8, present in galaxy two-point correlations.
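For readers less familiar with the statistic, the reduced three-point correlation quoted above is conventionally defined by dividing the connected three-point function by the cyclic sum of two-point function products. The expression below is the standard hierarchical definition, not an equation reproduced from this paper:

```latex
Q(r_1, r_2, r_3) \equiv
\frac{\zeta(r_1, r_2, r_3)}
     {\xi(r_1)\,\xi(r_2) + \xi(r_2)\,\xi(r_3) + \xi(r_3)\,\xi(r_1)} ,
```

where ζ is the connected three-point correlation function and ξ the two-point correlation function evaluated on the triangle legs. In this ratio form, Q is approximately independent of the fluctuation amplitude at leading order, which is why constraints on b1 from Q are insensitive to σ8, as the abstract notes.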
Beniczky, Sándor; Lantz, Göran; Rosenzweig, Ivana; Åkeson, Per; Pedersen, Birthe; Pinborg, Lars H; Ziebell, Morten; Jespersen, Bo; Fuglsang-Frederiksen, Anders
2013-10-01
Although precise identification of the seizure-onset zone is an essential element of presurgical evaluation, source localization of ictal electroencephalography (EEG) signals has received little attention. The aim of our study was to estimate the accuracy of source localization of rhythmic ictal EEG activity using a distributed source model. Source localization of rhythmic ictal scalp EEG activity was performed in 42 consecutive cases fulfilling inclusion criteria. The study was designed according to recommendations for studies on diagnostic accuracy (STARD). The initial ictal EEG signals were selected using a standardized method, based on frequency analysis and voltage distribution of the ictal activity. A distributed source model-local autoregressive average (LAURA)-was used for the source localization. Sensitivity, specificity, and measurement of agreement (kappa) were determined based on the reference standard-the consensus conclusion of the multidisciplinary epilepsy surgery team. Predictive values were calculated from the surgical outcome of the operated patients. To estimate the clinical value of the ictal source analysis, we compared the likelihood ratios of concordant and discordant results. Source localization was performed blinded to the clinical data, and before the surgical decision. Reference standard was available for 33 patients. The ictal source localization had a sensitivity of 70% and a specificity of 76%. The mean measurement of agreement (kappa) was 0.61, corresponding to substantial agreement (95% confidence interval (CI) 0.38-0.84). Twenty patients underwent resective surgery. The positive predictive value (PPV) for seizure freedom was 92% and the negative predictive value (NPV) was 43%. The likelihood ratio was nine times higher for the concordant results, as compared with the discordant ones. Source localization of rhythmic ictal activity using a distributed source model (LAURA) for the ictal EEG signals selected with a standardized method is feasible in clinical practice and has a good diagnostic accuracy. Our findings encourage clinical neurophysiologists assessing ictal EEGs to include this method in their armamentarium. Wiley Periodicals, Inc. © 2013 International League Against Epilepsy.
Parameterizing sorption isotherms using a hybrid global-local fitting procedure.
Matott, L Shawn; Singh, Anshuman; Rabideau, Alan J
2017-05-01
Predictive modeling of the transport and remediation of groundwater contaminants requires an accurate description of the sorption process, which is usually provided by fitting an isotherm model to site-specific laboratory data. Commonly used calibration procedures, listed in order of increasing sophistication, include: trial-and-error, linearization, non-linear regression, global search, and hybrid global-local search. Given the considerable variability in fitting procedures applied in published isotherm studies, we investigated the importance of algorithm selection through a series of numerical experiments involving 13 previously published sorption datasets. These datasets, considered representative of state-of-the-art for isotherm experiments, had been previously analyzed using trial-and-error, linearization, or non-linear regression methods. The isotherm expressions were re-fit using a 3-stage hybrid global-local search procedure (i.e. global search using particle swarm optimization followed by Powell's derivative free local search method and Gauss-Marquardt-Levenberg non-linear regression). The re-fitted expressions were then compared to previously published fits in terms of the optimized weighted sum of squared residuals (WSSR) fitness function, the final estimated parameters, and the influence on contaminant transport predictions - where easily computed concentration-dependent contaminant retardation factors served as a surrogate measure of likely transport behavior. Results suggest that many of the previously published calibrated isotherm parameter sets were local minima. In some cases, the updated hybrid global-local search yielded order-of-magnitude reductions in the fitness function. In particular, of the candidate isotherms, the Polanyi-type models were most likely to benefit from the use of the hybrid fitting procedure. In some cases, improvements in fitness function were associated with slight (<10%) changes in parameter values, but in other cases significant (>50%) changes in parameter values were noted. Despite these differences, the influence of isotherm misspecification on contaminant transport predictions was quite variable and difficult to predict from inspection of the isotherms. Copyright © 2017 Elsevier B.V. All rights reserved.
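As a concrete illustration of a hybrid global-local calibration, the sketch below fits a Freundlich isotherm by running a global stochastic search followed by a derivative-based local refinement. SciPy's `differential_evolution` and `least_squares` are used as stand-ins for the particle swarm, Powell, and Gauss-Marquardt-Levenberg stages described above, and the data values and weights are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution, least_squares

# Hypothetical batch sorption data: aqueous concentration C (mg/L) vs sorbed S (mg/kg)
C = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
S = np.array([1.2, 2.1, 7.9, 13.5, 45.0, 72.0, 230.0])
w = 1.0 / np.maximum(S, 1e-6)             # simple weights for the WSSR objective

def freundlich(params, c):
    kf, n = params
    return kf * c ** n

def wssr(params):                         # weighted sum of squared residuals
    return np.sum((w * (S - freundlich(params, C))) ** 2)

# Stage 1: global search over broad bounds (stand-in for particle swarm)
bounds = [(1e-3, 1e3), (0.1, 2.0)]
global_fit = differential_evolution(wssr, bounds, seed=1, tol=1e-10)

# Stage 2: local gradient-based refinement (stand-in for Powell + GML regression)
local_fit = least_squares(lambda p: w * (S - freundlich(p, C)),
                          x0=global_fit.x, bounds=([1e-3, 0.1], [1e3, 2.0]))

kf, n = local_fit.x
print(f"Kf = {kf:.3f}, n = {n:.3f}, WSSR = {wssr(local_fit.x):.4f}")
```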
Automated antibody structure prediction using Accelrys tools: Results and best practices
Fasnacht, Marc; Butenhof, Ken; Goupil-Lamy, Anne; Hernandez-Guzman, Francisco; Huang, Hongwei; Yan, Lisa
2014-01-01
We describe the methodology and results from our participation in the second Antibody Modeling Assessment experiment. During the experiment we predicted the structure of eleven unpublished antibody Fv fragments. Our prediction methods centered on template-based modeling; potential templates were selected from an antibody database based on their sequence similarity to the target in the framework regions. Depending on the quality of the templates, we constructed models of the antibody framework regions either using a single, chimeric or multiple template approach. The hypervariable loop regions in the initial models were rebuilt by grafting the corresponding regions from suitable templates onto the model. For the H3 loop region, we further refined models using ab initio methods. The final models were subjected to constrained energy minimization to resolve severe local structural problems. The analysis of the models submitted show that Accelrys tools allow for the construction of quite accurate models for the framework and the canonical CDR regions, with RMSDs to the X-ray structure on average below 1 Å for most of these regions. The results show that accurate prediction of the H3 hypervariable loops remains a challenge. Furthermore, model quality assessment of the submitted models show that the models are of quite high quality, with local geometry assessment scores similar to that of the target X-ray structures. Proteins 2014; 82:1583–1598. © 2014 The Authors. Proteins published by Wiley Periodicals, Inc. PMID:24833271
Developing Local Scale, High Resolution, Data to Interface with Numerical Storm Models
NASA Astrophysics Data System (ADS)
Witkop, R.; Becker, A.; Stempel, P.
2017-12-01
High-resolution, physical storm models that can rapidly predict storm surge, inundation, rainfall, wind velocity and wave height at the intra-facility scale for any storm affecting Rhode Island have been developed by researchers at the University of Rhode Island's (URI's) Graduate School of Oceanography (GSO) (Ginis et al., 2017). At the same time, URI's Marine Affairs Department has developed methods that incorporate individual geographic points into GSO's models and enable the models to accurately incorporate local-scale, high-resolution data (Stempel et al., 2017). This combination allows URI's storm models to predict any storm's impacts on individual Rhode Island facilities in near real time. The research presented here determines how a coastal Rhode Island town's critical facility managers (FMs) perceive their assets as being vulnerable to quantifiable hurricane-related forces at the individual facility scale and explores methods to elicit this information from FMs in a format usable for incorporation into URI's storm models.
Local relative density modulates failure and strength in vertically aligned carbon nanotubes.
Pathak, Siddhartha; Mohan, Nisha; Decolvenaere, Elizabeth; Needleman, Alan; Bedewy, Mostafa; Hart, A John; Greer, Julia R
2013-10-22
Micromechanical experiments, image analysis, and theoretical modeling revealed that local failure events and compressive stresses of vertically aligned carbon nanotubes (VACNTs) were uniquely linked to relative density gradients. Edge detection analysis of systematically obtained scanning electron micrographs was used to quantify a microstructural figure-of-merit related to relative local density along VACNT heights. Sequential bottom-to-top buckling and hardening in stress-strain response were observed in samples with smaller relative density at the bottom. When density gradient was insubstantial or reversed, bottom regions always buckled last, and a flat stress plateau was obtained. These findings were consistent with predictions of a 2D material model based on a viscoplastic solid with plastic non-normality and a hardening-softening-hardening plastic flow relation. The hardening slope in compression generated by the model was directly related to the stiffness gradient along the sample height, and hence to the local relative density. These results demonstrate that a microstructural figure-of-merit, the effective relative density, can be used to quantify and predict the mechanical response.
Seasonal Drought Prediction: Advances, Challenges, and Future Prospects
NASA Astrophysics Data System (ADS)
Hao, Zengchao; Singh, Vijay P.; Xia, Youlong
2018-03-01
Drought prediction is of critical importance to early warning for drought management. This review provides a synthesis of drought prediction based on statistical, dynamical, and hybrid methods. Statistical drought prediction is achieved by modeling the relationship between drought indices of interest and a suite of potential predictors, including large-scale climate indices, local climate variables, and land initial conditions. Dynamical meteorological drought prediction relies on seasonal climate forecasts from general circulation models (GCMs), which can be employed to drive hydrological models for agricultural and hydrological drought prediction, with the predictability determined by both climate forcings and initial conditions. Challenges still exist in drought prediction at long lead times and under a changing environment resulting from natural and anthropogenic factors. Future research prospects to improve drought prediction include, but are not limited to, high-quality data assimilation, improved model development with key processes related to drought occurrence, optimal ensemble forecasting to select or weight ensembles, and hybrid drought prediction to merge statistical and dynamical forecasts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirian, Yves; Foffa, Stefano; Kunz, Martin
We study the cosmological predictions of two recently proposed non-local modifications of General Relativity. Both models have the same number of parameters as ΛCDM, with a mass parameter m replacing the cosmological constant. We implement the cosmological perturbations of the non-local models into a modification of the CLASS Boltzmann code, and we make a full comparison to CMB, BAO and supernova data. We find that the non-local models fit these datasets very well, at the same level as ΛCDM. Among the vast literature on modified gravity models, this is, to our knowledge, the only example which fits the data as well as ΛCDM without requiring any additional parameter. For both non-local models, parameter estimation using Planck+JLA+BAO data gives a value of H0 slightly higher than in ΛCDM.
NASA Astrophysics Data System (ADS)
Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad
2014-10-01
Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the credit assessment process. In this paper, the locally linear model tree algorithm (LOLIMOT) was applied to evaluate the superiority of its performance in predicting customers' credit status. The algorithm was adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of the new classifier. The analytical results indicate that the improved LOLIMOT significantly increases prediction accuracy.
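To make the idea behind a locally linear model tree concrete, the sketch below recursively splits the input space with axis-orthogonal cuts and fits an independent linear model in each resulting region. It is a deliberately simplified stand-in (crisp median splits rather than LOLIMOT's smooth Gaussian validity functions), shown on a toy regression task with invented data; for a scoring application, 0/1 class labels could be used as the targets.

```python
import numpy as np

# Simplified locally linear tree: axis-orthogonal median splits, linear leaf models.
class LocalLinearTree:
    def __init__(self, max_depth=3, min_leaf=20):
        self.max_depth, self.min_leaf = max_depth, min_leaf

    def _fit_linear(self, X, y):
        A = np.hstack([X, np.ones((len(X), 1))])      # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = np.sum((A @ coef - y) ** 2)
        return coef, sse

    def _build(self, X, y, depth):
        coef, sse = self._fit_linear(X, y)
        node = {"coef": coef}
        if depth >= self.max_depth or len(y) < 2 * self.min_leaf:
            return node
        best = None
        for j in range(X.shape[1]):                   # try a median split on each axis
            t = np.median(X[:, j])
            left, right = X[:, j] <= t, X[:, j] > t
            if left.sum() < self.min_leaf or right.sum() < self.min_leaf:
                continue
            sse_split = (self._fit_linear(X[left], y[left])[1] +
                         self._fit_linear(X[right], y[right])[1])
            if best is None or sse_split < best[0]:
                best = (sse_split, j, t, left, right)
        if best is None or best[0] >= sse:            # keep leaf if no split helps
            return node
        _, j, t, left, right = best
        node.update(feature=j, threshold=t,
                    left=self._build(X[left], y[left], depth + 1),
                    right=self._build(X[right], y[right], depth + 1))
        return node

    def fit(self, X, y):
        self.root = self._build(X, y, 0)
        return self

    def _predict_one(self, node, x):
        while "feature" in node:
            node = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
        return np.append(x, 1.0) @ node["coef"]

    def predict(self, X):
        return np.array([self._predict_one(self.root, x) for x in X])

# Toy usage on synthetic data (purely illustrative)
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 2))
y = np.sin(X[:, 0]) + 0.5 * np.abs(X[:, 1]) + rng.normal(0, 0.05, 400)
model = LocalLinearTree(max_depth=4).fit(X, y)
print("train RMSE:", np.sqrt(np.mean((model.predict(X) - y) ** 2)).round(3))
```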
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokhrel, D; Sood, S; Shen, X
2016-06-15
Purpose: To present radiobiological modeling of TCP using tumor size-adjusted BED (s-BED) and PTV D99 for lung SBRT patients treated with the X-ray Voxel Monte Carlo (XVMC) algorithm, apply a parameterized Lyman NTCP model to predict grade-2 radiation pneumonitis (RP), and subsequently compare with clinical outcomes/observations. Methods: Dosimetric parameters and clinical follow-up for XVMC-based lung SBRT patients were retrospectively evaluated. Patients were treated on a Novalis-TX with a hybrid plan (2 non-coplanar partial arcs plus 3–6 static beams) using HD-MLC and a 6 MV SRS beam. For TCP, s-BED modeling was utilized: TCP = exp[(s-BED − TCD50)/k] / (1 + exp[(s-BED − TCD50)/k]), where k = 31 Gy, corresponding to TCD50 = 0 Gy, and s-BED was defined as BED10 minus 10 times the tumor diameter (in centimeters) by Ohri et al. (IJROBP, 2012). For 2-year local control, we used the more realistic MC-computed PTV D99 as a predictive parameter, s-BED(D99). Due to the relatively short median follow-up interval (12 months), Kaplan-Meier curves were generated to estimate 2-year observed local control and compared to the rate predicted by TCP modeling. For NTCP, we employed a parameterized Lyman NTCP model utilizing the normal-lung DVH and α/β = 3 Gy, fitted to predict grade-2 RP after lung SBRT. Results: In total, 108 patients (137 tumors) treated to 35–70 Gy in 3–5 fractions, with either primary lung (n=74) or metastatic lung (n=53) tumors, were included. For the given prescription dose with MC-computed MUs, the 2-year local control rate with s-BED(D99) was 87±8%. The Kaplan-Meier observed local control rate at 2 years was 87.5%, suggesting that PTV D99 could be a potential predictor (p-value=0.38). Observed vs predicted TCP for primary lung tumors and metastatic tumors were 97% vs 88±7% and 94% vs 86±9%, respectively. The NTCP model predicted symptomatic RP well (predicted vs observed: 3±5% vs 2%). Radiographic and clinically significant RP were observed in 13% and 2% of patients, respectively. Higher rates of radiographic change were observed in patients who received >50 Gy compared to ≤50 Gy (24% vs 10%). Conclusion: Utilizing the MC-computed PTV D99, our TCP results correlated well with clinical outcome. The predicted grade-2 RP rate was comparable to clinical observations. Clinical application of these radiobiological models may potentially allow for target dose escalation and/or lung-toxicity reduction. Further validation of these radiobiological models with a longer follow-up interval for large cohorts of lung SBRT patients is anticipated.
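The logistic TCP expression above is straightforward to evaluate. The short sketch below implements it with the quoted k = 31 Gy and TCD50 = 0 Gy; the example fraction scheme, the standard linear-quadratic BED10 arithmetic, and the 3 cm tumor are illustrative assumptions, not values reproduced from the study.

```python
import math

def s_bed(bed10_gy, tumor_diameter_cm):
    """Size-adjusted BED: BED10 minus 10 x tumor diameter in cm (Ohri et al.)."""
    return bed10_gy - 10.0 * tumor_diameter_cm

def tcp(sbed_gy, tcd50_gy=0.0, k_gy=31.0):
    """Logistic TCP model: exp[(s-BED - TCD50)/k] / (1 + exp[(s-BED - TCD50)/k])."""
    z = (sbed_gy - tcd50_gy) / k_gy
    return math.exp(z) / (1.0 + math.exp(z))

# Hypothetical example: 50 Gy in 5 fractions -> BED10 = 50 * (1 + 10/10) = 100 Gy,
# for an assumed 3 cm tumor diameter.
bed10 = 50.0 * (1.0 + 10.0 / 10.0)
sbed = s_bed(bed10, 3.0)
print(f"s-BED = {sbed:.1f} Gy, predicted TCP = {tcp(sbed):.2%}")
```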
Han, Dianwei; Zhang, Jun; Tang, Guiliang
2012-01-01
An accurate prediction of the pre-microRNA secondary structure is important in miRNA informatics. Based on a recently proposed model, nucleotide cyclic motifs (NCM), to predict RNA secondary structure, we propose and implement a Modified NCM (MNCM) model with a physics-based scoring strategy to tackle the problem of pre-microRNA folding. Our microRNAfold is implemented using a global optimal algorithm based on the bottom-up local optimal solutions. Our experimental results show that microRNAfold outperforms the current leading prediction tools in terms of True Negative rate, False Negative rate, Specificity, and Matthews coefficient ratio.
Song, Jingwei; He, Jiaying; Zhu, Menghua; Tan, Debao; Zhang, Yu; Ye, Song; Shen, Dingtao; Zou, Pengfei
2014-01-01
A simulated annealing (SA) based variable-weighted forecast model is proposed to combine and weight a local chaotic model, an artificial neural network (ANN), and a partial least squares support vector machine (PLS-SVM) to build a more accurate forecast model. The hybrid model was built and its multistep-ahead prediction ability was tested on daily municipal solid waste (MSW) generation data from Seattle, Washington, the United States. The hybrid forecast model was shown to produce more accurate and reliable results and to degrade less over longer prediction horizons than the three individual models. The average one-week-ahead prediction error was reduced from 11.21% (chaotic model), 12.93% (ANN), and 12.94% (PLS-SVM) to 9.38%. The five-week average was reduced from 13.02% (chaotic model), 15.69% (ANN), and 15.92% (PLS-SVM) to 11.27%. PMID:25301508
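A minimal version of the weighted-combination step can be written down directly: given the three individual forecasts, optimize the combination weights to minimize a historical error measure. In the sketch below, SciPy's `dual_annealing` stands in for the paper's simulated annealing routine, and randomly generated series stand in for the real MSW data and component forecasts.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(42)

# Hypothetical daily series and three imperfect component forecasts
actual = 100 + 10 * np.sin(np.arange(200) / 7.0) + rng.normal(0, 3, 200)
forecasts = np.vstack([actual + rng.normal(0, s, 200) for s in (8, 10, 12)])

def mape_of_weights(w):
    w = np.abs(w)
    w = w / (w.sum() + 1e-12)            # normalize to a convex combination
    combined = w @ forecasts
    return np.mean(np.abs((actual - combined) / actual)) * 100.0

# Simulated-annealing-style global search over the three weights
result = dual_annealing(mape_of_weights, bounds=[(0.0, 1.0)] * 3, seed=1)
w_opt = np.abs(result.x) / np.abs(result.x).sum()
print("optimized weights:", w_opt.round(3), " combined MAPE: %.2f%%" % result.fun)
```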
Local-global analysis of crack growth in continuously reinforced ceramic matrix composites
NASA Technical Reports Server (NTRS)
Ballarini, Roberto; Ahmed, Shamim
1989-01-01
This paper describes the development of a mathematical model for predicting the strength and micromechanical failure characteristics of continuously reinforced ceramic matrix composites. The local-global analysis models the vicinity of a propagating crack tip as a local heterogeneous region (LHR) consisting of spring-like representations of the matrix, fibers, and interfaces. Parametric studies are conducted to investigate the effects of LHR size, component properties, and interface conditions on the strength and sequence of the failure processes in the unidirectional composite system.
The US EPA has a plan to leverage recent advances in meteorological modeling to develop a "Next-Generation" air quality modeling system that will allow consistent modeling of problems from global to local scale. The meteorological model of choice is the Model for Predic...
Prediction of lake depth across a 17-state region in the United States
Oliver, Samantha K.; Soranno, Patricia A.; Fergus, C. Emi; Wagner, Tyler; Winslow, Luke A.; Scott, Caren E.; Webster, Katherine E.; Downing, John A.; Stanley, Emily H.
2016-01-01
Lake depth is an important characteristic for understanding many lake processes, yet it is unknown for the vast majority of lakes globally. Our objective was to develop a model that predicts lake depth using map-derived metrics of lake and terrestrial geomorphic features. Building on previous models that use local topography to predict lake depth, we hypothesized that regional differences in topography, lake shape, or sedimentation processes could lead to region-specific relationships between lake depth and the mapped features. We therefore used a mixed modeling approach that included region-specific model parameters. We built models using lake and map data from LAGOS, which includes 8164 lakes with maximum depth (Zmax) observations. The model was used to predict depth for all lakes ≥4 ha (n = 42 443) in the study extent. Lake surface area and maximum slope in a 100 m buffer were the best predictors of Zmax. Interactions between surface area and topography occurred at both the local and regional scale; surface area had a larger effect in steep terrain, so large lakes embedded in steep terrain were much deeper than those in flat terrain. Despite a large sample size and inclusion of regional variability, model performance (R2 = 0.29, RMSE = 7.1 m) was similar to other published models. The relative error varied by region, however, highlighting the importance of taking a regional approach to lake depth modeling. Additionally, we provide the largest known collection of observed and predicted lake depth values in the United States.
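The region-specific mixed-model structure described above can be sketched with `statsmodels`: log-transformed maximum depth is regressed on log surface area and buffer slope, with a random intercept and area slope per region. The variable names and the synthetic data frame are assumptions for illustration; the study's actual LAGOS variables and model form may differ in detail.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in for LAGOS-style lake data: 6 regions, 300 lakes each
regions = np.repeat([f"region_{i}" for i in range(6)], 300)
log_area = rng.normal(2.0, 1.0, len(regions))           # log10 lake area (ha)
slope100 = rng.gamma(2.0, 2.0, len(regions))            # max slope in 100 m buffer (%)
region_fx = dict(zip(np.unique(regions), rng.normal(0, 0.2, 6)))
log_zmax = (0.3 + 0.25 * log_area + 0.04 * slope100     # log10 maximum depth (m)
            + np.array([region_fx[r] for r in regions])
            + rng.normal(0, 0.25, len(regions)))

lakes = pd.DataFrame({"region": regions, "log_area": log_area,
                      "slope100": slope100, "log_zmax": log_zmax})

# Mixed model: fixed effects for area, slope, and their interaction;
# random intercept and random area slope by region
md = smf.mixedlm("log_zmax ~ log_area * slope100", lakes,
                 groups=lakes["region"], re_formula="~log_area")
fit = md.fit()
print(fit.summary())

# Predict depth (m) for unsampled lakes using the fixed-effect part of the model
new = pd.DataFrame({"log_area": [1.5, 3.0], "slope100": [1.0, 8.0]})
print(10 ** fit.predict(new))
```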
Whittington, James C. R.; Bogacz, Rafal
2017-01-01
To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output. PMID:28333583
Whittington, James C R; Bogacz, Rafal
2017-05-01
To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output.
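The following sketch illustrates the kind of network described above: a small predictive coding model with one hidden layer trained on a toy regression task, where inference relaxes the hidden activities to reduce prediction errors and each weight update uses only the presynaptic activity and the postsynaptic error node. The architecture, learning rates, relaxation schedule, and task are illustrative choices, not the parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
f, df = np.tanh, lambda v: 1.0 - np.tanh(v) ** 2

# Toy supervised task: predict y = sin(3*x) from scalar x
X = rng.uniform(-1, 1, (256, 1))
Y = np.sin(3 * X)

n0, n1, n2 = 1, 32, 1                       # input, hidden, output sizes (assumed)
W1 = rng.normal(0, 0.5, (n1, n0))
W2 = rng.normal(0, 0.5, (n2, n1))
eta_x, eta_w, n_relax = 0.2, 0.01, 50       # inference rate, learning rate, relaxation steps

for epoch in range(300):
    for x, y in zip(X, Y):
        x0, x2 = x.reshape(-1, 1), y.reshape(-1, 1)      # clamp input and target
        x1 = W1 @ f(x0)                                  # start hidden nodes at their prediction
        for _ in range(n_relax):                         # relax hidden activity
            e1 = x1 - W1 @ f(x0)                         # prediction error at hidden layer
            e2 = x2 - W2 @ f(x1)                         # prediction error at output layer
            x1 -= eta_x * (e1 - df(x1) * (W2.T @ e2))    # gradient descent on the energy
        # Local, Hebbian-like weight updates: error node times presynaptic activity
        e1 = x1 - W1 @ f(x0)
        e2 = x2 - W2 @ f(x1)
        W1 += eta_w * e1 @ f(x0).T
        W2 += eta_w * e2 @ f(x1).T

# Prediction is a plain feedforward pass (error nodes vanish when no target is clamped)
x_test = np.linspace(-1, 1, 5).reshape(-1, 1)
y_hat = np.array([(W2 @ f(W1 @ f(x.reshape(-1, 1)))).item() for x in x_test])
print(np.c_[x_test.ravel(), y_hat.round(3), np.sin(3 * x_test).ravel().round(3)])
```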
Chen, Mingchen; Lin, Xingcheng; Zheng, Weihua; Onuchic, José N; Wolynes, Peter G
2016-08-25
The associative memory, water mediated, structure and energy model (AWSEM) is a coarse-grained force field with transferable tertiary interactions that incorporates local in sequence energetic biases using bioinformatically derived structural information about peptide fragments with locally similar sequences that we call memories. The memory information from the protein data bank (PDB) database guides proper protein folding. The structural information about available sequences in the database varies in quality and can sometimes lead to frustrated free energy landscapes locally. One way out of this difficulty is to construct the input fragment memory information from all-atom simulations of portions of the complete polypeptide chain. In this paper, we investigate this approach first put forward by Kwac and Wolynes in a more complete way by studying the structure prediction capabilities of this approach for six α-helical proteins. This scheme which we call the atomistic associative memory, water mediated, structure and energy model (AAWSEM) amounts to an ab initio protein structure prediction method that starts from the ground up without using bioinformatic input. The free energy profiles from AAWSEM show that atomistic fragment memories are sufficient to guide the correct folding when tertiary forces are included. AAWSEM combines the efficiency of coarse-grained simulations on the full protein level with the local structural accuracy achievable from all-atom simulations of only parts of a large protein. The results suggest that a hybrid use of atomistic fragment memory and database memory in structural predictions may well be optimal for many practical applications.
Local gravity and large-scale structure
NASA Technical Reports Server (NTRS)
Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.
1990-01-01
The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.
Kloog, Itai; Nordio, Francesco; Coull, Brent A; Schwartz, Joel
2012-11-06
Satellite-derived aerosol optical depth (AOD) measurements have the potential to provide spatiotemporally resolved predictions of both long- and short-term exposures, but previous studies have generally shown moderate predictive power and lacked detailed high-spatiotemporal-resolution predictions across large domains. We aimed at extending our previous work by validating our model in another region with different geographical and meteorological characteristics, and incorporating fine-scale land use regression and nonrandom missingness to better predict PM2.5 concentrations for days with or without satellite AOD measures. We start by calibrating AOD data for 2000-2008 across the Mid-Atlantic. We used mixed models regressing PM2.5 measurements against day-specific random intercepts, and fixed and random AOD and temperature slopes. We used inverse probability weighting to account for nonrandom missingness of AOD, nested regions within days to capture spatial variation in the daily calibration, and introduced a penalization method that reduces the dimensionality of the large number of spatial and temporal predictors without selecting different predictors in different locations. We then take advantage of the association between grid-cell-specific AOD values and PM2.5 monitoring data, together with associations between AOD values in neighboring grid cells, to develop grid cell predictions when AOD is missing. Finally, to obtain local predictions (at a resolution of 50 m), we regressed the residuals from the predictions for each monitor from these previous steps against the local land use variables specific to each monitor. "Out-of-sample" 10-fold cross-validation was used to quantify the accuracy of our predictions at each step. For all days without AOD values, model performance was excellent (mean "out-of-sample" R2 = 0.81, year-to-year variation 0.79-0.84). Upon removal of outliers in the PM2.5 monitoring data, the results of the cross-validation procedure were even better (overall mean "out-of-sample" R2 of 0.85). Further, cross-validation results revealed no bias in the predicted concentrations (slope of observed vs predicted = 0.97-1.01). Our model allows one to reliably assess short-term and long-term human exposures in order to investigate the acute and chronic effects of ambient particles, respectively.
NASA Astrophysics Data System (ADS)
Wayand, Nicholas E.; Stimberis, John; Zagrodnik, Joseph P.; Mass, Clifford F.; Lundquist, Jessica D.
2016-09-01
Low-level cold air from eastern Washington often flows westward through mountain passes in the Washington Cascades, creating localized inversions and locally reducing climatological temperatures. The persistence of this inversion during a frontal passage can result in complex patterns of snow and rain that are difficult to predict. Yet these predictions are critical to support highway avalanche control, ski resort operations, and modeling of headwater snowpack storage. In this study we used observations of precipitation phase from a disdrometer and snow depth sensors across Snoqualmie Pass, WA, to evaluate surface-air-temperature-based and mesoscale-model-based predictions of precipitation phase during the anomalously warm 2014-2015 winter. Correlations of phase between surface-based methods and observations were greatly improved (r2 from 0.45 to 0.66) and frozen precipitation biases reduced (+36% to -6% of accumulated snow water equivalent) by using air temperature from a nearby higher-elevation station, which was less impacted by low-level inversions. Alternatively, we found a hybrid method that combines surface-based predictions with output from the Weather Research and Forecasting mesoscale model to have improved skill (r2 = 0.61) over both parent models (r2 = 0.42 and 0.55). These results suggest that prediction of precipitation phase in mountain passes can be improved by incorporating observations or models from above the surface layer.
Tropini, Carolina; Huang, Kerwyn Casey
2012-01-01
Bacterial cells maintain sophisticated levels of intracellular organization that allow for signal amplification, response to stimuli, cell division, and many other critical processes. The mechanisms underlying localization and their contribution to fitness have been difficult to uncover, due to the often challenging task of creating mutants with systematically perturbed localization but normal enzymatic activity, and the lack of quantitative models through which to interpret subtle phenotypic changes. Focusing on the model bacterium Caulobacter crescentus, which generates two different types of daughter cells from an underlying asymmetric distribution of protein phosphorylation, we use mathematical modeling to investigate the contribution of the localization of histidine kinases to the establishment of cellular asymmetry and subsequent developmental outcomes. We use existing mutant phenotypes and fluorescence data to parameterize a reaction-diffusion model of the kinases PleC and DivJ and their cognate response regulator DivK. We then present a systematic computational analysis of the effects of changes in protein localization and abundance to determine whether PleC localization is required for correct developmental timing in Caulobacter. Our model predicts the developmental phenotypes of several localization mutants, and suggests that a novel strain with co-localization of PleC and DivJ could provide quantitative insight into the signaling threshold required for flagellar pole development. Our analysis indicates that normal development can be maintained through a wide range of localization phenotypes, and that developmental defects due to changes in PleC localization can be rescued by increased PleC expression. We also show that the system is remarkably robust to perturbation of the kinetic parameters, and while the localization of either PleC or DivJ is required for asymmetric development, the delocalization of one of these two components does not prevent flagellar pole development. We further find that allosteric regulation of PleC observed in vitro does not affect the predicted in vivo developmental phenotypes. Taken together, our model suggests that cells can tolerate perturbations to localization phenotypes, whose evolutionary origins may be connected with reducing protein expression or with decoupling pre- and post-division phenotypes. PMID:22876167
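The core ingredient of such a model, a phosphorylation gradient maintained by spatially separated kinase and phosphatase activities acting on a diffusing regulator, can be illustrated with a minimal one-dimensional reaction-diffusion sketch. All parameter values, the geometry, and the simple two-enzyme setup below are illustrative assumptions, not the paper's parameterization of PleC, DivJ, and DivK.

```python
import numpy as np

# Minimal 1D reaction-diffusion sketch of a polarized phosphorylation gradient:
# a kinase confined to one pole phosphorylates a freely diffusing regulator,
# while a phosphatase at the opposite pole removes the phosphate.
L, nx = 3.0, 150                  # cell length (um), grid points (assumed)
dx = L / nx
x = (np.arange(nx) + 0.5) * dx
D = 1.0                           # diffusion coefficient of the regulator (um^2/s)
k_kin, k_phos = 5.0, 5.0          # phosphorylation / dephosphorylation rates (1/s)
kinase = (x < 0.2).astype(float)           # kinase localized at the "old" pole
phosphatase = (x > L - 0.2).astype(float)  # phosphatase at the opposite pole

total = 1.0                       # total regulator concentration (conserved, a.u.)
p = np.full(nx, 0.5 * total)      # phosphorylated fraction profile

dt = 0.2 * dx**2 / D              # stable explicit time step
for _ in range(200000):
    lap = (np.roll(p, 1) - 2 * p + np.roll(p, -1)) / dx**2
    lap[0] = (p[1] - p[0]) / dx**2           # no-flux boundaries at both poles
    lap[-1] = (p[-2] - p[-1]) / dx**2
    reaction = k_kin * kinase * (total - p) - k_phos * phosphatase * p
    p += dt * (D * lap + reaction)

print("phosphorylated fraction at old pole: %.2f, new pole: %.2f"
      % (p[0] / total, p[-1] / total))
```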
Lin, Lixin; Wang, Yunjia; Teng, Jiyao; Xi, Xiuxiu
2015-01-01
The measurement of soil total nitrogen (TN) by hyperspectral remote sensing provides an important tool for soil restoration programs in areas with subsided land caused by the extraction of natural resources. This study used the local correlation maximization-complementary superiority method (LCMCS) to establish TN prediction models by considering the relationship between spectral reflectance (measured by an ASD FieldSpec 3 spectroradiometer) and TN, based on spectral reflectance curves of soil samples collected from subsided land identified by synthetic aperture radar interferometry (InSAR) technology. Based on the 1655 selected effective bands of the optimal spectrum (OSP) of the first derivative of the reciprocal logarithm ([log{1/R}]′) (correlation coefficients, p < 0.01), the optimal LCMCS model was obtained to determine the final model, which produced lower prediction errors (root mean square error of validation [RMSEV] = 0.89, mean relative error of validation [MREV] = 5.93%) when compared with models built by the local correlation maximization (LCM), complementary superiority (CS) and partial least squares regression (PLS) methods. The predictive performance of the LCMCS model was optimal in Cangzhou, Renqiu and Fengfeng District. Results indicate that the LCMCS method has great potential to monitor TN in subsided lands caused by the extraction of natural resources including groundwater, oil and coal. PMID:26213935
McConville, Anna; Law, Bradley S.; Mahony, Michael J.
2013-01-01
Habitat modelling and predictive mapping are important tools for conservation planning, particularly for lesser known species such as many insectivorous bats. However, the scale at which modelling is undertaken can affect the predictive accuracy and restrict the use of the model at different scales. We assessed the validity of existing regional-scale habitat models at a local-scale and contrasted the habitat use of two morphologically similar species with differing conservation status (Mormopterus norfolkensis and Mormopterus species 2). We used negative binomial generalised linear models created from indices of activity and environmental variables collected from systematic acoustic surveys. We found that habitat type (based on vegetation community) best explained activity of both species, which were more active in floodplain areas, with most foraging activity recorded in the freshwater wetland habitat type. The threatened M. norfolkensis avoided urban areas, which contrasts with M. species 2 which occurred frequently in urban bushland. We found that the broad habitat types predicted from local-scale models were generally consistent with those from regional-scale models. However, threshold-dependent accuracy measures indicated a poor fit and we advise caution be applied when using the regional models at a fine scale, particularly when the consequences of false negatives or positives are severe. Additionally, our study illustrates that habitat type classifications can be important predictors and we suggest they are more practical for conservation than complex combinations of raw variables, as they are easily communicated to land managers. PMID:23977296
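The activity models described above are standard count regressions. A minimal `statsmodels` version of a negative binomial GLM relating nightly bat passes to habitat type is sketched below, with fabricated data, invented habitat categories, and a fixed dispersion parameter purely for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Fabricated survey data: acoustic passes per detector-night by habitat type
habitats = rng.choice(["freshwater_wetland", "floodplain_forest", "urban", "dry_forest"],
                      size=300)
mean_by_habitat = {"freshwater_wetland": 20, "floodplain_forest": 12,
                   "urban": 3, "dry_forest": 6}
passes = rng.negative_binomial(
    n=2, p=np.array([2 / (2 + mean_by_habitat[h]) for h in habitats]))

surveys = pd.DataFrame({"passes": passes, "habitat": habitats})

# Negative binomial GLM with log link; dispersion alpha fixed at 0.5 for the sketch
model = smf.glm("passes ~ habitat", data=surveys,
                family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(model.summary())

# Predicted mean passes per night for each habitat type
print(model.predict(pd.DataFrame({"habitat": sorted(mean_by_habitat)})).round(1))
```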
Roth, Christian J; Becher, Tobias; Frerichs, Inéz; Weiler, Norbert; Wall, Wolfgang A
2017-04-01
Providing optimal personalized mechanical ventilation for patients with acute or chronic respiratory failure is still a challenge within a clinical setting for each case anew. In this article, we integrate electrical impedance tomography (EIT) monitoring into a powerful patient-specific computational lung model to create an approach for personalizing protective ventilatory treatment. The underlying computational lung model is based on a single computed tomography scan and able to predict global airflow quantities, as well as local tissue aeration and strains for any ventilation maneuver. For validation, a novel "virtual EIT" module is added to our computational lung model, allowing to simulate EIT images based on the patient's thorax geometry and the results of our numerically predicted tissue aeration. Clinically measured EIT images are not used to calibrate the computational model. Thus they provide an independent method to validate the computational predictions at high temporal resolution. The performance of this coupling approach has been tested in an example patient with acute respiratory distress syndrome. The method shows good agreement between computationally predicted and clinically measured airflow data and EIT images. These results imply that the proposed framework can be used for numerical prediction of patient-specific responses to certain therapeutic measures before applying them to an actual patient. In the long run, definition of patient-specific optimal ventilation protocols might be assisted by computational modeling. NEW & NOTEWORTHY In this work, we present a patient-specific computational lung model that is able to predict global and local ventilatory quantities for a given patient and any selected ventilation protocol. For the first time, such a predictive lung model is equipped with a virtual electrical impedance tomography module allowing real-time validation of the computed results with the patient measurements. First promising results obtained in an acute respiratory distress syndrome patient show the potential of this approach for personalized computationally guided optimization of mechanical ventilation in future. Copyright © 2017 the American Physiological Society.
D.C. Bragg; K.M. McElligott
2013-01-01
Sequestration by Arkansas forests removes carbon dioxide from the atmosphere, storing this carbon in biomass that fills a number of critical ecological and socioeconomic functions. We need a better understanding of the contribution of forests to the carbon cycle, including the accurate quantification of tree biomass. Models have long been developed to predict...
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
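A posterior predictive p-value of the kind compared in this test problem is computed by simulating replicated data sets from posterior draws and recording how often a chosen discrepancy statistic of the replicates exceeds that of the observed data. The sketch below does this for a toy Gaussian model with a conjugate posterior; the model, statistic, and data are illustrative choices rather than the paper's Hubble-expansion setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy observed data: Gaussian with unknown mean and known sigma = 1
y_obs = rng.normal(0.3, 1.0, size=25)
n, sigma = y_obs.size, 1.0

# Conjugate posterior for the mean under a flat prior: N(ybar, sigma^2 / n)
post_mean, post_sd = y_obs.mean(), sigma / np.sqrt(n)

def discrepancy(y, mu):
    """Chi-square-like discrepancy between data and model mean."""
    return np.sum((y - mu) ** 2) / sigma ** 2

# Posterior predictive check: replicate data under posterior draws of mu
n_draws = 5000
exceed = 0
for _ in range(n_draws):
    mu = rng.normal(post_mean, post_sd)            # posterior draw
    y_rep = rng.normal(mu, sigma, size=n)          # replicated data set
    exceed += discrepancy(y_rep, mu) >= discrepancy(y_obs, mu)

print(f"posterior predictive p-value: {exceed / n_draws:.3f}")
```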
A Simulation Framework for Battery Cell Impact Safety Modeling Using LS-DYNA
Marcicki, James; Zhu, Min; Bartlett, Alexander; ...
2017-02-04
The development process of electrified vehicles can benefit significantly from computer-aided engineering tools that predict the multiphysics response of batteries during abusive events. A coupled structural, electrical, electrochemical, and thermal model framework has been developed within the commercially available LS-DYNA software. The finite element model leverages a three-dimensional mesh structure that fully resolves the unit cell components. The mechanical solver predicts the distributed stress and strain response, with failure thresholds leading to the onset of an internal short circuit. In this implementation, an arbitrary compressive strain criterion is applied locally to each unit cell. A spatially distributed equivalent circuit model provides an empirical representation of the electrochemical response with minimal computational complexity. The thermal model provides state information to index the electrical model parameters, while simultaneously accepting irreversible and reversible sources of heat generation. The spatially distributed models of the electrical and thermal dynamics allow for the localization of current density and the corresponding temperature response. The ability to predict the distributed thermal response of the cell as its stored energy is completely discharged through the short circuit enables an engineering safety assessment. A parametric analysis of an exemplary model is used to demonstrate the simulation capabilities.
Users guide for the hydroacoustic coverage assessment model (HydroCAM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell, T., LLNL
1997-12-01
A model for predicting the detection and localization performance of hydroacoustic monitoring networks has been developed. The model accounts for the major factors affecting global-scale acoustic propagation in the ocean, including horizontal refraction, travel time variability due to spatial and temporal fluctuations in the ocean, and detailed characteristics of the source. Graphical user interfaces are provided to set up the models and visualize the results. The model produces maps of network detection coverage and localization area of uncertainty, as well as intermediate results such as predicted path amplitudes, travel time and travel time variance. This Users Guide for the model is organized into three sections. First, a summary of the functionality available in the model is presented, including example output products. The second section provides detailed descriptions of each of the models contained in the system. The last section describes how to run the model, including a summary of each data input form in the user interface.
ILS Localizer Performance Study for Dallas/Fort Worth Airport, Part 2
DOT National Transportation Integrated Search
1974-02-01
The Transportation Systems Center electromagnetic scattering model was used to predict the course deviation indication (CDI) at the Dallas/Fort Worth Airport in the presence of several derogating structures in the report FAA-RD-72-96 'ILS Localizer P...
Forman, Jason L.; Kent, Richard W.; Mroz, Krystoffer; Pipkorn, Bengt; Bostrom, Ola; Segui-Gomez, Maria
2012-01-01
This study sought to develop a strain-based probabilistic method to predict rib fracture risk with whole-body finite element (FE) models, and to describe a method to combine the results with collision exposure information to predict injury risk and potential intervention effectiveness in the field. An age-adjusted ultimate strain distribution was used to estimate local rib fracture probabilities within an FE model. These local probabilities were combined to predict injury risk and severity within the whole ribcage. The ultimate strain distribution was developed from a literature dataset of 133 tests. Frontal collision simulations were performed with the THUMS (Total HUman Model for Safety) model with four levels of delta-V and two restraints: a standard 3-point belt and a progressive 3.5–7 kN force-limited, pretensioned (FL+PT) belt. The results of three simulations (29 km/h standard, 48 km/h standard, and 48 km/h FL+PT) were compared to matched cadaver sled tests. The numbers of fractures predicted for the comparison cases were consistent with those observed experimentally. Combining these results with field exposure information (ΔV, NASS-CDS 1992–2002) suggests an 8.9% probability of incurring AIS3+ rib fractures for a 60-year-old restrained by a standard belt in a tow-away frontal collision with this restraint, vehicle, and occupant configuration, compared to 4.6% for the FL+PT belt. This is the first study to describe a probabilistic framework to predict rib fracture risk based on strains observed in human-body FE models. Using this analytical framework, future efforts may incorporate additional subject or collision factors for multi-variable probabilistic injury prediction. PMID:23169122
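The step of combining local, per-rib fracture probabilities into a whole-ribcage injury probability can be illustrated with a small Poisson-binomial calculation: given independent fracture probabilities for each rib, the distribution of the number of fractured ribs follows from a convolution, and the probability of a severe outcome is read off as the probability of at least some threshold number of fractures. The per-rib probabilities and the three-fracture threshold below are invented for illustration, not THUMS output or the study's injury criterion.

```python
import numpy as np

def fracture_count_distribution(p_rib):
    """Poisson-binomial PMF of the number of fractured ribs, assuming independence."""
    pmf = np.array([1.0])                          # start with P(0 fractures) = 1
    for p in p_rib:
        pmf = np.convolve(pmf, [1.0 - p, p])       # add one rib at a time
    return pmf

# Hypothetical per-rib fracture probabilities from strain-based local risk functions
p_rib = [0.02, 0.05, 0.12, 0.20, 0.22, 0.18, 0.10, 0.06,   # left ribs 1-8
         0.03, 0.06, 0.15, 0.21, 0.19, 0.14, 0.08, 0.05]   # right ribs 1-8

pmf = fracture_count_distribution(p_rib)
print(f"expected fractures: {np.dot(np.arange(pmf.size), pmf):.2f}")
print(f"P(>= 3 fractured ribs): {pmf[3:].sum():.3f}")      # assumed severity threshold
```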
Erosion of lizard diversity by climate change and altered thermal niches.
Sinervo, Barry; Méndez-de-la-Cruz, Fausto; Miles, Donald B; Heulin, Benoit; Bastiaans, Elizabeth; Villagrán-Santa Cruz, Maricela; Lara-Resendiz, Rafael; Martínez-Méndez, Norberto; Calderón-Espinosa, Martha Lucía; Meza-Lázaro, Rubi Nelsi; Gadsden, Héctor; Avila, Luciano Javier; Morando, Mariana; De la Riva, Ignacio J; Victoriano Sepulveda, Pedro; Rocha, Carlos Frederico Duarte; Ibargüengoytía, Nora; Aguilar Puntriano, César; Massot, Manuel; Lepetz, Virginie; Oksanen, Tuula A; Chapple, David G; Bauer, Aaron M; Branch, William R; Clobert, Jean; Sites, Jack W
2010-05-14
It is predicted that climate change will cause species extinctions and distributional shifts in coming decades, but data to validate these predictions are relatively scarce. Here, we compare recent and historical surveys for 48 Mexican lizard species at 200 sites. Since 1975, 12% of local populations have gone extinct. We verified physiological models of extinction risk with observed local extinctions and extended projections worldwide. Since 1975, we estimate that 4% of local populations have gone extinct worldwide, but by 2080 local extinctions are projected to reach 39% worldwide, and species extinctions may reach 20%. Global extinction projections were validated with local extinctions observed from 1975 to 2009 for regional biotas on four other continents, suggesting that lizards have already crossed a threshold for extinctions caused by climate change.
Magliocca, Nicholas R; Brown, Daniel G; Ellis, Erle C
2014-01-01
Local changes in land use result from the decisions and actions of land-users within land systems, which are structured by local and global environmental, economic, political, and cultural contexts. Such cross-scale causation presents a major challenge for developing a general understanding of how local decision-making shapes land-use changes at the global scale. This paper implements a generalized agent-based model (ABM) as a virtual laboratory to explore how global and local processes influence the land-use and livelihood decisions of local land-users, operationalized as settlement-level agents, across the landscapes of six real-world test sites. Test sites were chosen in USA, Laos, and China to capture globally-significant variation in population density, market influence, and environmental conditions, with land systems ranging from swidden to commercial agriculture. Publicly available global data were integrated into the ABM to model cross-scale effects of economic globalization on local land-use decisions. A suite of statistics was developed to assess the accuracy of model-predicted land-use outcomes relative to observed and random (i.e. null model) landscapes. At four of six sites, where environmental and demographic forces were important constraints on land-use choices, modeled land-use outcomes were more similar to those observed across sites than the null model. At the two sites in which market forces significantly influenced land-use and livelihood decisions, the model was a poorer predictor of land-use outcomes than the null model. Model successes and failures in simulating real-world land-use patterns enabled the testing of hypotheses on land-use decision-making and yielded insights on the importance of missing mechanisms. The virtual laboratory approach provides a practical framework for systematic improvement of both theory and predictive skill in land change science based on a continual process of experimentation and model enhancement.
Magliocca, Nicholas R.; Brown, Daniel G.; Ellis, Erle C.
2014-01-01
Local changes in land use result from the decisions and actions of land-users within land systems, which are structured by local and global environmental, economic, political, and cultural contexts. Such cross-scale causation presents a major challenge for developing a general understanding of how local decision-making shapes land-use changes at the global scale. This paper implements a generalized agent-based model (ABM) as a virtual laboratory to explore how global and local processes influence the land-use and livelihood decisions of local land-users, operationalized as settlement-level agents, across the landscapes of six real-world test sites. Test sites were chosen in USA, Laos, and China to capture globally-significant variation in population density, market influence, and environmental conditions, with land systems ranging from swidden to commercial agriculture. Publicly available global data were integrated into the ABM to model cross-scale effects of economic globalization on local land-use decisions. A suite of statistics was developed to assess the accuracy of model-predicted land-use outcomes relative to observed and random (i.e. null model) landscapes. At four of six sites, where environmental and demographic forces were important constraints on land-use choices, modeled land-use outcomes were more similar to those observed across sites than the null model. At the two sites in which market forces significantly influenced land-use and livelihood decisions, the model was a poorer predictor of land-use outcomes than the null model. Model successes and failures in simulating real-world land-use patterns enabled the testing of hypotheses on land-use decision-making and yielded insights on the importance of missing mechanisms. The virtual laboratory approach provides a practical framework for systematic improvement of both theory and predictive skill in land change science based on a continual process of experimentation and model enhancement. PMID:24489696
NASA Astrophysics Data System (ADS)
Micic, Miroslav; Holley-Bockelmann, Kelly; Sigurdsson, Steinn
2011-06-01
We explore the growth of ≤10^7 M⊙ black holes that reside at the centres of spiral and field dwarf galaxies in a Local Group type of environment. We use merger trees from a cosmological N-body simulation known as Via Lactea 2 (VL-2) as a framework to test two merger-driven semi-analytic recipes for black hole growth that include dynamical friction, tidal stripping and gravitational wave recoil in over 20 000 merger tree realizations. First, we apply a Fundamental Plane limited (FPL) model to the growth of Sgr A*, which drives the central black hole to a maximum mass limited by the black hole Fundamental Plane after every merger. Next, we present a new model that allows for low-level prolonged gas accretion (PGA) during the merger. We find that both models can generate an Sgr A* mass black hole. We predict a population of massive black holes in local field dwarf galaxies - if the VL-2 simulation is representative of the growth of the Local Group, we predict up to 35 massive black holes (≤10^6 M⊙) in Local Group field dwarfs. We also predict that hundreds of ≤10^5 M⊙ black holes fail to merge, and instead populate the Milky Way halo, with the most massive of them at roughly the virial radius. In addition, we find that there may be hundreds of massive black holes ejected from their hosts into the nearby intergalactic medium due to gravitational wave recoil. We discuss how the black hole population in the Local Group field dwarfs may help to constrain the growth mechanism for Sgr A*.
Theoretical and Experimental Study of Bacterial Colony Growth in 3D
NASA Astrophysics Data System (ADS)
Shao, Xinxian; Mugler, Andrew; Nemenman, Ilya
2014-03-01
Bacterial cells growing in liquid culture have been well studied and modeled. However, in nature, bacteria often grow as biofilms or colonies in physically structured habitats. A comprehensive model for population growth in such conditions has not yet been developed. Based on the well-established theory for bacterial growth in liquid culture, we develop a model for colony growth in 3D in which a homogeneous colony of cells locally consumes a diffusing nutrient. We predict that colony growth is initially exponential, as in liquid culture, but quickly slows to sub-exponential growth after the nutrient is locally depleted. This prediction is consistent with our experiments performed with E. coli in soft agar. Our model provides a baseline to which studies of complex growth processes, such as spatially and phenotypically heterogeneous colonies, must be compared.
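The predicted transition from exponential to sub-exponential growth can be reproduced qualitatively with an even simpler, well-mixed caricature of the model: cells grow at a nutrient-dependent Monod rate while depleting a finite nutrient pool. The parameter values in the sketch below are arbitrary and serve only to show the slowdown; they are not the colony model's fitted rates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Well-mixed caricature of nutrient-limited colony growth (illustrative parameters):
# cells divide at a Monod rate set by the remaining nutrient, which is consumed in turn.
g_max, K, yield_c = 1.0, 0.2, 1.0      # max growth rate (1/h), half-saturation, yield

def rhs(t, state):
    n_cells, nutrient = state
    growth = g_max * nutrient / (K + nutrient) * n_cells
    return [growth, -growth / yield_c]

sol = solve_ivp(rhs, t_span=(0, 20), y0=[1e-3, 1.0], dense_output=True, max_step=0.05)

t = np.linspace(0, 20, 9)
n = sol.sol(t)[0]
# The per-capita growth rate reveals the exponential-to-subexponential slowdown
rate = np.gradient(np.log(n), t)
for ti, ni, ri in zip(t, n, rate):
    print(f"t = {ti:5.1f} h   N = {ni:9.4f}   per-capita rate = {ri:5.3f} 1/h")
```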
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sherlock, M.; Brodrick, J. P.; Ridgers, C. P.
Here, we compare the previously developed reduced non-local electron transport model to Vlasov-Fokker-Planck simulations. Two new test cases are considered: the propagation of a heat wave through a high-density region into a lower-density gas, and a one-dimensional hohlraum ablation problem. We find that the reduced model reproduces the peak heat flux well in the ablation region but significantly over-predicts the coronal preheat. The suitability of the reduced model for computing non-local transport effects other than thermal conductivity is considered by comparing the computed distribution function to the Vlasov-Fokker-Planck distribution function. It is shown that even when the reduced model reproduces the correct heat flux, the distribution function is significantly different from the Vlasov-Fokker-Planck prediction. Two simple modifications are considered which improve agreement between the models in the coronal region.
Route Prediction on Tracking Data to Location-Based Services
NASA Astrophysics Data System (ADS)
Petróczi, Attila István; Gáspár-Papanek, Csaba
Wireless networks have become so widespread that it is worthwhile to assess the localization capability of cellular networks. This capability enables the development of location-based services that provide useful information. These services can be further improved by route prediction, provided that simple algorithms are used, because of the limited capabilities of mobile stations. This study presents alternative solutions to the route prediction problem based on a specific graph model. Our models provide the opportunity to reach our destinations with less effort.
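A simple route-prediction scheme of the kind suited to resource-limited mobile stations is a first-order Markov model over the graph of network cells: count observed cell-to-cell transitions from past tracks and predict the most probable next cell from the current one. The sketch below is a generic illustration under that assumption, with invented cell identifiers, not the specific graph model proposed in the paper.

```python
from collections import defaultdict

# Historical tracks as sequences of cell identifiers (hypothetical data)
tracks = [
    ["A", "B", "C", "D", "E"],
    ["A", "B", "C", "F"],
    ["B", "C", "D", "E"],
    ["A", "B", "F"],
]

# First-order Markov model: count transitions between consecutive cells
transitions = defaultdict(lambda: defaultdict(int))
for track in tracks:
    for current, nxt in zip(track, track[1:]):
        transitions[current][nxt] += 1

def predict_next(cell):
    """Return the most probable next cell and its estimated probability."""
    counts = transitions.get(cell)
    if not counts:
        return None, 0.0
    total = sum(counts.values())
    best = max(counts, key=counts.get)
    return best, counts[best] / total

for cell in ["B", "C", "D"]:
    nxt, prob = predict_next(cell)
    print(f"after {cell}: predict {nxt} (p = {prob:.2f})")
```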
Spatial variation in the climatic predictors of species compositional turnover and endemism.
Di Virgilio, Giovanni; Laffan, Shawn W; Ebach, Malte C; Chapple, David G
2014-08-01
Previous research focusing on broad-scale or geographically invariant species-environment dependencies suggests that temperature-related variables explain more of the variation in reptile distributions than precipitation. However, species-environment relationships may exhibit considerable spatial variation contingent upon the geographic nuances that vary between locations. Broad-scale, geographically invariant analyses may mask this local variation and their findings may not generalize to different locations at local scales. We assess how reptile-climatic relationships change with varying spatial scale, location, and direction. Since the spatial distributions of diversity and endemism hotspots differ for other species groups, we also assess whether reptile species turnover and endemism hotspots are influenced differently by climatic predictors. Using New Zealand reptiles as an example, the variation in species turnover, endemism and turnover in climatic variables was measured using directional moving window analyses, rotated through 360°. Correlations between the species turnover, endemism and climatic turnover results generated by each rotation of the moving window were analysed using multivariate generalized linear models applied at national, regional, and local scales. At the national scale, temperature turnover consistently exhibited the greatest influence on species turnover and endemism, but model predictive capacity was low (typically r² = 0.05, P < 0.001). At regional scales, the relative influence of temperature and precipitation turnover varied between regions, although model predictive capacity was also generally low. Climatic turnover was considerably more predictive of species turnover and endemism at local scales (e.g., r² = 0.65, P < 0.001). While temperature turnover had the greatest effect in one locale (the northern North Island), there was substantial variation in the relative influence of temperature and precipitation predictors in the remaining four locales. Species turnover and endemism hotspots often occurred in different locations. Climatic predictors had a smaller influence on endemism. Our results caution against assuming that variability in temperature will always be most predictive of reptile biodiversity across different spatial scales, locations and directions. The influence of climatic turnover on the species turnover and endemism of other taxa may exhibit similar patterns of spatial variation. Such intricate variation might be discerned more readily if studies at broad scales are complemented by geographically variant, local-scale analyses.
Ghorbani Moghaddam, Masoud; Achuthan, Ajit; Bednarcyk, Brett A; Arnold, Steven M; Pineda, Evan J
2016-05-04
A multiscale computational model is developed for determining the elasto-plastic behavior of polycrystal metals by employing a single crystal plasticity constitutive model that can capture the microstructural scale stress field on a finite element analysis (FEA) framework. The generalized method of cells (GMC) micromechanics model is used for homogenizing the local field quantities. At first, the stand-alone GMC is applied for studying simple material microstructures such as a repeating unit cell (RUC) containing a single grain or two grains under uniaxial loading conditions. For verification, the results obtained by the stand-alone GMC are compared to those from an analogous FEA model incorporating the same single crystal plasticity constitutive model. This verification is then extended to samples containing tens to hundreds of grains. The results demonstrate that the GMC homogenization combined with the crystal plasticity constitutive framework is a promising approach for failure analysis of structures, as it allows for properly predicting the von Mises stress in the entire RUC, in an average sense, as well as at the local microstructural level, i.e., in each individual grain. Two to three orders of magnitude of savings in computational cost, at the expense of some accuracy in prediction, especially in the prediction of the components of local tensor field quantities and the quantities near the grain boundaries, were obtained with GMC. Finally, the capability of the developed multiscale model linking FEA and GMC to solve real-life-sized structures is demonstrated by successfully analyzing an engine disc component and determining the microstructural scale details of the field quantities.
Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.
Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott
2016-04-19
To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
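As a rough illustration of the local polynomial idea used here, the sketch below fits a locally weighted linear (degree-1 local polynomial) regression of TOC on two hypothetical predictors standing in for temperature and soil moisture. The synthetic data, the Gaussian kernel and the bandwidths are assumptions, not values from the study.

```python
# Minimal sketch of locally weighted (local polynomial, degree 1) regression
# on synthetic climate/land-surface predictors; not the study's calibrated model.
import numpy as np

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([rng.uniform(0, 30, n),      # temperature (deg C), hypothetical
                     rng.uniform(0.1, 0.5, n)])  # soil moisture (m3/m3), hypothetical
toc = 2 + 0.1 * X[:, 0] + 8 * X[:, 1] ** 2 + rng.normal(0, 0.3, n)  # nonlinear "truth"

def local_linear_predict(x0, X, y, bandwidth):
    """Weighted least squares of y on a local linear design centred at x0."""
    d = np.linalg.norm((X - x0) / bandwidth, axis=1)
    w = np.exp(-0.5 * d ** 2)                        # Gaussian kernel weights
    A = np.column_stack([np.ones(len(X)), X - x0])   # intercept + centred predictors
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta[0]                                   # fitted value at x0 is the intercept

bandwidth = np.array([5.0, 0.1])                     # per-predictor bandwidths (assumed)
x_new = np.array([20.0, 0.35])
print("predicted TOC (mg/L):", round(local_linear_predict(x_new, X, toc, bandwidth), 2))
```

Because the fit is refitted locally around each query point, it can follow non-Gaussian and nonlinear dependence without specifying a global functional form, which is the property the abstract highlights.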
Reuzé, Sylvain; Orlhac, Fanny; Chargari, Cyrus; Nioche, Christophe; Limkin, Elaine; Riet, François; Escande, Alexandre; Haie-Meder, Christine; Dercle, Laurent; Gouy, Sébastien; Buvat, Irène; Deutsch, Eric; Robert, Charlotte
2017-06-27
To identify an imaging signature predicting local recurrence for locally advanced cervical cancer (LACC) treated by chemoradiation and brachytherapy from baseline 18F-FDG PET images, and to evaluate the possibility of gathering images from two different PET scanners in a radiomic study. 118 patients were included retrospectively. Two groups (G1, G2) were defined according to the PET scanner used for image acquisition. Eleven radiomic features were extracted from delineated cervical tumors to evaluate: (i) the predictive value of features for local recurrence of LACC, (ii) their reproducibility as a function of the scanner within a hepatic reference volume, (iii) the impact of voxel size on feature values. Eight features were statistically significant predictors of local recurrence in G1 (p < 0.05). The multivariate signature trained in G2 was validated in G1 (AUC=0.76, p<0.001) and identified local recurrence more accurately than SUVmax (p=0.022). Four features were significantly different between G1 and G2 in the liver. Spatial resampling was not sufficient to explain the stratification effect. This study showed that radiomic features could predict local recurrence of LACC better than SUVmax. Further investigation is needed before applying a model designed using data from one PET scanner to another.
DOT National Transportation Integrated Search
2013-06-01
This report summarizes a research project aimed at developing degradation models for bridge decks in the state of Michigan based on durability mechanics. A probabilistic framework to implement local-level mechanistic-based models for predicting the c...
Zhang, Hua; Kurgan, Lukasz
2014-12-01
Knowledge of protein flexibility is vital for deciphering the corresponding functional mechanisms. This knowledge would help, for instance, in improving computational drug design and refinement in homology-based modeling. We propose a new predictor of residue flexibility, which is expressed by B-factors, from protein chains; it uses local (in the chain) predicted (or native) relative solvent accessibility (RSA) and custom-derived amino acid (AA) alphabets. Our predictor is implemented as a two-stage linear regression model that uses an RSA-based space in a local sequence window in the first stage and a reduced AA pair-based space in the second stage as the inputs. The method has an easy-to-comprehend, explicit linear form in both stages. Particle swarm optimization was used to find an optimal reduced AA alphabet to simplify the input space and improve the prediction performance. The average correlation coefficients between the native and predicted B-factors measured on a large benchmark dataset are improved from 0.65 to 0.67 when using the native RSA values and from 0.55 to 0.57 when using the predicted RSA values. Blind tests that were performed on two independent datasets show consistent improvements in the average correlation coefficients by a modest value of 0.02 for both native and predicted RSA-based predictions.
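One plausible reading of the two-stage design is sketched below: stage one regresses B-factors on RSA values in a sliding sequence window, and stage two refines that prediction with features built from a reduced amino acid alphabet. The window size, the three-group alphabet, the use of single-residue groups instead of AA pairs, and all data are illustrative assumptions, not the published configuration.

```python
# Illustrative two-stage linear sketch with synthetic data; window size and the
# reduced AA grouping are assumptions (the paper optimizes its alphabet with PSO).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
aa = "ACDEFGHIKLMNPQRSTVWY"
seq = "".join(rng.choice(list(aa), size=400))
rsa = rng.uniform(0, 1, len(seq))                      # predicted (or native) RSA
bfac = 20 + 40 * rsa + rng.normal(0, 5, len(seq))      # synthetic B-factors

# Stage 1: RSA values in a local sequence window of size 2w+1.
w = 3
pad = np.pad(rsa, w, mode="edge")
X1 = np.column_stack([pad[i:i + len(seq)] for i in range(2 * w + 1)])
stage1 = LinearRegression().fit(X1, bfac)
b_hat1 = stage1.predict(X1)

# Stage 2: combine the stage-1 estimate with reduced-alphabet indicators.
groups = {c: 0 for c in "AVLIMFWC"}                    # hydrophobic (assumed grouping)
groups.update({c: 1 for c in "STNQYHG"})               # polar
groups.update({c: 2 for c in "DEKRP"})                 # charged / special
reduced = np.array([groups[c] for c in seq])
X2 = np.column_stack([b_hat1] + [(reduced == g).astype(float) for g in range(3)])
stage2 = LinearRegression().fit(X2, bfac)

print("stage-2 correlation:", np.corrcoef(stage2.predict(X2), bfac)[0, 1].round(3))
```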
Prediction of high-energy radiation belt electron fluxes using a combined VERB-NARMAX model
NASA Astrophysics Data System (ADS)
Pakhotin, I. P.; Balikhin, M. A.; Shprits, Y.; Subbotin, D.; Boynton, R.
2013-12-01
This study is concerned with the modelling and forecasting of energetic electron fluxes that endanger satellites in space. By combining data-driven predictions from the NARMAX methodology with the physics-based VERB code, it becomes possible to predict electron fluxes with a high level of accuracy across radial distances from inside the local acceleration region to beyond geosynchronous orbit. The model coupling also makes it possible to avoid accounting for seed electron variations at the outer boundary. Conversely, combining a convection code with the VERB and NARMAX models has the potential to provide even greater accuracy in forecasting that is not limited to geostationary orbit but makes predictions across the entire outer radiation belt region.
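The data-driven half of such a coupling is, in essence, a nonlinear autoregressive model with exogenous inputs. The fragment below is only a generic NARX-style sketch (polynomial lagged terms fitted by ridge regression) on a synthetic flux series driven by a synthetic solar-wind-speed input; it is not the NARMAX model or the VERB code used in the study, and all variable names and settings are assumptions.

```python
# Generic NARX-style sketch on synthetic data; not the study's NARMAX/VERB coupling.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)
T = 500
vsw = 400 + 100 * np.sin(np.arange(T) / 20) + rng.normal(0, 20, T)   # solar wind speed
flux = np.zeros(T)
for t in range(1, T):                                                 # synthetic "truth"
    flux[t] = 0.9 * flux[t - 1] + 0.002 * vsw[t - 1] + rng.normal(0, 0.05)

lags = 3
rows = range(lags, T - 1)
X = np.array([np.r_[flux[t - lags:t + 1], vsw[t - lags:t + 1]] for t in rows])
y = flux[lags + 1:T]                                                  # one-step-ahead target

model = Ridge(alpha=1.0).fit(PolynomialFeatures(degree=2).fit_transform(X), y)
x_last = np.r_[flux[-lags - 1:], vsw[-lags - 1:]].reshape(1, -1)
print("one-step-ahead flux forecast:",
      round(model.predict(PolynomialFeatures(degree=2).fit_transform(x_last))[0], 3))
```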
NASA Astrophysics Data System (ADS)
Samuels, Rana
Water issues are a source of tension between Israelis and Palestinians. In the arid region of the Middle East, water supply is not just scarce but also uncertain: It is not uncommon for annual rainfall to be as little as 60% or as much as 125% of the multiannual average. This combination of scarcity and uncertainty exacerbates the already strained economy and the already tense political situation. The uncertainty could be alleviated if it were possible to better forecast water availability. Such forecasting is key not only for water planning and management, but also for economic policy and for political decision making. Water forecasts at multiple time scales are necessary for crop choice, aquifer operation and investments in desalination infrastructure. The unequivocal warming of the climate system adds another level of uncertainty as global and regional water cycles change. This makes the prediction of water availability an even greater challenge. Understanding the impact of climate change on precipitation can provide the information necessary for appropriate risk assessment and water planning. Unfortunately, current global circulation models (GCMs) are only able to predict long term climatic evolution at large scales but not local rainfall. The statistics of local precipitation are traditionally predicted using historical rainfall data. Obviously these data cannot anticipate changes that result from climate change. It is therefore clear that integration of the global information about climate evolution and local historical data is needed to provide the much needed predictions of regional water availability. Currently, there is no theoretical or computational framework that enables such integration for this region. In this dissertation both a conceptual framework and a computational platform for such integration are introduced. In particular, a suite of models that links forecasts of climatic evolution under different CO2 emissions scenarios to observed rainfall data from local stations is developed. These are used to develop scenarios for local rainfall statistics such as average annual amounts, dry spells, wet spells and drought persistence. This suite of models can provide information that is not attainable from existing tools in terms of its spatial and temporal resolution. Specifically, the goal is to project the impact of established global climate change scenarios in this region and how much of the change might be mitigated by proposed CO2 reduction strategies. A major problem in this enterprise is to find the best way to integrate global climatic information with local rainfall data. From the climatologic perspective the problem is to find the right teleconnections. That is, non-local or global measurable phenomena that influence local rainfall in a way that could be characterized and quantified statistically. From the computational perspective the challenge is to model these subtle, nonlinear relationships and to downscale the global effects into local predictions. Climate simulations to the year 2100 under selected climate change scenarios are used. Overall, the suite of models developed and presented can be applied to answer most questions from the different water users and planners. Farmers and the irrigation community can ask "What is the probability of rain over the next week?" Policy makers can ask "How much desalination capacity will I need to meet demand 90% of the time in the climate change scenario over the next 20 years?"
Aquifer managers can ask "What is the expected recharge rate of the aquifers over the next decade?" The use of climate driven answers to these questions will help the region better prepare and adapt to future shifts in water resources and availability.
Internal Physical Features of a Land Surface Model Employing a Tangent Linear Model
NASA Technical Reports Server (NTRS)
Yang, Runhua; Cohn, Stephen E.; daSilva, Arlindo; Joiner, Joanna; Houser, Paul R.
1997-01-01
The Earth's land surface, including its biomass, is an integral part of the Earth's weather and climate system. Land surface heterogeneity, such as the type and amount of vegetative covering., has a profound effect on local weather variability and therefore on regional variations of the global climate. Surface conditions affect local weather and climate through a number of mechanisms. First, they determine the re-distribution of the net radiative energy received at the surface, through the atmosphere, from the sun. A certain fraction of this energy increases the surface ground temperature, another warms the near-surface atmosphere, and the rest evaporates surface water, which in turn creates clouds and causes precipitation. Second, they determine how much rainfall and snowmelt can be stored in the soil and how much instead runs off into waterways. Finally, surface conditions influence the near-surface concentration and distribution of greenhouse gases such as carbon dioxide. The processes through which these mechanisms interact with the atmosphere can be modeled mathematically, to within some degree of uncertainty, on the basis of underlying physical principles. Such a land surface model provides predictive capability for surface variables including ground temperature, surface humidity, and soil moisture and temperature. This information is important for agriculture and industry, as well as for addressing fundamental scientific questions concerning global and local climate change. In this study we apply a methodology known as tangent linear modeling to help us understand more deeply, the behavior of the Mosaic land surface model, a model that has been developed over the past several years at NASA/GSFC. This methodology allows us to examine, directly and quantitatively, the dependence of prediction errors in land surface variables upon different vegetation conditions. The work also highlights the importance of accurate soil moisture information. Although surface variables are predicted imperfectly due to inherent uncertainties in the modeling process, our study suggests how satellite observations can be combined with the model, through land surface data assimilation, to improve their prediction.
Wave models for turbulent free shear flows
NASA Technical Reports Server (NTRS)
Liou, W. W.; Morris, P. J.
1991-01-01
New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large scale structures in these flows using a quasi-linear theory. Three model were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time dependent motion of the large scale structure of the mixing region are made. The predictions show good agreement with experimental observations.
NASA Astrophysics Data System (ADS)
Tsimpidi, A. P.; Karydis, V. A.; Zavala, M.; Lei, W.; Bei, N.; Molina, L.; Pandis, S. N.
2011-06-01
Urban areas are large sources of organic aerosols and their precursors. Nevertheless, the contributions of primary (POA) and secondary organic aerosol (SOA) to the observed particulate matter levels have been difficult to quantify. In this study the three-dimensional chemical transport model PMCAMx-2008 is used to investigate the temporal and geographic variability of organic aerosol in the Mexico City Metropolitan Area (MCMA) during the MILAGRO campaign that took place in the spring of 2006. The organic module of PMCAMx-2008 includes the recently developed volatility basis-set framework in which both primary and secondary organic components are assumed to be semi-volatile and photochemically reactive and are distributed in logarithmically spaced volatility bins. The MCMA emission inventory is modified and the POA emissions are distributed by volatility based on dilution experiments. The model predictions are compared with observations from four different types of sites, an urban (T0), a suburban (T1), a rural (T2), and an elevated site in Pico de Tres Padres (PTP). The performance of the model in reproducing organic mass concentrations in these sites is encouraging. The average predicted PM1 organic aerosol (OA) concentration in T0, T1, and T2 is 18 μg m-3, 11.7 μg m-3, and 10.5 μg m-3 respectively, while the corresponding measured values are 17.2 μg m-3, 11 μg m-3, and 9 μg m-3. The average predicted locally-emitted primary OA concentrations, 4.4 μg m-3 at T0, 1.2 μg m-3 at T1 and 1.7 μg m-3 at PTP, are in reasonably good agreement with the corresponding PMF analysis estimates based on the Aerosol Mass Spectrometer (AMS) observations of 4.5, 1.3, and 2.9 μg m-3 respectively. The model reproduces reasonably well the average oxygenated OA (OOA) levels in T0 (7.5 μg m-3 predicted versus 7.5 μg m-3 measured), in T1 (6.3 μg m-3 predicted versus 4.6 μg m-3 measured) and in PTP (6.6 μg m-3 predicted versus 5.9 μg m-3 measured). The rest of the OA mass (6.1 μg m-3 and 4.2 μg m-3 in T0 and T1 respectively) is assumed to originate from biomass burning activities and is introduced to the model as part of the boundary conditions. Inside Mexico City (at T0), the locally-produced OA is predicted to be on average 60 % locally-emitted primary (POA), 6 % semi-volatile (S-SOA) and intermediate volatile (I-SOA) organic aerosol, and 34 % traditional SOA from the oxidation of VOCs (V-SOA). The average contributions of the OA components to the locally-produced OA for the entire modelling domain are predicted to be 32 % POA, 10 % S-SOA and I-SOA, and 58 % V-SOA. The long range transport from biomass burning activities and other sources in Mexico is predicted to contribute on average almost as much as the local sources during the MILAGRO period.
On the predictive ability of mechanistic models for the Haitian cholera epidemic.
Mari, Lorenzo; Bertuzzo, Enrico; Finger, Flavio; Casagrandi, Renato; Gatto, Marino; Rinaldo, Andrea
2015-03-06
Predictive models of epidemic cholera need to resolve at suitable aggregation levels spatial data pertaining to local communities, epidemiological records, hydrologic drivers, waterways, patterns of human mobility and proxies of exposure rates. We address the above issue in a formal model comparison framework and provide a quantitative assessment of the explanatory and predictive abilities of various model settings with different spatial aggregation levels and coupling mechanisms. Reference is made to records of the recent Haiti cholera epidemics. Our intensive computations and objective model comparisons show that spatially explicit models accounting for spatial connections have better explanatory power than spatially disconnected ones for short-to-intermediate calibration windows, while parsimonious, spatially disconnected models perform better with long training sets. On average, spatially connected models show better predictive ability than disconnected ones. We suggest limits and validity of the various approaches and discuss the pathway towards the development of case-specific predictive tools in the context of emergency management. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
NASA Technical Reports Server (NTRS)
Stouffer, D. C.; Sheh, M. Y.
1988-01-01
A micromechanical model based on crystallographic slip theory was formulated for nickel-base single crystal superalloys. The current equations include both drag stress and back stress state variables to model the local inelastic flow. Specially designed experiments have been conducted to evaluate the effect of back stress in single crystals. The results showed that (1) the back stress is orientation dependent; and (2) the back stress state variable in the inelastic flow equation is necessary for predicting anelastic behavior of the material. The model also demonstrated improved fatigue predictive capability. Model predictions and experimental data are presented for single crystal superalloy Rene N4 at 982 C.
Scaling depth-induced wave-breaking in two-dimensional spectral wave models
NASA Astrophysics Data System (ADS)
Salmon, J. E.; Holthuijsen, L. H.; Zijlema, M.; van Vledder, G. Ph.; Pietrzak, J. D.
2015-03-01
Wave breaking in shallow water is still poorly understood and needs to be better parameterized in 2D spectral wave models. Significant wave heights over horizontal bathymetries are typically under-predicted in locally generated wave conditions and over-predicted in non-locally generated conditions. A joint scaling dependent on both local bottom slope and normalized wave number is presented and is shown to resolve these issues. Compared to the 12 wave breaking parameterizations considered in this study, this joint scaling demonstrates significant improvements, up to ∼50% error reduction, over 1D horizontal bathymetries for both locally and non-locally generated waves. In order to account for the inherent differences between uni-directional (1D) and directionally spread (2D) wave conditions, an extension of the wave breaking dissipation models is presented. By including the effects of wave directionality, rms-errors for the significant wave height are reduced for the best performing parameterizations in conditions with strong directional spreading. With this extension, our joint scaling improves modeling skill for significant wave heights over a verification data set of 11 different 1D laboratory bathymetries, 3 shallow lakes and 4 coastal sites. The corresponding averaged normalized rms-error for significant wave height in the 2D cases varied between 8% and 27%. In comparison, using the default setting with a constant scaling, as used in most presently operating 2D spectral wave models, gave equivalent errors between 15% and 38%.
Surface Temperature Prediction of a Bridge for Tactical Decision Aide Modelling
1988-01-01
[Only extraction fragments of this report are available. Recoverable content: roadway and piling surface temperature predictions (no radiosity incident on the lower surface) compared with temperature estimates; a heat-balance statement equating heat gained from water with heat lost by long-wave radiation, with the conduction term expressed as before; and Figure 8, "Effect of No Radiosity Incident on Lower Surface", plotted against local time (hrs).]
Localized magnetism in liquid Al80Mn20 alloys: A first-principles investigation
NASA Astrophysics Data System (ADS)
Jakse, N.; LeBacq, O.; Pasturel, A.
2006-04-01
We present first-principles investigations of the formation of magnetic moments in liquid Al80Mn20 alloys as a function of temperature. We predict the existence of large magnetic moments on Mn atoms which are close to that of the single-impurity limit. The wide distribution of moments can be understood in terms of fluctuations in the local environment. Our calculations also predict that thermal expansion effects within the single-impurity model mainly explain the striking increase of magnetism with temperature.
State-space prediction model for chaotic time series
NASA Astrophysics Data System (ADS)
Alparslan, A. K.; Sayar, M.; Atilgan, A. R.
1998-08-01
A simple method for predicting the continuation of scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique, in connection with time-delay embedding, is employed to reconstruct the state space. A local forecasting model based upon the time evolution of topological neighbors in the reconstructed phase space is suggested. A moving root-mean-square error is utilized in order to monitor the error along the prediction horizon. The model is tested on the convection amplitude of the Lorenz model. The results indicate that for approximately 100 cycles of training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
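A bare-bones version of this state-space approach, delay embedding plus prediction from the evolution of nearby reconstructed states, is sketched below for the Lorenz convection amplitude. The embedding dimension, delay and neighbor count are assumptions rather than values from the paper, and the false-nearest-neighbors selection step is omitted for brevity.

```python
# Minimal delay-embedding + local-neighbor forecast on the Lorenz x-variable.
# Embedding parameters and neighbor count are assumed, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t = np.arange(0, 60, 0.02)
x = solve_ivp(lorenz, (0, 60), [1.0, 1.0, 1.0], t_eval=t, rtol=1e-8).y[0]

m, tau, k, horizon = 3, 8, 10, 50           # embedding dim, delay, neighbors, steps ahead
train = x[:2500]
emb = np.column_stack([train[i * tau: len(train) - (m - 1 - i) * tau]
                       for i in range(m)])  # delay vectors [x(t), x(t+tau), x(t+2*tau)]

state = emb[-1]                             # last reconstructed state = "now"
dist = np.linalg.norm(emb[:-horizon] - state, axis=1)
nn = np.argsort(dist)[:k]                   # indices of the k nearest past states
pred = emb[nn + horizon, -1].mean()         # average of where those neighbors ended up
truth = x[2500 - 1 + horizon]               # the actual continuation, for comparison
print(f"{horizon}-step prediction: {pred:.3f}   actual: {truth:.3f}")
```

Because the forecast is built from the observed futures of spatial neighbors in the reconstructed state space, it inherits the long-term behavior of the attractor even when individual trajectories diverge.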
Bringing modeling to the masses: A web based system to predict potential species distributions
Graham, Jim; Newman, Greg; Kumar, Sunil; Jarnevich, Catherine S.; Young, Nick; Crall, Alycia W.; Stohlgren, Thomas J.; Evangelista, Paul
2010-01-01
Predicting current and potential species distributions and abundance is critical for managing invasive species, preserving threatened and endangered species, and conserving native species and habitats. Accurate predictive models are needed at local, regional, and national scales to guide field surveys, improve monitoring, and set priorities for conservation and restoration. Modeling capabilities, however, are often limited by access to software and environmental data required for predictions. To address these needs, we built a comprehensive web-based system that: (1) maintains a large database of field data; (2) provides access to field data and a wealth of environmental data; (3) accesses values in rasters representing environmental characteristics; (4) runs statistical spatial models; and (5) creates maps that predict the potential species distribution. The system is available online at www.niiss.org, and provides web-based tools for stakeholders to create potential species distribution models and maps under current and future climate scenarios.
Fumiaki Funahashi; Jennifer L. Parke
2017-01-01
Soil solarization has been shown to be an effective tool to manage Phytophthora spp. within surface soils, but estimating the minimum time required to complete local eradication under variable weather conditions remains unknown. A mathematical model could help predict the effectiveness of solarization at different sites and soil depths....
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohri, Nitin, E-mail: ohri.nitin@gmail.com; Bodner, William R.; Halmos, Balazs
Background: We previously reported that pretreatment positron emission tomography (PET) identifies lesions at high risk for progression after concurrent chemoradiation therapy (CRT) for locally advanced non-small cell lung cancer (NSCLC). Here we validate those findings and generate tumor control probability (TCP) models. Methods: We identified patients treated with definitive, concurrent CRT for locally advanced NSCLC who underwent staging 18F-fluorodeoxyglucose PET/computed tomography. Visible hypermetabolic lesions (primary tumors and lymph nodes) were delineated on each patient's pretreatment PET scan. Posttreatment imaging was reviewed to identify locations of disease progression. Competing risks analyses were performed to examine metabolic tumor volume (MTV) and radiation therapy dose as predictors of local disease progression. TCP modeling was performed to describe the likelihood of local disease control as a function of lesion size. Results: Eighty-nine patients with 259 hypermetabolic lesions (83 primary tumors and 176 regional lymph nodes) met the inclusion criteria. Twenty-eight patients were included in our previous report, and the remaining 61 constituted our validation cohort. The median follow-up time was 22.7 months for living patients. In 20 patients, the first site of progression was a primary tumor or lymph node treated with radiation therapy. The median time to progression for those patients was 11.5 months. Data from our validation cohort confirmed that lesion MTV predicts local progression, with a 30-month cumulative incidence rate of 23% for lesions above 25 cc compared with 4% for lesions below 25 cc (P=.008). We found no evidence that radiation therapy dose was associated with local progression risk. TCP modeling yielded predicted 30-month local control rates of 98% for a 1-cc lesion, 94% for a 10-cc lesion, and 74% for a 50-cc lesion. Conclusion: Pretreatment FDG-PET identifies lesions at risk for progression after CRT for locally advanced NSCLC. Strategies to improve local control should be tested on high-risk lesions, and treatment deintensification for low-risk lesions should be explored.
NASA Astrophysics Data System (ADS)
Cocciaro, B.; Faetti, S.; Fronzoni, L.
2017-08-01
As shown in the EPR paper (Einstein, Podolsky and Rosen, 1935), Quantum Mechanics is a non-local theory. Bell's theorem and the subsequent experiments ruled out the possibility of explaining quantum correlations using only local hidden-variable models. Some authors suggested that quantum correlations could be due to superluminal communications that propagate isotropically with velocity vt > c in a preferred reference frame. For finite values of vt and in some special cases, Quantum Mechanics and superluminal models lead to different predictions. So far, no deviations from the predictions of Quantum Mechanics have been detected and only lower bounds for the superluminal velocities vt have been established. Here we describe a new experiment that increases the maximum detectable superluminal velocities, and we give some preliminary results.
Tokudome, Yoshihiro; Katayanagi, Mishina; Hashimoto, Fumie
2015-06-01
Reconstructed human epidermal culture skin models have been developed for cosmetic and pharmaceutical research. This study evaluated the total and carboxyl esterase activities (i.e., Km and Vmax , respectively) and localization in two reconstructed human epidermal culture skin models (LabCyte EPI-MODEL [Japan Tissue Engineering] and EpiDerm [MatTek/Kurabo]). The usefulness of the reconstruction cultured epidermis was also verified by comparison with human and rat epidermis. Homogenized epidermal samples were fractioned by centrifugation. p-nitrophenyl acetate and 4-methylumbelliferyl acetate were used as substrates of total esterase and carboxyl esterase, respectively. Total and carboxyl esterase activities were present in the reconstructed human epidermal culture skin models and were localized in the cytosol. Moreover, the activities and localization were the same as those in human and rat epidermis. LabCyte EPI-MODEL and EpiDerm are potentially useful for esterase activity prediction in human epidermis.
Montgomery, D.R.; Schmidt, K.M.; Dietrich, W.E.; McKean, J.
2009-01-01
The middle of a hillslope hollow in the Oregon Coast Range failed and mobilized as a debris flow during heavy rainfall in November 1996. Automated pressure transducers recorded high spatial variability of pore water pressure within the area that mobilized as a debris flow, which initiated where local upward flow from bedrock developed into overlying colluvium. Postfailure observations of the bedrock surface exposed in the debris flow scar reveal a strong spatial correspondence between elevated piezometric response and water discharging from bedrock fractures. Measurements of apparent root cohesion on the basal (Cb) and lateral (Cl) scarp demonstrate substantial local variability, with areally weighted values of Cb = 0.1 and Cl = 4.6 kPa. Using measured soil properties and basal root strength, the widely used infinite slope model, employed assuming slope parallel groundwater flow, provides a poor prediction of hydrologic conditions at failure. In contrast, a model including lateral root strength (but neglecting lateral frictional strength) gave a predicted critical value of relative soil saturation that fell within the range defined by the arithmetic and geometric mean values at the time of failure. The 3-D slope stability model CLARA-W, used with locally observed pore water pressure, predicted small areas with lower factors of safety within the overall slide mass at sites consistent with field observations of where the failure initiated. The highly variable and localized nature of the small areas of high pore pressure that can trigger slope failure means, however, that substantial uncertainty appears inevitable when estimating hydrologic conditions within incipient debris flows under natural conditions. Copyright 2009 by the American Geophysical Union.
Adaptation of Mesoscale Weather Models to Local Forecasting
NASA Technical Reports Server (NTRS)
Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.
2003-01-01
Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model ( Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. The evaluation methodology includes objective and subjective verification methodologies. Objective (e.g., statistical) verification of point forecasts is a stringent measure of model performance, but when used alone, it is not usually sufficient for quantifying the value of the overall contribution of the model to the weather-forecasting process. This is especially true for mesoscale models with enhanced spatial and temporal resolution that may be capable of predicting meteorologically consistent, though not necessarily accurate, fine-scale weather phenomena. Therefore, subjective (phenomenological) evaluation, focusing on selected case studies and specific weather features, such as sea breezes and precipitation, has been performed to help quantify the added value that cannot be inferred solely from objective evaluation.
Billings, John; Georghiou, Theo; Blunt, Ian; Bardsley, Martin
2013-01-01
Objectives To test the performance of new variants of models to identify people at risk of an emergency hospital admission. We compared (1) the impact of using alternative data sources (hospital inpatient, A&E, outpatient and general practitioner (GP) electronic medical records) (2) the effects of local calibration on the performance of the models and (3) the choice of population denominators. Design Multivariate logistic regressions using person-level data adding each data set sequentially to test value of additional variables and denominators. Setting 5 Primary Care Trusts within England. Participants 1 836 099 people aged 18–95 registered with GPs on 31 July 2009. Main outcome measures Models to predict hospital admission and readmission were compared in terms of the positive predictive value and sensitivity for various risk strata and with the receiver operating curve C statistic. Results The addition of each data set showed moderate improvement in the number of patients identified with little or no loss of positive predictive value. However, even with inclusion of GP electronic medical record information, the algorithms identified only a small number of patients with no emergency hospital admissions in the previous 2 years. The model pooled across all sites performed almost as well as the models calibrated to local data from just one site. Using population denominators from GP registers led to better case finding. Conclusions These models provide a basis for wider application in the National Health Service. Each of the models examined produces reasonably robust performance and offers some predictive value. The addition of more complex data adds some value, but we were unable to conclude that pooled models performed less well than those in individual sites. Choices about model should be linked to the intervention design. Characteristics of patients identified by the algorithms provide useful information in the design/costing of intervention strategies to improve care coordination/outcomes for these patients. PMID:23980068
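The evaluation quantities reported here (positive predictive value and sensitivity within risk strata, plus the ROC C statistic) are straightforward to reproduce on any person-level risk model. The sketch below does so for a logistic regression on synthetic predictors standing in for the hospital and GP variables, with an assumed top-5% risk cut-off; none of the data or coefficients come from the study.

```python
# Sketch: logistic-regression admission risk model with PPV/sensitivity at a
# top-risk cut-off and the ROC C statistic. Data and cut-off are synthetic/assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 20000
X = np.column_stack([rng.poisson(0.3, n),        # prior emergency admissions
                     rng.poisson(1.0, n),        # A&E attendances
                     rng.integers(18, 96, n)])   # age
logit = -4 + 1.2 * X[:, 0] + 0.4 * X[:, 1] + 0.02 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))    # emergency admission within follow-up

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]

cut = np.quantile(risk, 0.95)                    # flag the top 5% highest-risk people
flagged = risk >= cut
ppv = y[flagged].mean()                          # admissions among those flagged
sensitivity = y[flagged].sum() / y.sum()         # share of all admissions captured
print(f"PPV={ppv:.2f}  sensitivity={sensitivity:.2f}  C={roc_auc_score(y, risk):.2f}")
```

Sweeping the cut-off across risk strata reproduces the kind of trade-off table the authors use to compare data sources and local versus pooled calibration.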
The Space Shuttle Orbiter molecular environment induced by the supplemental flash evaporator system
NASA Technical Reports Server (NTRS)
Ehlers, H. K. F.
1985-01-01
The water vapor environment of the Space Shuttle Orbiter induced by the supplemental flash evaporator during the on-orbit flight phase has been analyzed based on Space II model predictions and orbital flight measurements. Model data of local density, column density, and return flux are presented. Results of return flux measurements with a mass spectrometer during STS-2 and of direct flux measurements during STS-4 are discussed and compared with model predictions.
Liu, Tao; Zhu, Guanghu; Lin, Hualiang; Zhang, Yonghui; He, Jianfeng; Deng, Aiping; Peng, Zhiqiang; Xiao, Jianpeng; Rutherford, Shannon; Xie, Runsheng; Zeng, Weilin; Li, Xing; Ma, Wenjun
2017-01-01
Background Dengue fever (DF) in Guangzhou, Guangdong province in China is an important public health issue. The problem was highlighted in 2014 by a large, unprecedented outbreak. In order to respond in a more timely manner and hence better control such potential outbreaks in the future, this study develops an early warning model that integrates internet-based query data into traditional surveillance data. Methodology and principal findings A Dengue Baidu Search Index (DBSI) was collected from the Baidu website for developing a predictive model of dengue fever in combination with meteorological and demographic factors. Generalized additive models (GAM) with or without DBSI were established. The generalized cross-validation (GCV) score and deviance explained indexes, and the intraclass correlation coefficient (ICC) and root mean squared error (RMSE), were applied to measure the fit and the prediction capability of the models, respectively. Our results show that the DBSI with one-week lag has a positive linear relationship with the local DF occurrence, and the model with DBSI (ICC:0.94 and RMSE:59.86) has a better prediction capability than the model without DBSI (ICC:0.72 and RMSE:203.29). Conclusions Our study suggests that a DBSI combined with traditional disease surveillance and meteorological data can improve the dengue early warning system in Guangzhou. PMID:28263988
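A pared-down stand-in for the model structure described here, weekly dengue counts regressed on a one-week-lagged search index plus a meteorological covariate, is sketched below using a negative-binomial GLM rather than the authors' GAM, on entirely synthetic series. Names such as dbsi and temp are placeholders, not fields from the study's data.

```python
# Simplified stand-in (negative-binomial GLM, synthetic data) for a search-index-
# augmented dengue model; the authors used a GAM with smooth terms.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
weeks = 200
temp = 24 + 6 * np.sin(2 * np.pi * np.arange(weeks) / 52) + rng.normal(0, 1, weeks)
dbsi = rng.gamma(2.0, 50.0, weeks)                       # hypothetical Baidu index
mu = np.exp(0.5 + 0.05 * temp + 0.004 * np.r_[dbsi[0], dbsi[:-1]])
cases = rng.negative_binomial(5, 5 / (5 + mu))           # synthetic weekly DF counts

df = pd.DataFrame({"cases": cases, "temp": temp,
                   "dbsi_lag1": pd.Series(dbsi).shift(1)}).dropna()
X = sm.add_constant(df[["temp", "dbsi_lag1"]])
fit = sm.GLM(df["cases"], X, family=sm.families.NegativeBinomial()).fit()

pred = fit.predict(X)
rmse = np.sqrt(np.mean((df["cases"] - pred) ** 2))
print(fit.params.round(4))
print("in-sample RMSE:", round(rmse, 2))
```

Comparing the RMSE (or ICC) of fits with and without the lagged index is the same model-comparison logic the abstract reports.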
Predicting the dynamic fracture of steel via a non-local strain-energy density failure criterion.
DOT National Transportation Integrated Search
2014-06-01
Predicting the onset of fracture in a material subjected to dynamic loading conditions has typically been heavily mesh-dependent, and often must be specifically calibrated for each geometric design. This can lead to costly models and even : costlier ...
Gille, Laure-Anne; Marquis-Favre, Catherine; Morel, Julien
2016-09-01
An in situ survey was performed in 8 French cities in 2012 to study the annoyance due to combined transportation noises. As the European Commission recommends using the exposure-response relationships suggested by Miedema and Oudshoorn [Environmental Health Perspective, 2001] to predict annoyance due to single transportation noise, these exposure-response relationships were tested using the annoyance due to each transportation noise measured during the French survey. These relationships enabled a good prediction only of the percentages of people highly annoyed by road traffic noise. For the percentages of people annoyed and a little annoyed by road traffic noise, the quality of prediction was weak. For aircraft and railway noises, prediction of annoyance was not satisfactory either. As a consequence, the annoyance equivalents model of Miedema [The Journal of the Acoustical Society of America, 2004], based on these exposure-response relationships, did not enable a good prediction of annoyance due to combined transportation noises. Local exposure-response relationships were derived, following the whole computation suggested by Miedema and Oudshoorn [Environmental Health Perspective, 2001]. They led to a better calculation of annoyance due to each transportation noise in the French cities. A new version of the annoyance equivalents model was proposed using these new exposure-response relationships. This model enabled a better prediction of the total annoyance due to the combined transportation noises. These results therefore encourage improving the prediction of annoyance due to each noise in isolation with local or revised exposure-response relationships, which will also contribute to improving annoyance modeling for combined noises. With this aim in mind, a methodology is proposed to consider noise sensitivity in exposure-response relationships and in the annoyance equivalents model. The results showed that taking this variable into account did not improve either the exposure-response relationships or the annoyance equivalents model. Copyright © 2016 Elsevier Ltd. All rights reserved.
Năpăruş, Magdalena; Kuntner, Matjaž
2012-01-01
Although numerous studies model species distributions, these models are almost exclusively on single species, while studies of evolutionary lineages are preferred as they by definition study closely related species with shared history and ecology. Hermit spiders, genus Nephilengys, represent an ecologically important but relatively species-poor lineage with a globally allopatric distribution. Here, we model Nephilengys global habitat suitability based on known localities and four ecological parameters. We geo-referenced 751 localities for the four most studied Nephilengys species: N. cruentata (Africa, New World), N. livida (Madagascar), N. malabarensis (S-SE Asia), and N. papuana (Australasia). For each locality we overlaid four ecological parameters: elevation, annual mean temperature, annual mean precipitation, and land cover. We used linear backward regression within ArcGIS to select two best fit parameters per species model, and ModelBuilder to map areas of high, moderate and low habitat suitability for each species within its directional distribution. For Nephilengys cruentata suitable habitats are mid elevation tropics within Africa (natural range), a large part of Brazil and the Guianas (area of synanthropic spread), and even North Africa, Mediterranean, and Arabia. Nephilengys livida is confined to its known range with suitable habitats being mid-elevation natural and cultivated lands. Nephilengys malabarensis, however, ranges across the Equator throughout Asia where the model predicts many areas of high ecological suitability in the wet tropics. Its directional distribution suggests the species may potentially spread eastwards to New Guinea where the suitable areas of N. malabarensis largely surpass those of the native N. papuana, a species that prefers dry forests of Australian (sub)tropics. Our model is a customizable GIS tool intended to predict current and future potential distributions of globally distributed terrestrial lineages. Its predictive potential may be tested in foreseeing species distribution shifts due to habitat destruction and global climate change.
Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.
Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon
2017-05-01
Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.
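The two ingredients named here, a local linear (lowess-type) smoother for the historical degradation trend and a neural network that extrapolates future vibration levels until a failure threshold is crossed, can be sketched as follows. The threshold, the synthetic degradation signal and the network size are all assumptions, and the sketch is only one plausible rendering of the letter's pipeline.

```python
# Sketch: lowess-smoothed degradation history + MLP extrapolation to a failure
# threshold for RUL estimation. Signal, threshold and network size are assumed.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
t = np.arange(0, 800, dtype=float)                        # operating hours observed so far
vib = 0.1 + 1e-6 * t ** 2 + rng.normal(0, 0.02, t.size)   # rising vibration level + noise

smooth = lowess(vib, t, frac=0.2, return_sorted=False)    # local linear estimate of trend

net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(t.reshape(-1, 1) / t.max(), smooth)               # learn trend vs (scaled) time

threshold = 1.0                                           # assumed failure vibration level
future = np.arange(t[-1] + 1, t[-1] + 2000)
pred = net.predict(future.reshape(-1, 1) / t.max())
crossing = future[pred >= threshold]
rul = crossing[0] - t[-1] if crossing.size else np.inf
print("estimated RUL (hours):", rul)
```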
Marrero-Ponce, Yovani; Contreras-Torres, Ernesto; García-Jacas, César R; Barigye, Stephen J; Cubillán, Néstor; Alvarado, Ysaías J
2015-06-07
In the present study, we introduce novel 3D protein descriptors based on the bilinear algebraic form in the ℝ(n) space on the coulombic matrix. For the calculation of these descriptors, macromolecular vectors belonging to ℝ(n) space, whose components represent certain amino acid side-chain properties, were used as weighting schemes. Generalization approaches for the calculation of inter-amino acidic residue spatial distances based on Minkowski metrics are proposed. The simple- and double-stochastic schemes were defined as approaches to normalize the coulombic matrix. The local-fragment indices for both amino acid-types and amino acid-groups are presented in order to permit characterizing fragments of interest in proteins. On the other hand, with the objective of taking into account specific interactions among amino acids in global or local indices, geometric and topological cut-offs are defined. To assess the utility of global and local indices a classification model for the prediction of the major four protein structural classes, was built with the Linear Discriminant Analysis (LDA) technique. The developed LDA-model correctly classifies the 92.6% and 92.7% of the proteins on the training and test sets, respectively. The obtained model showed high values of the generalized square correlation coefficient (GC(2)) on both the training and test series. The statistical parameters derived from the internal and external validation procedures demonstrate the robustness, stability and the high predictive power of the proposed model. The performance of the LDA-model demonstrates the capability of the proposed indices not only to codify relevant biochemical information related to the structural classes of proteins, but also to yield suitable interpretability. It is anticipated that the current method will benefit the prediction of other protein attributes or functions. Copyright © 2015 Elsevier Ltd. All rights reserved.
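The core quantity here is a bilinear form x^T C y evaluated on a coulombic-type matrix C built from inter-residue distances, with x and y macromolecular vectors of amino acid side-chain properties. A toy version of that computation, followed by an LDA classifier, is sketched below; the property values, the distance cut-off and the random "protein" data are assumptions, and the published indices add normalizations, Minkowski metrics and local-fragment variants not reproduced here.

```python
# Toy bilinear (x^T C y) descriptor on a coulombic-type matrix plus LDA, with
# random coordinates and stand-in side-chain properties.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(6)
hydroph = rng.normal(size=20)          # stand-in side-chain property per AA type
charge = rng.normal(size=20)           # a second stand-in property

def bilinear_descriptor(coords, residues, cutoff=10.0):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    C = np.where(d <= cutoff, 1.0 / d, 0.0)      # coulombic-type matrix with a cut-off
    x, y = hydroph[residues], charge[residues]   # property-weighted macromolecular vectors
    return x @ C @ y                             # bilinear form over residue pairs

def random_protein(n_res, spread):
    return rng.normal(scale=spread, size=(n_res, 3)), rng.integers(0, 20, n_res)

# Two synthetic "structural classes" differing only in how compact they are.
X = np.array([[bilinear_descriptor(*random_protein(80, s))]
              for s in [4.0] * 60 + [8.0] * 60])
y = np.array([0] * 60 + [1] * 60)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", round(lda.score(X, y), 2))
```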
NASA Astrophysics Data System (ADS)
Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine
2016-04-01
Scenarios of surface weather required for impact studies have to be unbiased and adapted to the space and time scales of the hydro-systems considered. Hence, surface weather scenarios obtained from global climate models and/or numerical weather prediction models are not directly appropriate. Outputs of these models have to be post-processed, which is often carried out with Statistical Downscaling Methods (SDMs). Among those SDMs, approaches based on regression are often applied. For a given station, a regression link can be established between a set of large scale atmospheric predictors and the surface weather variable. These links are then used for the prediction of the latter. However, physical processes generating surface weather vary in time. This is well known for precipitation for instance. The most relevant predictors and the regression link are also likely to vary in time. A better prediction skill is thus classically obtained with a seasonal stratification of the data. Another strategy is to identify the most relevant predictor set and establish the regression link from dates that are similar - or analog - to the target date. In practice, these dates can be selected with an analog model. In this study, we explore the possibility of improving the local performance of an analog model - where the analogy is applied to the geopotential heights at 1000 and 500 hPa - using additional local-scale predictors for the probabilistic prediction of the Safran precipitation over France. For each prediction day, the prediction is obtained from two GLM regression models - for both the occurrence and the quantity of precipitation - for which predictors and parameters are estimated from the analog dates. First, the resulting combined model noticeably increases prediction performance by adapting the downscaling link for each prediction day. Second, the predictors selected for a given prediction depend on the large-scale situation and on the region considered. Finally, even with such an adaptive predictor identification, the downscaling link appears to be robust: for a given prediction day, predictors selected for different locations within a region are similar and the regression parameters are consistent within the region of interest.
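A compressed illustration of the analog-plus-GLM chain described here is given below: analog dates are selected by a simple Euclidean distance on a synthetic large-scale predictor field, then a logistic model for precipitation occurrence and a Gamma GLM for the wet-day amount are fitted on those analog dates only. The field names, the number of analogs and all data are assumptions; the actual method uses geopotential-height analogy and the Safran reanalysis.

```python
# Sketch of analog-date selection followed by occurrence (logistic) and amount
# (Gamma GLM) models fitted on the analogs only. All data and settings are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_days, n_grid = 3000, 25
z500 = rng.normal(size=(n_days, n_grid))                       # stand-in large-scale field
local_pred = z500[:, :2] + rng.normal(0, 0.3, (n_days, 2))     # "local-scale" predictors
p_wet = 1 / (1 + np.exp(-0.5 * local_pred[:, 0]))
wet = rng.binomial(1, p_wet)
amount = np.where(wet, rng.gamma(2.0, np.exp(0.3 * local_pred[:, 1])), 0.0)

target = -1                                                    # day to downscale
dist = np.linalg.norm(z500[:target] - z500[target], axis=1)
analogs = np.argsort(dist)[:100]                               # 100 nearest analog dates

Xa = sm.add_constant(local_pred[analogs])
occ = sm.GLM(wet[analogs], Xa, family=sm.families.Binomial()).fit()
wet_analogs = analogs[wet[analogs] == 1]
amt = sm.GLM(amount[wet_analogs], sm.add_constant(local_pred[wet_analogs]),
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()

x_t = sm.add_constant(local_pred[[target]], has_constant="add")
print(f"P(wet): {occ.predict(x_t)[0]:.2f}   expected wet-day amount: {amt.predict(x_t)[0]:.2f}")
```

Refitting both GLMs on a fresh analog set for every prediction day is what lets the downscaling link adapt to the large-scale situation, as the abstract emphasizes.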
Simón, Luis; Afonin, Alexandr; López-Díez, Lucía Isabel; González-Miguel, Javier; Morchón, Rodrigo; Carretón, Elena; Montoya-Alonso, José Alberto; Kartashev, Vladimir; Simón, Fernando
2014-03-01
Zoonotic filarioses caused by Dirofilaria immitis and Dirofilaria repens are transmitted by culicid mosquitoes; Dirofilaria transmission therefore depends on climatic factors such as temperature and humidity. In spite of the dry climate of most of the Spanish territory, there are extensive irrigated crop areas providing moist habitats favourable for mosquito breeding. A GIS model to predict the risk of Dirofilaria transmission in Spain, based on temperature and rainfall data as well as on the distribution of irrigated crop areas, is constructed. The model predicts that a potential risk of Dirofilaria transmission exists throughout the Spanish territory. The highest transmission risk exists in several areas of Andalucía, Extremadura, Castilla-La Mancha, Murcia, Valencia, Aragón and Cataluña, where moderate to high temperatures coincide with extensive irrigated crops. High risk is also predicted in the Balearic Islands and at some points in the Canary Islands. The lowest risk is predicted in cold northern areas and in dry, scarcely irrigated or non-irrigated southeastern areas. The existence of irrigation locally increases the transmission risk in low-rainfall areas of the Spanish territory. The model can help implement rational preventive therapy guidelines in accordance with the transmission characteristics of each local area. Moreover, the use of humidity-related factors could be of interest for future predictions in countries with similar environmental characteristics. Copyright © 2014 Elsevier B.V. All rights reserved.
Lustgarten, Jonathan Lyle; Balasubramanian, Jeya Balaji; Visweswaran, Shyam; Gopalakrishnan, Vanathi
2017-03-01
The comprehensibility of good predictive models learned from high-dimensional gene expression data is attractive because it can lead to biomarker discovery. Several good classifiers provide comparable predictive performance but differ in their abilities to summarize the observed data. We extend a Bayesian Rule Learning (BRL-GSS) algorithm, previously shown to be a significantly better predictor than other classical approaches in this domain. It searches a space of Bayesian networks using a decision tree representation of its parameters with global constraints, and infers a set of IF-THEN rules. The number of parameters, and therefore the number of rules, grows combinatorially with the number of predictor variables in the model. We relax these global constraints to a more generalizable local structure (BRL-LSS). BRL-LSS entails a more parsimonious set of rules because it does not have to generate all combinatorial rules. The search space of local structures is much richer than the space of global structures. We design BRL-LSS with the same worst-case time complexity as BRL-GSS while exploring a richer and more complex model space. We measure predictive performance using the Area Under the ROC Curve (AUC) and accuracy. We measure model parsimony by noting the average number of rules and variables needed to describe the observed data. We evaluate the predictive and parsimony performance of BRL-GSS, BRL-LSS, and the state-of-the-art C4.5 decision tree algorithm with 10-fold cross-validation on ten microarray gene-expression diagnostic datasets. In these experiments, we observe that BRL-LSS is similar to BRL-GSS in terms of predictive performance, while generating a much more parsimonious set of rules to explain the same observed data. BRL-LSS also needs fewer variables than C4.5 to explain the data with similar predictive performance. We also conduct a feasibility study to demonstrate the general applicability of our BRL methods on the newer RNA sequencing gene-expression data.
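For readers unfamiliar with rule induction from trees, the sketch below shows how a shallow decision tree can be printed as nested IF-THEN conditions. This uses scikit-learn's CART implementation, not the BRL-GSS/BRL-LSS algorithms, and the "gene expression" data are synthetic.

```python
# Illustrative sketch only: extract IF-THEN rules from a shallow decision tree,
# loosely analogous to the rule sets discussed above. Data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))                     # 200 samples x 5 "gene expression" features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # synthetic diagnostic label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# export_text prints the tree as nested IF-THEN conditions on the features
print(export_text(tree, feature_names=[f"gene_{i}" for i in range(5)]))
```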
Scheel, Ida; Ferkingstad, Egil; Frigessi, Arnoldo; Haug, Ola; Hinnerichsen, Mikkel; Meze-Hausken, Elisabeth
2013-01-01
Climate change will affect the insurance industry. We develop a Bayesian hierarchical statistical approach to explain and predict insurance losses due to weather events at a local geographic scale. The number of weather-related insurance claims is modelled by combining generalized linear models with spatially smoothed variable selection. Using Gibbs sampling and reversible jump Markov chain Monte Carlo methods, this model is fitted on daily weather and insurance data from each of the 319 municipalities which constitute southern and central Norway for the period 1997–2006. Precise out-of-sample predictions validate the model. Our results show interesting regional patterns in the effect of different weather covariates. In addition to being useful for insurance pricing, our model can be used for short-term predictions based on weather forecasts and for long-term predictions based on downscaled climate models. PMID:23396890
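A minimal sketch of the core regression idea, assuming a plain (non-hierarchical, non-spatial) Poisson GLM of daily claim counts on weather covariates; the covariates and data are synthetic placeholders rather than the Norwegian insurance data.

```python
# Minimal sketch under stated assumptions: a simple Poisson GLM of daily
# weather-related claim counts on two synthetic weather covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
precip = rng.gamma(1.5, 4.0, size=n)        # daily precipitation (mm), synthetic
wind = rng.gamma(2.0, 3.0, size=n)          # mean wind speed (m/s), synthetic
X = sm.add_constant(np.column_stack([precip, wind]))
claims = rng.poisson(np.exp(-2.0 + 0.05 * precip + 0.03 * wind))   # synthetic daily claim counts

fit = sm.GLM(claims, X, family=sm.families.Poisson()).fit()
print(fit.summary())
```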
Developing a predictive tropospheric ozone model for Tabriz
NASA Astrophysics Data System (ADS)
Khatibi, Rahman; Naghipour, Leila; Ghorbani, Mohammad A.; Smith, Michael S.; Karimi, Vahid; Farhoudi, Reza; Delafrouz, Hadi; Arvanaghi, Hadi
2013-04-01
Predictive ozone models are becoming indispensable tools, providing a capability for pollution alerts that serve people vulnerable to the risks. We have developed a tropospheric ozone prediction capability for Tabriz, Iran, using five modeling strategies: three regression-type methods, Multiple Linear Regression (MLR), Artificial Neural Networks (ANNs), and Gene Expression Programming (GEP); and two auto-regression-type models, Nonlinear Local Prediction (NLP), which implements chaos theory, and Auto-Regressive Integrated Moving Average (ARIMA) models. The regression-type strategies explain the data in terms of temperature, solar radiation, dew point temperature, and wind speed, whereas the auto-regression-type models regress present ozone values on their past values. The ozone time series are available at various time intervals, including hourly intervals, from August 2010 to March 2011. The results for the MLR, ANN, and GEP models are not overly good, but those produced by NLP and ARIMA are promising for establishing a forecasting capability.
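Of the five strategies, the nonlinear local prediction (NLP) idea is sketched below under stated assumptions: the series is delay-embedded and the next value is forecast from the nearest embedded neighbours. The embedding dimension, delay, and neighbour count are illustrative choices, and the series is synthetic.

```python
# Hedged sketch of a nonlinear local prediction step: delay-embed the series and
# forecast one step ahead from the analogues (nearest neighbours) of the current
# embedded state. Parameters and data are illustrative, not the Tabriz setup.
import numpy as np

def nlp_forecast(series, dim=3, delay=1, k=5):
    """Predict the next value from the k nearest delay-embedded neighbours."""
    idx_last = len(series) - 1
    vectors, nxt = [], []
    for t in range((dim - 1) * delay, idx_last):
        vectors.append(series[t - np.arange(dim) * delay])   # delay vector ending at t
        nxt.append(series[t + 1])                            # value that followed it
    vectors, nxt = np.asarray(vectors), np.asarray(nxt)
    query = series[idx_last - np.arange(dim) * delay]
    dist = np.linalg.norm(vectors - query, axis=1)
    nearest = np.argsort(dist)[:k]
    return nxt[nearest].mean()      # local (zeroth-order) prediction

hourly_ozone = np.sin(np.linspace(0, 60, 600)) + 0.1 * np.random.default_rng(5).normal(size=600)
print("next-hour forecast:", nlp_forecast(hourly_ozone))
```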
Arctic Sea Ice Predictability and the Sea Ice Prediction Network
NASA Astrophysics Data System (ADS)
Wiggins, H. V.; Stroeve, J. C.
2014-12-01
Drastic reductions in Arctic sea ice cover have increased the demand for Arctic sea ice predictions by a range of stakeholders, including local communities, resource managers, industry and the public. The science of sea-ice prediction has been challenged to keep up with these developments. Efforts such as the SEARCH Sea Ice Outlook (SIO; http://www.arcus.org/sipn/sea-ice-outlook) and the Sea Ice for Walrus Outlook have provided a forum for the international sea-ice prediction and observing community to explore and compare different approaches. The SIO, originally organized by the Study of Environmental Arctic Change (SEARCH), is now managed by the new Sea Ice Prediction Network (SIPN), which is building a collaborative network of scientists and stakeholders to improve Arctic sea ice prediction. The SIO synthesizes predictions from a variety of methods, including heuristic approaches and statistical and/or dynamical models. In a recent study, SIO data from 2008 to 2013 were analyzed. The analysis revealed that in some years the predictions were very successful, while in other years they were not. Years that were anomalous compared to the long-term trend have proven more difficult to predict, regardless of which method was employed. This year, in response to feedback from users and contributors to the SIO, several enhancements have been made to the SIO reports. One is to encourage contributors to provide spatial probability maps of sea ice cover in September and of the first day each location becomes ice-free; these are an example of subseasonal-to-seasonal, local-scale predictions. Another enhancement is a separate analysis of the modeling contributions. In the June 2014 SIO report, 10 of 28 outlooks were produced by dynamic-thermodynamic sea ice models that explicitly simulate sea ice. Half of these were fully coupled (atmosphere, ice, and ocean) models that additionally employ data assimilation. Both of these subsets (models, and coupled models with data assimilation) have a far narrower spread in their predictions, indicating that the results of these more sophisticated methods are converging. Here we summarize and synthesize the 2014 contributions to the SIO, highlight the important questions and challenges that remain to be addressed, and present data on stakeholder uses of the SIO and related SIPN products.
Florio, C S
2018-06-01
A computational model was used to compare the local bone strengthening effectiveness of various isometric exercises that may reduce the likelihood of distal tibial stress fractures. The developed model predicts local endosteal and periosteal cortical accretion and resorption based on relative local and global measures of the tibial stress state and its surface variation. Using a multisegment 3-dimensional leg model, tibia shape adaptations due to 33 combinations of hip, knee, and ankle joint angles and the direction of a single or sequential series of generated isometric resultant forces were predicted. The maximum stress at a common fracture-prone region in each optimized geometry was compared under likely stress fracture-inducing midstance jogging conditions. No direct correlations were found between stress reductions over an initially uniform circular hollow cylindrical geometry under these critical design conditions and the exercise-based sets of active muscles, joint angles, or individual muscle force and local stress magnitudes. Additionally, typically favorable increases in cross-sectional geometric measures did not guarantee stress decreases at these locations. Instead, tibial stress distributions under the exercise conditions best predicted strengthening ability. Exercises producing larger anterior distal stresses created optimized tibia shapes that better resisted the high midstance jogging bending stresses. Bent leg configurations generating anteriorly directed or inferiorly directed resultant forces created favorable adaptations. None of the studied loads produced by a straight leg was significantly advantageous. These predictions and the insight gained can provide preliminary guidance in the screening and development of targeted bone strengthening techniques for those susceptible to distal tibial stress fractures. Copyright © 2018 John Wiley & Sons, Ltd.
Perez-Guaita, David; Kuligowski, Julia; Quintás, Guillermo; Garrigues, Salvador; Guardia, Miguel de la
2013-03-30
Locally weighted partial least squares regression (LW-PLSR) has been applied to the determination of four clinical parameters in human serum samples (total protein, triglyceride, glucose and urea contents) by Fourier transform infrared (FTIR) spectroscopy. Classical LW-PLSR models were constructed using different spectral regions. For the selection of parameters in LW-PLSR modeling, a multi-parametric study was carried out employing the minimum root-mean square error of cross validation (RMSCV) as the objective function. In order to overcome the effect of strong matrix interferences on the predictive accuracy of LW-PLSR models, this work focuses on sample selection. Accordingly, a novel strategy for the development of local models is proposed. It is based on the use of: (i) principal component analysis (PCA) performed on an analyte-specific spectral region to identify the most similar sample spectra, and (ii) partial least squares regression (PLSR) constructed using the whole spectrum. Results found by using this strategy were compared to those provided by PLSR using the same spectral intervals as for LW-PLSR. Prediction errors found by both the classical and the modified LW-PLSR improved on those obtained by PLSR. Hence, both proposed approaches were useful for the determination of analytes present in a complex matrix such as human serum samples. Copyright © 2013 Elsevier B.V. All rights reserved.
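The sample-selection strategy can be sketched roughly as follows, under stated assumptions: neighbours are chosen by PCA scores computed on an assumed analyte-specific spectral window, and a PLS model is then fitted on those neighbours using the whole spectrum. The spectra, reference values, and window are synthetic placeholders.

```python
# Sketch of the sample-selection idea, not the paper's code: pick the calibration
# spectra most similar to the query in a PCA score space computed on an
# analyte-specific region, then fit PLS on those neighbours with the full spectrum.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
spectra = rng.normal(size=(120, 500))            # 120 serum FTIR spectra x 500 wavenumbers (synthetic)
glucose = rng.normal(90, 15, size=120)           # reference glucose values (mg/dL), synthetic
query = rng.normal(size=(1, 500))                # spectrum of the sample to predict

region = slice(200, 260)                         # assumed analyte-specific spectral window
pca = PCA(n_components=3).fit(spectra[:, region])
scores = pca.transform(spectra[:, region])
q_score = pca.transform(query[:, region])

dist = np.linalg.norm(scores - q_score, axis=1)
neighbours = np.argsort(dist)[:40]               # 40 most similar calibration samples

pls = PLSRegression(n_components=5).fit(spectra[neighbours], glucose[neighbours])
print("predicted glucose:", float(pls.predict(query)[0, 0]))
```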
NASA Astrophysics Data System (ADS)
Stauch, V. J.; Gwerder, M.; Gyalistras, D.; Oldewurtel, F.; Schubiger, F.; Steiner, P.
2010-09-01
The high proportion of the total primary energy consumption by buildings has increased public interest in the optimisation of buildings' operation and is also driving the development of novel control approaches for the indoor climate. In this context, the use of weather forecasts presents an interesting and, thanks to advances in information and predictive control technologies and the continuous improvement of numerical weather prediction (NWP) models, increasingly attractive option for improved building control. Within the research project OptiControl (www.opticontrol.ethz.ch), predictive control strategies for a wide range of buildings, heating, ventilation and air conditioning (HVAC) systems, and representative locations in Europe are being investigated with the aid of newly developed modelling and simulation tools. Grid-point predictions of radiation, temperature and humidity from the high-resolution limited-area NWP model COSMO-7 (see www.cosmo-model.org), together with local measurements, are used as disturbances and inputs to the building system. The control task considered consists of minimizing energy consumption whilst maintaining occupant comfort. In this presentation, we use the simulation-based OptiControl methodology to investigate the impact of COSMO-7 forecasts on the performance of predictive building control and the resulting energy savings. For this, we have selected building cases that were shown to benefit from a prediction horizon of up to 3 days and are therefore particularly suitable for the use of numerical weather forecasts. We show that the controller performance is sensitive to the quality of the weather predictions, most importantly of the incident radiation on differently oriented façades. However, radiation is characterised by high temporal and spatial variability, in part caused by small-scale and fast-changing cloud formation and dissolution processes that are only partially represented in the COSMO-7 grid-point predictions. On the other hand, buildings are affected by the particular local weather conditions at the building site. To overcome this discrepancy, we make use of local measurements to statistically adapt the COSMO-7 model output to the meteorological conditions at the building. For this, we have developed a general correction algorithm that exploits systematic properties of the COSMO-7 prediction error and explicitly estimates the degree of temporal autocorrelation using online recursive estimation. The resulting corrected predictions are improved especially for the first few hours, which are the most crucial for the predictive controller and, ultimately, for the reduction of primary energy consumption using predictive control. The use of numerical weather forecasts in predictive building automation is one example in a wide field of weather-dependent advanced energy-saving technologies. Our work particularly highlights the need for the development of specifically tailored weather forecast products by (statistical) postprocessing in order to meet the requirements of specific applications.
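A hypothetical sketch of a recursive correction of NWP grid-point forecasts with local measurements is given below; it estimates the forecast bias and the lag-1 error autocorrelation with exponential forgetting. This is an assumption-laden stand-in, not the OptiControl correction algorithm.

```python
# Minimal sketch (assumptions, not the published algorithm): recursively estimate the
# forecast bias and the lag-1 autocorrelation of the forecast error with exponential
# forgetting, and use them to correct the next forecast with the latest local error.
import numpy as np

class RecursiveForecastCorrector:
    def __init__(self, forget=0.98):
        self.lam = forget
        self.bias = 0.0          # running mean of the forecast error
        self.cov = 0.0           # running lag-1 error covariance
        self.var = 1e-6          # running error variance
        self.prev_err = 0.0

    def update(self, forecast, measurement):
        err = measurement - forecast
        self.bias = self.lam * self.bias + (1 - self.lam) * err
        d, d_prev = err - self.bias, self.prev_err - self.bias
        self.cov = self.lam * self.cov + (1 - self.lam) * d * d_prev
        self.var = self.lam * self.var + (1 - self.lam) * d * d
        self.prev_err = err

    def correct(self, next_forecast):
        rho = self.cov / self.var            # estimated lag-1 autocorrelation
        return next_forecast + self.bias + rho * (self.prev_err - self.bias)

corr = RecursiveForecastCorrector()
rng = np.random.default_rng(7)
for hour in range(48):                       # synthetic temperature forecasts vs. measurements
    fc = 20 + 5 * np.sin(hour / 24 * 2 * np.pi)
    obs = fc + 1.5 + rng.normal(scale=0.5)   # systematic +1.5 degC local offset
    print(f"h={hour:02d} corrected={corr.correct(fc):.2f} measured={obs:.2f}")
    corr.update(fc, obs)
```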
Assessing the formability of metallic sheets by means of localized and diffuse necking models
NASA Astrophysics Data System (ADS)
Comşa, Dan-Sorin; Lǎzǎrescu, Lucian; Banabic, Dorel
2016-10-01
The main objective of the paper consists in elaborating a unified framework that allows the theoretical assessment of sheet metal formability. Hill's localized necking model and the Extended Maximum Force Criterion proposed by Mattiasson, Sigvant, and Larsson have been selected for this purpose. Both models are thoroughly described together with their solution procedures. A comparison of the theoretical predictions with experimental data referring to the formability of a DP600 steel sheet is also presented by the authors.
Hatala, J.A.; Dietze, M.C.; Crabtree, R.L.; Kendall, Katherine C.; Six, D.; Moorcroft, P.R.
2011-01-01
The introduction of nonnative pathogens is altering the scale, magnitude, and persistence of forest disturbance regimes in the western United States. In the high-altitude whitebark pine (Pinus albicaulis) forests of the Greater Yellowstone Ecosystem (GYE), white pine blister rust (Cronartium ribicola) is an introduced fungal pathogen that is now the principal cause of tree mortality in many locations. Although blister rust eradication has failed in the past, there is nonetheless substantial interest in monitoring the disease and its rate of progression in order to predict the future impact of forest disturbances within this critical ecosystem. This study integrates data from five different field-monitoring campaigns from 1968 to 2008 to create a blister rust infection model for sites located throughout the GYE. Our model parameterizes the past rates of blister rust spread in order to project its future impact on high-altitude whitebark pine forests. Because the process of blister rust infection and mortality of individuals occurs over the time frame of many years, the model in this paper operates on a yearly time step and defines a series of whitebark pine infection classes: susceptible, slightly infected, moderately infected, and dead. In our analysis, we evaluate four different infection models that compare local vs. global density dependence on the dynamics of blister rust infection. We compare models in which blister rust infection is: (1) independent of the density of infected trees, (2) locally density-dependent, (3) locally density-dependent with a static global infection rate among all sites, and (4) both locally and globally density-dependent. Model evaluation through the predictive loss criterion for Bayesian analysis supports the model that is both locally and globally density-dependent. Using this best-fit model, we predicted the average residence times for the four stages of blister rust infection in our model, and we found that, on average, whitebark pine trees within the GYE remain susceptible for 6.7 years, take 10.9 years to transition from slightly infected to moderately infected, and take 9.4 years to transition from moderately infected to dead. Using our best-fit model, we project the future levels of blister rust infestation in the GYE at critical sites over the next 20 years.
New developments in tribomechanical modeling of automotive sheet steel forming
NASA Astrophysics Data System (ADS)
Khandeparkar, Tushar; Chezan, Toni; van Beeck, Jeroen
2018-05-01
Forming of automotive sheet metal body panels is a complex process influenced by both the material properties and contact conditions in the forming tooling. Material properties are described by the material constitutive behavior and the material flow into the forming die can be described by the tribological system. This paper investigates the prediction accuracy of the forming process using the Tata Steel state of the art description of the material constitutive behavior in combination with different friction models. A cross-die experiment is used to investigate the accuracy of local deformation modes typically seen in automotive sheet metal forming operations. Results of advanced friction models as well as the classical Coulomb friction description are compared to the experimentally measured strain distribution and material draw-in. Two hot-dip galvanized coated steel forming grades were used for the investigations. The results show that the accuracy of the simulation is not guaranteed by the advanced friction models for the entire investigated blank holder force range, both globally and locally. A measurable difference between the calculated and measured local strains is seen for both studied models even in the case where the global indicator, i.e. the draw-in, is well predicted.
A Predictive Model of Anesthesia Depth Based on SVM in the Primary Visual Cortex
Shi, Li; Li, Xiaoyuan; Wan, Hong
2013-01-01
In this paper, a novel model for predicting anesthesia depth is put forward based on local field potentials (LFPs) in the primary visual cortex (V1 area) of rats. The model is constructed using a Support Vector Machine (SVM) to realize anesthesia depth online prediction and classification. The raw LFP signal was first decomposed into some special scaling components. Among these components, those containing higher frequency information were well suited for more precise analysis of the performance of the anesthetic depth by wavelet transform. Secondly, the characteristics of anesthetized states were extracted by complexity analysis. In addition, two frequency domain parameters were selected. The above extracted features were used as the input vector of the predicting model. Finally, we collected the anesthesia samples from the LFP recordings under the visual stimulus experiments of Long Evans rats. Our results indicate that the predictive model is accurate and computationally fast, and that it is also well suited for online predicting. PMID:24044024
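A rough sketch of the feature-extraction-plus-SVM pipeline is shown below, assuming a discrete wavelet decomposition and per-scale energy features; the wavelet, the features, the epochs, and the depth labels are placeholders rather than the recorded LFP data.

```python
# Illustrative sketch under stated assumptions: decompose an LFP epoch with a discrete
# wavelet transform, use per-scale energy features, and classify anesthesia depth
# with an SVM. Signals and labels are synthetic.
import numpy as np
import pywt
from sklearn.svm import SVC

rng = np.random.default_rng(8)

def epoch_features(epoch):
    coeffs = pywt.wavedec(epoch, "db4", level=4)           # multi-scale decomposition
    energies = [np.sum(c ** 2) for c in coeffs]            # energy per scale
    return np.log(np.asarray(energies) + 1e-12)

epochs = rng.normal(size=(100, 1000))                      # 100 LFP epochs x 1000 samples (synthetic)
labels = rng.integers(0, 2, size=100)                      # 0 = light, 1 = deep (synthetic)
X = np.vstack([epoch_features(e) for e in epochs])

clf = SVC(kernel="rbf", C=1.0).fit(X, labels)
print("predicted depth class of first epoch:", clf.predict(X[:1])[0])
```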
A generative, probabilistic model of local protein structure.
Boomsma, Wouter; Mardia, Kanti V; Taylor, Charles C; Ferkinghoff-Borg, Jesper; Krogh, Anders; Hamelryck, Thomas
2008-07-01
Despite significant progress in recent years, protein structure prediction maintains its status as one of the prime unsolved problems in computational biology. One of the key remaining challenges is an efficient probabilistic exploration of the structural space that correctly reflects the relative conformational stabilities. Here, we present a fully probabilistic, continuous model of local protein structure in atomic detail. The generative model makes efficient conformational sampling possible and provides a framework for the rigorous analysis of local sequence-structure correlations in the native state. Our method represents a significant theoretical and practical improvement over the widely used fragment assembly technique by avoiding the drawbacks associated with a discrete and nonprobabilistic approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Kunc, Vlastimil; Jin, Xiaoshi
2013-12-18
This article illustrates the predictive capabilities for long-fiber thermoplastic (LFT) composites that first simulate the injection molding of LFT structures by Autodesk® Simulation Moldflow® Insight (ASMI) to accurately predict fiber orientation and length distributions in these structures. After validating fiber orientation and length predictions against the experimental data, the predicted results are used by ASMI to compute distributions of elastic properties in the molded structures. In addition, local stress-strain responses and damage accumulation under tensile loading are predicted by an elastic-plastic damage model of EMTA-NLA, a nonlinear analysis tool implemented in ABAQUS® via user-subroutines using an incremental Eshelby-Mori-Tanaka approach. Predicted stress-strain responses up to failure and damage accumulations are compared to the experimental results to validate the model.
Lara-Reséndiz, Rafael A; Gadsden, Héctor; Rosen, Philip C; Sinervo, Barry; Méndez-De la Cruz, Fausto R
2015-02-01
Thermoregulatory studies of ectothermic organisms are an important tool for ecological physiology, evolutionary ecology and behavior, and recently have become central for evaluating and predicting global climate change impacts. Here, we present a novel combination of field, laboratory, and modeling approaches to examine body temperature regulation, habitat thermal quality, and hours of thermal restriction on the activity of two sympatric, aridlands horned lizards (Phrynosoma cornutum and Phrynosoma modestum) at three contrasting Chihuahuan Desert sites in Mexico. Using these physiological data, we estimate local extinction risk under predicted climate change within their current geographical distribution. We followed the Hertz et al. (1993, Am. Nat., 142, 796-818) protocol for evaluating thermoregulation and the Sinervo et al. (2010, Science, 328, 894-899) eco-physiological model of extinction under climatic warming. Thermoregulatory indices suggest that both species thermoregulate effectively despite living in habitats of low thermal quality, although high environmental temperatures restrict the activity period of both species. Based on our measurements, if air temperature rises as predicted by climate models, the extinction model projects that P. cornutum will become locally extinct at 6% of sites by 2050 and 18% by 2080 and P. modestum will become extinct at 32% of sites by 2050 and 60% by 2080. The method we apply, using widely available or readily acquired thermal data, along with the modeling, appeared to identify several unique ecological traits that seemingly exacerbate climate sensitivity of P. modestum. Copyright © 2014 Elsevier Ltd. All rights reserved.
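For reference, a minimal sketch of the Hertz et al. (1993) thermoregulation indices cited above is given below: the mean deviations of body (db) and operative environmental (de) temperatures from the preferred range, and the effectiveness index E = 1 - mean(db)/mean(de). The temperatures and set-point range are synthetic placeholders.

```python
# Minimal sketch of the Hertz et al. (1993) indices: deviations of body (db) and
# operative (de) temperatures from the preferred range, and E = 1 - mean(db)/mean(de).
# All temperature values below are synthetic.
import numpy as np

def deviation(temps, t_low, t_high):
    """Deviation of each temperature from the set-point range [t_low, t_high]."""
    return np.where(temps < t_low, t_low - temps,
                    np.where(temps > t_high, temps - t_high, 0.0))

rng = np.random.default_rng(9)
tb = rng.normal(34.0, 1.5, size=200)       # field body temperatures (degC), synthetic
te = rng.normal(40.0, 6.0, size=500)       # operative temperatures from physical models (degC), synthetic
tset_low, tset_high = 33.0, 36.0           # assumed preferred range from a lab thermal gradient

db, de = deviation(tb, tset_low, tset_high), deviation(te, tset_low, tset_high)
E = 1.0 - db.mean() / de.mean()
print(f"mean db={db.mean():.2f}, mean de={de.mean():.2f} (habitat thermal quality), E={E:.2f}")
```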
Protein Sub-Nuclear Localization Prediction Using SVM and Pfam Domain Information
Kumar, Ravindra; Jain, Sohni; Kumari, Bandana; Kumar, Manish
2014-01-01
The nucleus is the largest and the highly organized organelle of eukaryotic cells. Within nucleus exist a number of pseudo-compartments, which are not separated by any membrane, yet each of them contains only a specific set of proteins. Understanding protein sub-nuclear localization can hence be an important step towards understanding biological functions of the nucleus. Here we have described a method, SubNucPred developed by us for predicting the sub-nuclear localization of proteins. This method predicts protein localization for 10 different sub-nuclear locations sequentially by combining presence or absence of unique Pfam domain and amino acid composition based SVM model. The prediction accuracy during leave-one-out cross-validation for centromeric proteins was 85.05%, for chromosomal proteins 76.85%, for nuclear speckle proteins 81.27%, for nucleolar proteins 81.79%, for nuclear envelope proteins 79.37%, for nuclear matrix proteins 77.78%, for nucleoplasm proteins 76.98%, for nuclear pore complex proteins 88.89%, for PML body proteins 75.40% and for telomeric proteins it was 83.33%. Comparison with other reported methods showed that SubNucPred performs better than existing methods. A web-server for predicting protein sub-nuclear localization named SubNucPred has been established at http://14.139.227.92/mkumar/subnucpred/. Standalone version of SubNucPred can also be downloaded from the web-server. PMID:24897370
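The amino acid composition part of such a predictor can be sketched as follows; the Pfam-domain presence/absence step is omitted, and the sequences and location labels are synthetic. This is an illustration of the idea, not the SubNucPred code.

```python
# Sketch only: compute amino acid composition features for protein sequences and
# feed them to an SVM. Sequences and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    seq = seq.upper()
    return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

rng = np.random.default_rng(10)
seqs = ["".join(rng.choice(list(AMINO_ACIDS), size=rng.integers(80, 300)))
        for _ in range(60)]
labels = rng.integers(0, 2, size=60)           # e.g. nucleolar vs. non-nucleolar (synthetic)

X = np.vstack([aa_composition(s) for s in seqs])
clf = SVC(kernel="rbf").fit(X, labels)
print("prediction for first sequence:", clf.predict(X[:1])[0])
```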
Memetic Approaches for Optimizing Hidden Markov Models: A Case Study in Time Series Prediction
NASA Astrophysics Data System (ADS)
Bui, Lam Thu; Barlow, Michael
We propose a methodology for employing memetics (local search) within the framework of evolutionary algorithms to optimize the parameters of hidden Markov models. With this proposal, the rate and frequency of using local search are automatically changed over time, either at the population or at the individual level. At the population level, we allow the rate of using local search to decay over time to zero (at the final generation). At the individual level, each individual is equipped with information about when it will do local search and for how long. This information evolves over time alongside the main elements of the chromosome representing the individual.
Deep convolutional neural networks for pan-specific peptide-MHC class I binding prediction.
Han, Youngmahn; Kim, Dongsup
2017-12-28
Computational scanning of peptide candidates that bind to a specific major histocompatibility complex (MHC) can speed up the peptide-based vaccine development process, and therefore various methods are being actively developed. Recently, machine-learning-based methods have generated successful results by training on large amounts of experimental data. However, many machine-learning-based methods are generally less sensitive in recognizing locally clustered interactions, which can synergistically stabilize peptide binding. The deep convolutional neural network (DCNN) is a deep learning method inspired by the visual recognition process of the animal brain, and it is known to capture meaningful local patterns from 2D images. Once peptide-MHC interactions are encoded into image-like array (ILA) data, a DCNN can be employed to build a predictive model for peptide-MHC binding prediction. In this study, we demonstrated that a DCNN is able not only to reliably predict peptide-MHC binding, but also to sensitively detect locally clustered interactions. Nonapeptide-HLA-A and -B binding data were encoded into ILA data, and a DCNN, as a pan-specific prediction model, was trained on the ILA data. The DCNN showed higher performance than other prediction tools on the latest benchmark datasets, which consist of 43 datasets for 15 HLA-A alleles and 25 datasets for 10 HLA-B alleles. In particular, the DCNN outperformed other tools for alleles belonging to the HLA-A3 supertype. The F1 scores of the DCNN were 0.86, 0.94, and 0.67 for the HLA-A*31:01, HLA-A*03:01, and HLA-A*68:01 alleles, respectively, which were significantly higher than those of other tools. We found that the DCNN was able to recognize locally clustered interactions that could synergistically stabilize peptide binding. We developed ConvMHC, a web server providing user-friendly web interfaces for peptide-MHC class I binding prediction using the DCNN; it is accessible via http://jumong.kaist.ac.kr:8080/convmhc. In summary, we developed a novel method for peptide-HLA-I binding prediction using a DCNN trained on ILA data that encode peptide binding data, and demonstrated the reliable performance of the DCNN in nonapeptide binding predictions through independent evaluation on the latest IEDB benchmark datasets. Our approach can be applied to characterize locally clustered patterns in other molecular interactions, such as protein/DNA, protein/RNA, and drug/protein interactions.
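A small 2D convolutional network over image-like arrays can be sketched as below; the input size (9 peptide positions by 34 assumed contact features), the architecture, and the labels are hypothetical and do not reproduce the ConvMHC model.

```python
# Hypothetical sketch of the idea (not the ConvMHC architecture): a small 2D CNN over
# image-like arrays encoding peptide-MHC interactions, predicting binder vs. non-binder.
import torch
import torch.nn as nn

class SmallConvMHC(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((3, 3)),
        )
        self.classifier = nn.Linear(32 * 3 * 3, 2)

    def forward(self, x):                       # x: (batch, 1, 9, 34)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SmallConvMHC()
ila = torch.randn(8, 1, 9, 34)                  # a synthetic mini-batch of ILA inputs
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(ila), labels)
loss.backward()                                 # one illustrative backward pass (no optimizer shown)
print("loss:", float(loss))
```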
Experimental Observation of Two-Dimensional Anderson Localization with the Atomic Kicked Rotor.
Manai, Isam; Clément, Jean-François; Chicireanu, Radu; Hainaut, Clément; Garreau, Jean Claude; Szriftgiser, Pascal; Delande, Dominique
2015-12-11
Dimension 2 is expected to be the lower critical dimension for Anderson localization in a time-reversal-invariant disordered quantum system. Using an atomic quasiperiodic kicked rotor, equivalent to a two-dimensional Anderson-like model, we experimentally study Anderson localization in dimension 2 and we observe localized wave function dynamics. We also show that the localization length depends exponentially on the disorder strength and anisotropy and is in quantitative agreement with the predictions of the self-consistent theory for the 2D Anderson localization.
Exclusion of rare taxa affects performance of the O/E index in bioassessments
The contribution of rare taxa to bioassessments based on multispecies assemblages is the subject of continued debate. As a result, users of predictive models such as River InVertebrate Prediction and Classification System (RIVPACS) disagree on whether to exclude locally rare taxa...
Pretreatment tables predicting pathologic stage of locally advanced prostate cancer.
Joniau, Steven; Spahn, Martin; Briganti, Alberto; Gandaglia, Giorgio; Tombal, Bertrand; Tosco, Lorenzo; Marchioro, Giansilvio; Hsu, Chao-Yu; Walz, Jochen; Kneitz, Burkhard; Bader, Pia; Frohneberg, Detlef; Tizzani, Alessandro; Graefen, Markus; van Cangh, Paul; Karnes, R Jeffrey; Montorsi, Francesco; van Poppel, Hein; Gontero, Paolo
2015-02-01
Pretreatment tables for the prediction of pathologic stage have been published and validated for localized prostate cancer (PCa). No such tables are available for locally advanced (cT3a) PCa. To construct tables predicting pathologic outcome after radical prostatectomy (RP) for patients with cT3a PCa with the aim to help guide treatment decisions in clinical practice. This was a multicenter retrospective cohort study including 759 consecutive patients with cT3a PCa treated with RP between 1987 and 2010. Retropubic RP and pelvic lymphadenectomy. Patients were divided into pretreatment prostate-specific antigen (PSA) and biopsy Gleason score (GS) subgroups. These parameters were used to construct tables predicting pathologic outcome and the presence of positive lymph nodes (LNs) after RP for cT3a PCa using ordinal logistic regression. In the model predicting pathologic outcome, the main effects of biopsy GS and pretreatment PSA were significant. A higher GS and/or higher PSA level was associated with a more unfavorable pathologic outcome. The validation procedure, using a repeated split-sample method, showed good predictive ability. Regression analysis also showed an increasing probability of positive LNs with increasing PSA levels and/or higher GS. Limitations of the study are the retrospective design and the long study period. These novel tables predict pathologic stage after RP for patients with cT3a PCa based on pretreatment PSA level and biopsy GS. They can be used to guide decision making in men with locally advanced PCa. Our study might provide physicians with a useful tool to predict pathologic stage in locally advanced prostate cancer that might help select patients who may need multimodal treatment. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric
2016-01-01
1H Nuclear Magnetic Resonance (NMR)-based metabolic profiling is very promising for the diagnosis of the stages of chronic kidney disease (CKD). Because of the high dimensionality of NMR spectral datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the widely used chemometric methods in NMR metabolomics performs a local exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients with low to mild CKD stages and no renal failure, and 2) patients with moderate to established CKD stages and renal failure. Our predictive algorithm explores the m-dimensional variable space to capture the local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, an L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by the interactions between the variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles and is as efficient as the classical global model using chi-squared variable selection, with approximately 70% correct classification. The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables, indicating that CKD significantly affects the urinary metabolome. In addition, knowledge of the concentrations of urinary metabolites alone classifies the CKD stage of the patients correctly. PMID:27861591
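Assuming the discriminant rules have already been mined, the final prediction step can be sketched as an L2-penalized logistic regression on binary rule indicators, as below; the rule matrix and stage labels are synthetic placeholders.

```python
# Minimal sketch, assuming mined rules: encode each rule as a binary indicator per
# patient and fit an L2-penalized logistic regression to predict the CKD stage group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(12)
n_patients, n_rules = 110, 40
R = rng.integers(0, 2, size=(n_patients, n_rules))   # rule fires (1) or not (0), synthetic
y = rng.integers(0, 2, size=n_patients)              # 0 = no renal failure, 1 = renal failure (synthetic)

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(R, y)
print("training accuracy:", clf.score(R, y))
print("most predictive rules:", np.argsort(-np.abs(clf.coef_[0]))[:5])
```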
NASA Astrophysics Data System (ADS)
Sanders, B. F.; Gallegos, H. A.; Schubert, J. E.
2011-12-01
The Baldwin Hills dam-break flood and the associated structural damage are investigated in this study. The flood caused high-velocity flows exceeding 5 m/s, which destroyed 41 wood-framed residential structures, 16 of which were completely washed out. Damage is predicted by coupling a calibrated hydrodynamic flood model based on the shallow-water equations to structural damage models. The hydrodynamic and damage models are two-way coupled, so that building failure is predicted upon exceedance of a hydraulic intensity parameter, which in turn triggers a localized reduction in flow resistance that affects flood intensity predictions. Several established damage models and damage correlations reported in the literature are tested to evaluate the predictive skill for two damage states defined by destruction (Level 2) and washout (Level 3). Results show that high-velocity structural damage can be predicted with a remarkable level of skill using established damage models, but only with two-way coupling of the hydrodynamic and damage models. In contrast, when structural failure predictions have no influence on flow predictions, there is a significant reduction in predictive skill. Force-based damage models compare well with the subset of the damage models that were devised for similar types of structures. Implications for emergency planning and preparedness as well as monetary damage estimation are discussed.
Configuration of the thermal landscape determines thermoregulatory performance of ectotherms
Sears, Michael W.; Angilletta, Michael J.; Schuler, Matthew S.; Borchert, Jason; Dilliplane, Katherine F.; Stegman, Monica; Rusch, Travis W.; Mitchell, William A.
2016-01-01
Although most organisms thermoregulate behaviorally, biologists still cannot easily predict whether mobile animals will thermoregulate in natural environments. Current models fail because they ignore how the spatial distribution of thermal resources constrains thermoregulatory performance over space and time. To overcome this limitation, we modeled the spatially explicit movements of animals constrained by access to thermal resources. Our models predict that ectotherms thermoregulate more accurately when thermal resources are dispersed throughout space than when these resources are clumped. This prediction was supported by thermoregulatory behaviors of lizards in outdoor arenas with known distributions of environmental temperatures. Further, simulations showed how the spatial structure of the landscape qualitatively affects responses of animals to climate. Biologists will need spatially explicit models to predict impacts of climate change on local scales. PMID:27601639
Fire frequency in the Interior Columbia River Basin: Building regional models from fire history data
McKenzie, D.; Peterson, D.L.; Agee, James K.
2000-01-01
Fire frequency affects vegetation composition and successional pathways; thus it is essential to understand fire regimes in order to manage natural resources at broad spatial scales. Fire history data are lacking for many regions for which fire management decisions are being made, so models are needed to estimate past fire frequency where local data are not yet available. We developed multiple regression models and tree-based (classification and regression tree, or CART) models to predict fire return intervals across the interior Columbia River basin at 1-km resolution, using georeferenced fire history, potential vegetation, cover type, and precipitation databases. The models combined semiqualitative methods and rigorous statistics. The fire history data are of uneven quality; some estimates are based on only one tree, and many are not cross-dated. Therefore, we weighted the models based on data quality and performed a sensitivity analysis of the effects on the models of estimation errors that are due to lack of cross-dating. The regression models predict fire return intervals from 1 to 375 yr for forested areas, whereas the tree-based models predict a range of 8 to 150 yr. Both types of models predict latitudinal and elevational gradients of increasing fire return intervals. Examination of regional-scale output suggests that, although the tree-based models explain more of the variation in the original data, the regression models are less likely to produce extrapolation errors. Thus, the models serve complementary purposes in elucidating the relationships among fire frequency, the predictor variables, and spatial scale. The models can provide local managers with quantitative information and provide data to initialize coarse-scale fire-effects models, although predictions for individual sites should be treated with caution because of the varying quality and uneven spatial coverage of the fire history database. The models also demonstrate the integration of qualitative and quantitative methods when requisite data for fully quantitative models are unavailable. They can be tested by comparing new, independent fire history reconstructions against their predictions and can be continually updated, as better fire history data become available.
Mondal Roy, Sutapa
2018-08-01
Quantum chemical descriptors based on density functional theory (DFT) are applied to predict the biological activity (log IC50) of one class of acyl-CoA:cholesterol O-acyltransferase (ACAT) inhibitors, viz. aminosulfonyl ureas. ACAT inhibitors are very effective agents for the reduction of triglyceride and cholesterol levels in the human body. Successful two-parameter quantitative structure-activity relationship (QSAR) models are developed with a combination of relevant global and local DFT-based descriptors for predicting the biological activity of aminosulfonyl ureas. The global descriptors, the electron affinity of the ACAT inhibitors (EA) and/or the charge transfer (ΔN) between inhibitors and model biosystems (NA bases and DNA base pairs), along with the local group atomic charge on the sulfonyl moiety (∑Q(Sul)) of the inhibitors, reveal more than 90% efficacy of the selected descriptors for predicting the experimental log IC50 values. Copyright © 2018 Elsevier Ltd. All rights reserved.
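A two-descriptor QSAR fit of the kind described can be sketched with ordinary least squares, as below; the descriptor values and activities are synthetic, not the reported DFT results.

```python
# Illustrative sketch of a two-descriptor QSAR fit: ordinary least squares of log(IC50)
# on an electron-affinity descriptor and a sulfonyl group charge. All values synthetic.
import numpy as np

rng = np.random.default_rng(13)
n = 25
ea = rng.normal(1.2, 0.3, size=n)                  # electron affinity (eV), synthetic
q_sul = rng.normal(-0.8, 0.1, size=n)              # sum of atomic charges on sulfonyl moiety, synthetic
log_ic50 = 0.9 * ea + 2.0 * q_sul + rng.normal(scale=0.1, size=n)

X = np.column_stack([np.ones(n), ea, q_sul])
coef, *_ = np.linalg.lstsq(X, log_ic50, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((log_ic50 - pred) ** 2) / np.sum((log_ic50 - log_ic50.mean()) ** 2)
print("coefficients:", coef, "R^2:", round(r2, 3))
```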
Ryser, Marc D; Lee, Walter T; Ready, Neal E; Leder, Kevin Z; Foo, Jasmine
2016-12-15
High rates of local recurrence in tobacco-related head and neck squamous cell carcinoma (HNSCC) are commonly attributed to unresected fields of precancerous tissue. Because they are not easily detectable at the time of surgery without additional biopsies, there is a need for noninvasive methods to predict the extent and dynamics of these fields. Here, we developed a spatial stochastic model of tobacco-related HNSCC at the tissue level and calibrated the model using a Bayesian framework and population-level incidence data from the Surveillance, Epidemiology, and End Results (SEER) registry. Probabilistic model analyses were performed to predict the field geometry at time of diagnosis, and model predictions of age-specific recurrence risks were tested against outcome data from SEER. The calibrated models predicted a strong dependence of the local field size on age at diagnosis, with a doubling of the expected field diameter between ages at diagnosis of 50 and 90 years, respectively. Similarly, the probability of harboring multiple, clonally unrelated fields at the time of diagnosis was found to increase substantially with patient age. On the basis of these findings, we hypothesized a higher recurrence risk in older than in younger patients when treated by surgery alone; we successfully tested this hypothesis using age-stratified outcome data. Further clinical studies are needed to validate the model predictions in a patient-specific setting. This work highlights the importance of spatial structure in models of epithelial carcinogenesis and suggests that patient age at diagnosis may be a critical predictor of the size and multiplicity of precancerous lesions. Cancer Res; 76(24); 7078-88. ©2016 AACR. ©2016 American Association for Cancer Research.
Ryser, Marc D.; Lee, Walter T.; Ready, Neal E.; Leder, Kevin Z.; Foo, Jasmine
2017-01-01
High rates of local recurrence in tobacco-related head and neck squamous cell carcinoma (HNSCC) are commonly attributed to unresected fields of precancerous tissue. Since they are not easily detectable at the time of surgery without additional biopsies, there is a need for non-invasive methods to predict the extent and dynamics of these fields. Here we developed a spatial stochastic model of tobacco-related HNSCC at the tissue level and calibrated the model using a Bayesian framework and population-level incidence data from the Surveillance, Epidemiology, and End Results (SEER) registry. Probabilistic model analyses were performed to predict the field geometry at time of diagnosis, and model predictions of age-specific recurrence risks were tested against outcome data from SEER. The calibrated models predicted a strong dependence of the local field size on age at diagnosis, with a doubling of the expected field diameter between ages at diagnosis of 50 and 90 years, respectively. Similarly, the probability of harboring multiple, clonally unrelated fields at the time of diagnosis was found to increase substantially with patient age. Based on these findings, we hypothesized a higher recurrence risk in older compared to younger patients when treated by surgery alone; we successfully tested this hypothesis using age-stratified outcome data. Further clinical studies are needed to validate the model predictions in a patient-specific setting. This work highlights the importance of spatial structure in models of epithelial carcinogenesis, and suggests that patient age at diagnosis may be a critical predictor of the size and multiplicity of precancerous lesions. Major findings: Patient age at diagnosis was found to be a critical predictor of the size and multiplicity of precancerous lesions. This finding challenges the current one-size-fits-all approach to surgical excision margins. PMID:27913438
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao; Xu, Zhijie; Lai, Canhai
The standard two-film theory (STFT) is a diffusion-based mechanism that can be used to describe gas mass transfer across liquid film. Fundamental assumptions of the STFT impose serious limitations on its ability to predict mass transfer coefficients. To better understand gas absorption across liquid film in practical situations, a multiphase computational fluid dynamics (CFD) model fully equipped with mass transport and chemistry capabilities has been developed for solvent-based carbon dioxide (CO2) capture to predict the CO2 mass transfer coefficient in a wetted wall column. The hydrodynamics is modeled using a volume of fluid method, and the diffusive and reactive mass transfer between the two phases is modeled by adopting a one-fluid formulation. We demonstrate that the proposed CFD model can naturally account for the influence of many important factors on the overall mass transfer that cannot be quantitatively explained by the STFT, such as the local variation in fluid velocities and properties, flow instabilities, and complex geometries. The CFD model also can predict the local mass transfer coefficient variation along the column height, which the STFT typically does not consider.
NASA Technical Reports Server (NTRS)
Ahumada, Albert J.; Beard, B. L.; Stone, Leland (Technical Monitor)
1997-01-01
We have been developing a simplified spatial-temporal discrimination model similar to our simplified spatial model in that masking is assumed to be a function of the local visible contrast energy. The overall spatial-temporal sensitivity of the model is calibrated to predict the detectability of targets on a uniform background. To calibrate the spatial-temporal integration functions that define local visible contrast energy, spatial-temporal masking data are required. Observer thresholds were measured (2IFC) for the detection of a 12 msec target stimulus in the presence of a 700 msec mask. Targets were 1, 3 or 9 c/deg sine wave gratings. Masks were either one of these gratings or two of them combined. The target was presented in 17 temporal positions with respect to the mask, including positions before, during and after the mask. Peak masking was found near mask onset and offset for 1 and 3 c/deg targets, while masking effects were more nearly uniform during the mask for the 9 c/deg target. As in the purely spatial case, the simplified model can not predict all the details of masking as a function of masking component spatial frequencies, but overall the prediction errors are small.
Robust face alignment under occlusion via regional predictive power estimation.
Heng Yang; Xuming He; Xuhui Jia; Patras, Ioannis
2015-08-01
Face alignment has been well studied in recent years, however, when a face alignment model is applied on facial images with heavy partial occlusion, the performance deteriorates significantly. In this paper, instead of training an occlusion-aware model with visibility annotation, we address this issue via a model adaptation scheme that uses the result of a local regression forest (RF) voting method. In the proposed scheme, the consistency of the votes of the local RF in each of several oversegmented regions is used to determine the reliability of predicting the location of the facial landmarks. The latter is what we call regional predictive power (RPP). Subsequently, we adapt a holistic voting method (cascaded pose regression based on random ferns) by putting weights on the votes of each fern according to the RPP of the regions used in the fern tests. The proposed method shows superior performance over existing face alignment models in the most challenging data sets (COFW and 300-W). Moreover, it can also estimate with high accuracy (72.4% overlap ratio) which image areas belong to the face or nonface objects, on the heavily occluded images of the COFW data set, without explicit occlusion modeling.
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.
Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T
2016-12-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity; the scheme can be used with an arbitrary number of point-neuron network populations and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
The predictive power of local properties of financial networks
NASA Astrophysics Data System (ADS)
Caraiani, Petre
2017-01-01
The literature on analyzing the dynamics of financial networks has focused so far on the predictive power of global network measures like entropy or the index cohesive force. In this paper, I show that local network properties have similar predictive power. I focus on key network measures like the average path length, average degree and clustering coefficient, and also consider the diameter and the s-metric. Using Granger causality tests, I show that some of these measures have statistically significant predictive power with respect to the dynamics of the aggregate stock market. The average path length is the most robust with respect to the data frequency used or the specification (index or growth rate). Most measures are found to have predictive power only at the monthly frequency. Further evidence supporting this view is provided through a simple regression model.
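A Granger-causality test of a network measure against market returns can be sketched as below with statsmodels; the two series are synthetic stand-ins for the average path length and the aggregate market index.

```python
# Sketch under stated assumptions: test whether a network measure (e.g. average path
# length) Granger-causes market returns. Both monthly series below are synthetic.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(14)
n = 200
path_length = np.cumsum(rng.normal(size=n)) * 0.01 + 3.0     # synthetic average path length
returns = 0.3 * np.roll(path_length, 1) + rng.normal(scale=0.5, size=n)
returns[0] = 0.0

# column order: [effect, candidate cause]; tests lags 1..3 and prints the statistics
data = np.column_stack([returns, path_length])
results = grangercausalitytests(data, maxlag=3)
```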
Steen, P.J.; Zorn, T.G.; Seelbach, P.W.; Schaeffer, J.S.
2008-01-01
Traditionally, fish habitat requirements have been described from local-scale environmental variables. However, recent studies have shown that studying landscape-scale processes improves our understanding of what drives species assemblages and distribution patterns across the landscape. Our goal was to learn more about constraints on the distribution of Michigan stream fish by examining landscape-scale habitat variables. We used classification trees and landscape-scale habitat variables to create and validate presence-absence models and relative abundance models for Michigan stream fishes. We developed 93 presence-absence models that on average were 72% correct in making predictions for an independent data set, and we developed 46 relative abundance models that were 76% correct in making predictions for independent data. The models were used to create statewide predictive distribution and abundance maps that have the potential to be used for a variety of conservation and scientific purposes. © Copyright by the American Fisheries Society 2008.
An intermittency model for predicting roughness induced transition
NASA Astrophysics Data System (ADS)
Ge, Xuan; Durbin, Paul
2014-11-01
An extended model for roughness-induced transition is proposed based on an intermittency transport equation for RANS modeling formulated in local variables. To predict roughness effects in the fully turbulent boundary layer, published boundary conditions for k and ω are used, which depend on the equivalent sand grain roughness height, and account for the effective displacement of wall distance origin. Similarly in our approach, wall distance in the transition model for smooth surfaces is modified by an effective origin, which depends on roughness. Flat plate test cases are computed to show that the proposed model is able to predict the transition onset in agreement with a data correlation of transition location versus roughness height, Reynolds number, and inlet turbulence intensity. Experimental data for a turbine cascade are compared with the predicted results to validate the applicability of the proposed model. Supported by NSF Award Number 1228195.
Anazawa, Takayuki; Paruch, Jennifer L; Miyata, Hiroaki; Gotoh, Mitsukazu; Ko, Clifford Y; Cohen, Mark E; Hirahara, Norimichi; Zhou, Lynn; Konno, Hiroyuki; Wakabayashi, Go; Sugihara, Kenichi; Mori, Masaki
2015-12-01
International collaboration is important in healthcare quality evaluation; however, few international comparisons of general surgery outcomes have been accomplished. Furthermore, predictive model application for risk stratification has not been internationally evaluated. The National Clinical Database (NCD) in Japan was developed in collaboration with the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP), with a goal of creating a standardized surgery database for quality improvement. The study aimed to compare the consistency and impact of risk factors of 3 major gastroenterological surgical procedures in Japan and the United States (US) using web-based prospective data entry systems: right hemicolectomy (RH), low anterior resection (LAR), and pancreaticoduodenectomy (PD). Data from NCD and ACS-NSQIP, collected over 2 years, were examined. Logistic regression models were used for predicting 30-day mortality for both countries. Models were exchanged and evaluated to determine whether the models built for one population were accurate for the other population. We obtained data for 113,980 patients; 50,501 (Japan: 34,638; US: 15,863), 42,770 (Japan: 35,445; US: 7325), and 20,709 (Japan: 15,527; US: 5182) underwent RH, LAR, and PD, respectively. Thirty-day mortality rates for RH were 0.76% (Japan) and 1.88% (US); rates for LAR were 0.43% versus 1.08%; and rates for PD were 1.35% versus 2.57%. Patient background, comorbidities, and practice style were different between Japan and the US. In the models, the odds ratio for each variable was similar between NCD and ACS-NSQIP. Local risk models could predict mortality using local data, but could not accurately predict mortality using data from other countries. We demonstrated the feasibility and efficacy of the international collaborative research between Japan and the US, but found that local risk models remain essential for quality improvement.
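A minimal sketch of the model-exchange idea: fit a 30-day mortality logistic model on one registry and apply it to another. The predictor variables, coefficients, and synthetic registries below are illustrative assumptions, not NCD or ACS-NSQIP data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def synth_registry(n, base_rate):
    """Synthetic registry: age, a comorbidity flag, and an emergency-case flag."""
    X = np.column_stack([rng.normal(70, 10, n),      # age (illustrative)
                         rng.integers(0, 2, n),      # e.g. ASA >= 3 flag
                         rng.integers(0, 2, n)])     # e.g. emergency case flag
    logit = -6 + 0.04 * X[:, 0] + 0.8 * X[:, 1] + 1.0 * X[:, 2] + np.log(base_rate)
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
    return X, y.astype(int)

X_a, y_a = synth_registry(30000, 1.0)   # stand-in for one country's registry
X_b, y_b = synth_registry(8000, 2.0)    # stand-in for the other, with higher mortality

model_a = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print("local AUC   :", roc_auc_score(y_a, model_a.predict_proba(X_a)[:, 1]))
print("transfer AUC:", roc_auc_score(y_b, model_a.predict_proba(X_b)[:, 1]))
# Calibration (not just discrimination) is what typically degrades across populations.
```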
The Prediction of Noise Due to Jet Turbulence Convecting Past Flight Vehicle Trailing Edges
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2014-01-01
High intensity acoustic radiation occurs when turbulence convects past airframe trailing edges. A mathematical model is developed to predict this acoustic radiation. The model is dependent on the local flow and turbulent statistics above the trailing edge of the flight vehicle airframe. These quantities are dependent on the jet and flight vehicle Mach numbers and jet temperature. A term in the model approximates the turbulent statistics of single-stream heated jet flows and is developed based upon measurement. The developed model is valid for a wide range of jet Mach numbers, jet temperature ratios, and flight vehicle Mach numbers. The model predicts traditional trailing edge noise if the jet is not interacting with the airframe. Predictions of mean-flow quantities and the cross-spectrum of static pressure near the airframe trailing edge are compared with measurement. Finally, predictions of acoustic intensity are compared with measurement and the model is shown to accurately capture the phenomenon.
Gross, Eliza L.; Low, Dennis J.
2013-01-01
Logistic regression models were created to predict and map the probability of elevated arsenic concentrations in groundwater statewide in Pennsylvania and in three intrastate regions to further improve predictions for those three regions (glacial aquifer system, Gettysburg Basin, Newark Basin). Although the Pennsylvania and regional predictive models retained some different variables, they have common characteristics that can be grouped by (1) geologic and soils variables describing arsenic sources and mobilizers, (2) geochemical variables describing the geochemical environment of the groundwater, and (3) locally specific variables that are unique to each of the three regions studied and not applicable to statewide analysis. Maps of Pennsylvania and the three intrastate regions were produced that illustrate that areas most at risk are those with geology and soils capable of functioning as an arsenic source or mobilizer and geochemical groundwater conditions able to facilitate redox reactions. The models have limitations because they may not characterize areas that have localized controls on arsenic mobility. The probability maps associated with this report are intended for regional-scale use and may not be accurate for use at the field scale or when considering individual wells.
Determination of riverbank erosion probability using Locally Weighted Logistic Regression
NASA Astrophysics Data System (ADS)
Ioannidou, Elena; Flori, Aikaterini; Varouchakis, Emmanouil A.; Giannakis, Georgios; Vozinaki, Anthi Eirini K.; Karatzas, George P.; Nikolaidis, Nikolaos
2015-04-01
Riverbank erosion is a natural geomorphologic process that affects the fluvial environment. The most important issue concerning riverbank erosion is the identification of vulnerable locations. An alternative to the usual hydrodynamic models for predicting vulnerable locations is to quantify the probability of erosion occurrence. This can be achieved by identifying the underlying relations between riverbank erosion and the geomorphological or hydrological variables that prevent or stimulate erosion, so that erosion can be described by a regression model using independent variables considered to affect the erosion process. The impact of such variables may vary spatially; therefore, a non-stationary regression model is preferred over a stationary equivalent, and Locally Weighted Regression (LWR) is proposed as a suitable choice. LWR allows model parameters to vary over space in order to reflect spatial heterogeneity by assigning distance-based weights to the local independent variables. Combined with logistic regression, which models the probability of a categorical (e.g., binary) outcome as a logistic function of one or more predictor variables, this yields Locally Weighted Logistic Regression (LWLR), which predicts the presence or absence of erosion for any value of the independent variables (a code sketch of this locally weighted step is given below, after the acknowledgements). The erosion occurrence probability can be evaluated in conjunction with the model deviance of the independent variables tested; the most straightforward measure of goodness of fit is the G statistic, a simple and effective way to assess the efficiency of the logistic regression model and the reliability of each independent variable. The developed statistical model is applied to the Koiliaris River Basin on the island of Crete, Greece. Two datasets of river bank slope, river cross-section width and indications of erosion were available for the analysis (12 and 8 locations). Two different types of spatial dependence functions, exponential and tricubic, were examined to determine the local spatial dependence of the independent variables at the measurement locations. The results show a significant improvement when the tricubic function is applied, as the erosion probability is accurately predicted at all eight validation locations. Results for the model deviance show that cross-section width is more important than bank slope in the estimation of erosion probability along the Koiliaris riverbanks. The proposed statistical model is a useful tool that quantifies the erosion probability along the riverbanks and can be used to assist in managing erosion and flooding events. Acknowledgements: This work is part of an on-going THALES project (CYBERSENSORS - High Frequency Monitoring System for Integrated Water Resources Management of Rivers).
The project has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: THALES. Investing in knowledge society through the European Social Fund.
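A minimal sketch of the locally weighted logistic step described in this record, using a tricubic kernel. The coordinates, bank slopes, widths, erosion indicators, and bandwidth below are hypothetical placeholders, not the Koiliaris data or the authors' exact formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tricubic(d, bandwidth):
    """Tricubic weight: (1 - u^3)^3 for u = |d|/bandwidth clipped to [0, 1]."""
    u = np.clip(np.abs(d) / bandwidth, 0.0, 1.0)
    return (1.0 - u**3) ** 3

def lwlr_predict(X, y, coords, x_new, coord_new, bandwidth=6.0):
    """Fit a logistic model weighted by distance to the prediction location."""
    w = np.maximum(tricubic(np.linalg.norm(coords - coord_new, axis=1), bandwidth), 1e-3)
    model = LogisticRegression().fit(X, y, sample_weight=w)
    return model.predict_proba(x_new.reshape(1, -1))[0, 1]

# Toy data standing in for the 12 calibration locations along the river.
rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, (12, 2))                 # along-river coordinates (km)
X = np.column_stack([rng.uniform(10, 60, 12),        # bank slope (deg), illustrative
                     rng.uniform(2, 20, 12)])        # cross-section width (m)
y = (X[:, 1] > 10).astype(int)                       # toy erosion indicator

p = lwlr_predict(X, y, coords, np.array([30.0, 15.0]), coords[0])
print("erosion probability at validation site:", round(p, 2))
```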
Unexpected but Incidental Positive Outcomes Predict Real-World Gambling.
Otto, A Ross; Fleming, Stephen M; Glimcher, Paul W
2016-03-01
Positive mood can affect a person's tendency to gamble, possibly because positive mood fosters unrealistic optimism. At the same time, unexpected positive outcomes, often called prediction errors, influence mood. However, a linkage between positive prediction errors-the difference between expected and obtained outcomes-and consequent risk taking has yet to be demonstrated. Using a large data set of New York City lottery gambling and a model inspired by computational accounts of reward learning, we found that people gamble more when incidental outcomes in the environment (e.g., local sporting events and sunshine) are better than expected. When local sports teams performed better than expected, or a sunny day followed a streak of cloudy days, residents gambled more. The observed relationship between prediction errors and gambling was ubiquitous across the city's socioeconomically diverse neighborhoods and was specific to sports and weather events occurring locally in New York City. Our results suggest that unexpected but incidental positive outcomes influence risk taking. © The Author(s) 2016.
A structural model decomposition framework for systems health management
NASA Astrophysics Data System (ADS)
Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
A Structural Model Decomposition Framework for Systems Health Management
NASA Technical Reports Server (NTRS)
Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino
2013-01-01
Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
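A minimal, generic sketch of structural decomposition, not the authors' algorithm: starting from a variable of interest, walk an equation-variable structure backwards to collect the smallest submodel needed to compute it, with measured signals cutting the dependency chain. The three-tank structure and variable names are illustrative assumptions.

```python
from collections import deque

# Hypothetical three-tank structure: each equation lists the variable it computes
# and the variables it depends on.
equations = {
    "e1": ("h1", {"u", "q12"}),      # tank 1 level from inflow u and flow q12
    "e2": ("q12", {"h1", "h2"}),     # flow between tanks 1 and 2
    "e3": ("h2", {"q12", "q23"}),
    "e4": ("q23", {"h2", "h3"}),
    "e5": ("h3", {"q23"}),
}
measured = {"u", "h2"}               # locally available signals

def extract_submodel(target):
    """Collect the equations needed to compute `target` given the measured signals."""
    needed, submodel = deque([target]), set()
    while needed:
        var = needed.popleft()
        if var in measured:
            continue                 # measured variables cut the dependency chain
        for eq, (out, deps) in equations.items():
            if out == var and eq not in submodel:
                submodel.add(eq)
                needed.extend(deps)
    return submodel

print(extract_submodel("h1"))        # e.g. {'e1', 'e2'} when h2 is measured
```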
Moghtadaei, Motahareh; Hashemi Golpayegani, Mohammad Reza; Malekzadeh, Reza
2013-02-07
Identification of squamous dysplasia and esophageal squamous cell carcinoma (ESCC) is of great importance in the prevention of cancer incidence. Computer-aided algorithms can be very useful for identifying people at higher risk of squamous dysplasia and ESCC; such methods can limit clinical screenings to people at higher risk. Different regression methods have been used to predict ESCC and dysplasia. In this paper, a Fuzzy Neural Network (FNN) model is selected for ESCC and dysplasia prediction. The inputs to the classifier are the risk factors. Since the relation between risk factors in the tumor system has a complex nonlinear behavior, compared to most ordinary data, the cost function of its model can have more local optima, which highlights the need for global optimization methods. The proposed method in this paper is a Chaotic Optimization Algorithm (COA) followed by the common Error Back Propagation (EBP) local method. Since the model has many parameters, we use a strategy to reduce the dependency among parameters caused by the chaotic series generator; this dependency was not considered in previous COA methods. The algorithm is compared with the logistic regression model as the latest successful method of ESCC and dysplasia prediction. The results show more precise predictions, with lower mean and variance of error. Copyright © 2012 Elsevier Ltd. All rights reserved.
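A minimal sketch of the two-stage idea of a chaotic global search followed by local gradient refinement, on a toy cost surface rather than FNN parameters. Using one independent logistic-map sequence per parameter is one simple way to reduce dependency among parameters; it is an assumption here, not necessarily the authors' strategy.

```python
import numpy as np

def cost(w):                                    # toy multimodal cost function
    return np.sum(w**2) + 2.0 * np.sum(np.sin(3.0 * w) ** 2)

def chaotic_search(dim, iters=500, scale=3.0):
    """Global search driven by independent logistic-map sequences per parameter."""
    x = np.random.default_rng(4).uniform(0.1, 0.9, dim)   # one chaotic state per weight
    best_w, best_c = None, np.inf
    for _ in range(iters):
        x = 4.0 * x * (1.0 - x)                 # logistic map in its chaotic regime
        w = scale * (2.0 * x - 1.0)             # map [0,1] onto the search box
        c = cost(w)
        if c < best_c:
            best_w, best_c = w.copy(), c
    return best_w

def gradient_refine(w, lr=0.01, steps=200, eps=1e-5):
    """Stand-in for error back-propagation: numerical gradient descent."""
    for _ in range(steps):
        g = np.array([(cost(w + eps * e) - cost(w - eps * e)) / (2 * eps)
                      for e in np.eye(len(w))])
        w = w - lr * g
    return w

w0 = chaotic_search(dim=4)
w1 = gradient_refine(w0)
print("after chaotic search:", round(cost(w0), 3), "after refinement:", round(cost(w1), 3))
```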
Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal models such as the Local Lymph Node Assay (LLNA) and the Guinea Pig Maximization Test (GPMT). In recent years, EU regulations have provided a strong incentiv...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hesheng, E-mail: hesheng@umich.edu; Feng, Mary; Frey, Kirk A.
2013-08-01
Purpose: High-dose radiation therapy (RT) for intrahepatic cancer is limited by the development of liver injury. This study investigated whether regional hepatic function assessed before and during the course of RT using 99mTc-labeled iminodiacetic acid (IDA) single photon emission computed tomography (SPECT) could predict regional liver function reserve after RT. Methods and Materials: Fourteen patients treated with RT for intrahepatic cancers underwent dynamic 99mTc-IDA SPECT scans before RT, during, and 1 month after completion of RT. Indocyanine green (ICG) tests, a measure of overall liver function, were performed within 1 day of each scan. Three-dimensional volumetric hepatic extraction fraction (HEF) images of the liver were estimated by deconvolution analysis. After coregistration of the CT/SPECT and the treatment planning CT, HEF dose–response functions during and after RT were generated. The volumetric mean of the HEFs in the whole liver was correlated with ICG clearance time. Three models, dose, priori, and adaptive models, were developed using multivariate linear regression to assess whether the regional HEFs measured before and during RT helped predict regional hepatic function after RT. Results: The mean of the volumetric liver HEFs was significantly correlated with ICG clearance half-life time (r=−0.80, P<.0001), for all time points. Linear correlations between local doses and regional HEFs 1 month after RT were significant in 12 patients. In the priori model, regional HEF after RT was predicted by the planned dose and regional HEF assessed before RT (R=0.71, P<.0001). In the adaptive model, regional HEF after RT was predicted by regional HEF reassessed during RT and the remaining planned local dose (R=0.83, P<.0001). Conclusions: 99mTc-IDA SPECT obtained during RT could be used to assess regional hepatic function and helped predict post-RT regional liver function reserve. This could support individualized adaptive radiation treatment strategies to maximize tumor control and minimize the risk of liver damage.
Wang, Hesheng; Feng, Mary; Frey, Kirk A; Ten Haken, Randall K; Lawrence, Theodore S; Cao, Yue
2013-08-01
High-dose radiation therapy (RT) for intrahepatic cancer is limited by the development of liver injury. This study investigated whether regional hepatic function assessed before and during the course of RT using 99mTc-labeled iminodiacetic acid (IDA) single photon emission computed tomography (SPECT) could predict regional liver function reserve after RT. Fourteen patients treated with RT for intrahepatic cancers underwent dynamic 99mTc-IDA SPECT scans before RT, during, and 1 month after completion of RT. Indocyanine green (ICG) tests, a measure of overall liver function, were performed within 1 day of each scan. Three-dimensional volumetric hepatic extraction fraction (HEF) images of the liver were estimated by deconvolution analysis. After coregistration of the CT/SPECT and the treatment planning CT, HEF dose-response functions during and after RT were generated. The volumetric mean of the HEFs in the whole liver was correlated with ICG clearance time. Three models, dose, priori, and adaptive models, were developed using multivariate linear regression to assess whether the regional HEFs measured before and during RT helped predict regional hepatic function after RT. The mean of the volumetric liver HEFs was significantly correlated with ICG clearance half-life time (r=-0.80, P<.0001), for all time points. Linear correlations between local doses and regional HEFs 1 month after RT were significant in 12 patients. In the priori model, regional HEF after RT was predicted by the planned dose and regional HEF assessed before RT (R=0.71, P<.0001). In the adaptive model, regional HEF after RT was predicted by regional HEF reassessed during RT and the remaining planned local dose (R=0.83, P<.0001). 99mTc-IDA SPECT obtained during RT could be used to assess regional hepatic function and helped predict post-RT regional liver function reserve. This could support individualized adaptive radiation treatment strategies to maximize tumor control and minimize the risk of liver damage. Published by Elsevier Inc.
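A minimal sketch of the "priori" and "adaptive" regression ideas with synthetic voxel-level data: predict post-RT regional HEF from planned dose plus an earlier HEF map. The dose ranges, HEF values, noise levels, and the factor 0.5 standing in for "remaining planned dose" are assumptions, not the study's data or coefficients.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n_vox = 2000
dose = rng.uniform(0, 60, n_vox)                     # planned local dose (Gy), toy
hef_pre = rng.uniform(0.3, 0.9, n_vox)               # regional HEF before RT, toy
hef_mid = hef_pre - 0.002 * dose * rng.uniform(0.5, 1.5, n_vox)   # during RT (toy)
hef_post = hef_pre - 0.004 * dose + rng.normal(0, 0.03, n_vox)    # after RT (toy)

# "Priori" model: planned dose + pre-RT HEF; "adaptive" model: remaining planned
# dose (assumed here to be half the total) + mid-RT HEF.
priori = LinearRegression().fit(np.column_stack([dose, hef_pre]), hef_post)
adaptive = LinearRegression().fit(np.column_stack([dose * 0.5, hef_mid]), hef_post)

for name, m, X in [("priori", priori, np.column_stack([dose, hef_pre])),
                   ("adaptive", adaptive, np.column_stack([dose * 0.5, hef_mid]))]:
    r = np.corrcoef(m.predict(X), hef_post)[0, 1]
    print(name, "multiple correlation R ≈", round(r, 2))
```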
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, M.; Bowman, B.; Branson, J.
The dominant error source in the force models used to predict low perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal, semidiurnal and terdiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low perigee satellites.
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent
The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap, to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
Predators modify biogeographic constraints on species distributions in an insect metacommunity.
Grainger, Tess Nahanni; Germain, Rachel M; Jones, Natalie T; Gilbert, Benjamin
2017-03-01
Theory describing the positive effects of patch size and connectivity on diversity in fragmented systems has stimulated a large body of empirical work, yet predicting when and how local species interactions mediate these responses remains challenging. We used insects that specialize on milkweed plants as a model metacommunity to investigate how local predation alters the effects of biogeographic constraints on species distributions. Species-specific dispersal ability and susceptibility to predation were used to predict when patch size and connectivity should shape species distributions, and when these should be modified by local predator densities. We surveyed specialist herbivores and their predators in milkweed patches in two matrix types, a forest and an old field. Predator-resistant species showed the predicted direct positive effects of patch size and connectivity on occupancy rates. For predator-susceptible species, predators consistently altered the impact of biogeographic constraints, rather than acting independently. Finally, differences between matrix types in species' responses and overall occupancy rates indicate a potential role of the inter-patch environment in mediating the joint effects of predators and spatial drivers. Together, these results highlight the importance of local top-down pressure in mediating classic biogeographic relationships, and demonstrate how species-specific responses to local and regional constraints can be used to predict these effects. © 2017 by the Ecological Society of America.
Development of a Predictive Corrosion Model Using Locality-Specific Corrosion Indices
2017-09-12
Statistical data analysis methods and an algorithm development method were compiled into an executable program that uses mathematical models of materials degradation and statistical calculations.
Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2001-01-01
A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, E.; Ganti, V. K.; Dietrich, W. E.
2009-12-01
Sediment transport on hillslopes can be thought of as a hopping process, where the sediment moves in a series of jumps. A wide range of processes shape hillslopes and can move sediment over large distances in the downslope direction, resulting in a broad tail in the probability density function (PDF) of hopping lengths. Here, we argue that such a broad-tailed distribution calls for a non-local computation of sediment flux, where the sediment flux is not only a function of local topographic quantities but is an integral flux which takes into account the upslope topographic “memory” of the point of interest. We encapsulate this non-local behavior into a simple fractional diffusive model that involves fractional (non-integer) derivatives. We present theoretical predictions from this nonlocal model and demonstrate a nonlinear dependence of sediment flux on local gradient, consistent with observations. Further, we demonstrate that the non-local model naturally eliminates the scale-dependence exhibited by any local (linear or nonlinear) sediment transport model. An extension to a 2-D framework, where the fractional derivative can be cast into a mixture of directional derivatives, is discussed together with the implications of introducing non-locality into existing landscape evolution models.
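A minimal sketch of a nonlocal hillslope flux: local gradients upslope of a point are weighted by a slowly decaying power-law kernel (heavy-tailed hop lengths), in contrast to a purely local flux q = -K dz/dx. The kernel form, exponent, and profile are assumptions for illustration, not the authors' exact fractional scheme.

```python
import numpy as np

def nonlocal_flux(z, dx, K=1.0, alpha=0.7):
    """Power-law weighted sum of upslope gradients; alpha in (0,1) sets the tail."""
    grad = np.gradient(z, dx)
    q = np.zeros_like(z)
    for i in range(1, len(z)):
        lags = np.arange(1, i + 1)                    # distances to upslope cells
        weights = lags ** (-(1.0 + alpha))            # heavy-tailed kernel (assumed form)
        weights /= weights.sum()
        q[i] = -K * np.sum(weights * grad[i - lags])  # memory of upslope gradients
    return q

x = np.linspace(0, 100, 201)
z = 50 * np.exp(-x / 40)                              # toy hillslope profile
dx = x[1] - x[0]
q_local = -1.0 * np.gradient(z, dx)                   # purely local flux for comparison
q_nonlocal = nonlocal_flux(z, dx)
print("local vs nonlocal flux at mid-slope:",
      round(q_local[100], 3), round(q_nonlocal[100], 3))
```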
NASA Astrophysics Data System (ADS)
Ueberham, Maximilian; Hertel, Daniel; Schlink, Uwe
2017-04-01
Deeper knowledge about urban climate conditions is becoming more important in the context of climate change, urban population growth, urban compaction and continued surface sealing. In particular, the urban heat island (UHI) effect is one of the most significant human-induced alterations of Earth's surface climate. Accordingly, the frequency of heat waves in cities will increase, with deep impacts on personal thermal comfort, human health and the local residential quality of citizens. The UHI can be very heterogeneous within a city, and research needs to focus more on the neighborhood-scale perspective to gain further insights into the heat burden of individuals. However, up to now, little is known about local variations in the thermal environment and personal exposure loads. To monitor these processes and their impact on individuals, improved monitoring approaches are crucial, complementing data recorded at conventional fixed stations. We therefore emphasize the importance of micro-meteorological modelling and mobile measurements to shed new light on the nexus of urban human-climate interactions. Contributing to this research, we jointly present the approaches of our two PhD projects. First, we illustrate, on the basis of an example site, how local thermal conditions in an urban district can be simulated and predicted by a micro-meteorological model. Second, we highlight the potential of personal exposure measurements based on an evaluation of mobile micro-sensing devices (MSDs), and analyze and explain differences between model predictions and mobile records. For the examination of local thermal conditions we calculated ENVI-met simulations within the "Bayerischer Bahnhof" quarter in Leipzig (Saxony, Germany; 51°20', 12°22'). To capture the maximum temperature contrasts within the diverse built-up structures we chose a hot summer day (25 Aug 2016) under autochthonous weather conditions. From these simulations we analyzed a UHI effect between the model core (urban area) and the surrounding nesting area (rural area). In preparation for the outdoor application of mobile MSDs, we tested the accuracy and performance of several MSDs against reliable, sophisticated reference devices under laboratory conditions. We found that variations mainly depend on device design and technology (e.g., active/passive ventilation). The standard deviation of the temperature records was quite stable over the whole range of values, and the MSDs proved to be applicable for the purpose of our study. In conclusion, the benefits of integrating mobile data and micrometeorological predictions are manifold. Mobile data can be used to investigate personal exposure in the context of heat stress and to verify and train micrometeorological models. Conversely, model predictions can identify local areas of special climatic interest where additional mobile measurements would be beneficial, providing new information for mitigation and adaptation actions.
Cosmological velocity correlations - Observations and model predictions
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Davis, Marc; Strauss, Michael A.; White, Simon D. M.; Yahil, Amos
1989-01-01
By applying the simple statistics presented here for two-point cosmological peculiar velocity-correlation measurements to the actual data sets of the Local Supercluster spiral galaxy sample of Aaronson et al. (1982) and the elliptical galaxy sample of Burstein et al. (1987), as well as to the velocity field predicted by the distribution of IRAS galaxies, a coherence length of 1100-1600 km/sec is obtained. Coherence length is defined as the separation at which the correlations drop to half their zero-lag value. These results are compared with predictions from two models of large-scale structure formation: that of cold dark matter and that of baryon isocurvature proposed by Peebles (1980). N-body simulations of these models are performed to check the linear theory predictions and measure sampling fluctuations.
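A minimal sketch of estimating a two-point velocity correlation and the coherence length at which it falls to half its zero-lag value. This uses full 3-D velocity dot products on a synthetic field, a simplification of the actual statistic; positions, amplitudes, and binning are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
pos = rng.uniform(0, 8000, (n, 3))                        # positions (km/s units, toy)
v = rng.normal(0, 100, (n, 3)) + 200 * np.sin(pos / 2000) # toy coherent + random field

sep = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
np.fill_diagonal(sep, np.inf)                             # exclude self-pairs
prod = np.einsum("id,jd->ij", v, v)                       # pairwise velocity dot products

bins = np.arange(0, 6000, 500)
psi = np.array([prod[(sep >= lo) & (sep < lo + 500)].mean() for lo in bins])
zero_lag = np.einsum("id,id->i", v, v).mean()

below = np.where(psi < 0.5 * zero_lag)[0]
print("coherence length ≈", bins[below[0]] if below.size else "not reached", "km/s")
```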
NASA Astrophysics Data System (ADS)
Casini, Leonardo; Funedda, Antonio
2014-09-01
The mylonites of the Baccu Locci Shear Zone (BLSZ), Sardinia (Italy), were deformed during thrusting along a bottom-to-top strain gradient in lower greenschist facies. The microstructure of metavolcanic protoliths shows evidence for composite deformation accommodated by dislocation creep within strong quartz porphyroclasts, and pressure solution in the finer grained matrix. The evolution of mylonite is simulated in two sets of numerical experiments, assuming either a constant width of the deforming zone (model 1) or a narrowing shear zone (model 2). A 2-5 mm y⁻¹ constant-external-velocity boundary condition is applied on the basis of geologic constraints. Inputs to the models are provided by inverting paleostress values obtained from quartz recrystallized grain-size paleopiezometry. Both models predict a significant stress drop across the shear zone. However, model 1 involves a dramatic decrease in strain rate towards the zone of apparent strain localization. In contrast, model 2 predicts an increase in strain rate with time (from 10⁻¹⁴ to 10⁻¹² s⁻¹), which is consistent with stabilization of the shear zone profile and localization of deformation near the hanging wall. Extrapolating these results to the general context of crust strength suggests that pressure-solution creep may be a critical process for strain softening and for the stabilization of deformation within shear zones.
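A minimal sketch of the inversion logic: recrystallized grain size is converted to flow stress with a generic quartz piezometer of the form d = A·σ^(-p) (coefficients assumed, not the authors' calibration), and shear strain rate follows from the constant external velocity and the shear-zone width, contrasting a wide (model 1) and a narrowed (model 2) zone.

```python
def stress_from_grain_size(d_um, A=3631.0, p=1.26):
    """Invert d = A * sigma^(-p); A, p are illustrative piezometer coefficients (MPa, µm)."""
    return (A / d_um) ** (1.0 / p)

v = 3e-3 / 3.15e7                           # 3 mm/yr expressed in m/s
for width in (1000.0, 100.0):               # model 1: fixed width; model 2: narrowed zone
    gamma_dot = v / width                   # simple-shear strain rate, 1/s
    print(f"width {width:>6.0f} m  strain rate ≈ {gamma_dot:.1e} 1/s,"
          f"  stress for 40 µm grains ≈ {stress_from_grain_size(40.0):.0f} MPa")
```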
Nam, Vu Thanh; van Kuijk, Marijke; Anten, Niels P R
2016-01-01
Allometric regression models are widely used to estimate tropical forest biomass, but balancing model accuracy with efficiency of implementation remains a major challenge. In addition, while numerous models exist for aboveground mass, very few exist for roots. We developed allometric equations for aboveground biomass (AGB) and root biomass (RB) based on 300 (of 45 species) and 40 (of 25 species) sample trees respectively, in an evergreen forest in Vietnam. The biomass estimations from these local models were compared to regional and pan-tropical models. For AGB we also compared local models that distinguish functional types to an aggregated model, to assess the degree of specificity needed in local models. Besides diameter at breast height (DBH) and tree height (H), wood density (WD) was found to be an important parameter in AGB models. Existing pan-tropical models resulted in up to 27% higher estimates of AGB, and overestimated RB by nearly 150%, indicating the greater accuracy of local models at the plot level. Our functional group aggregated local model which combined data for all species, was as accurate in estimating AGB as functional type specific models, indicating that a local aggregated model is the best choice for predicting plot level AGB in tropical forests. Finally our study presents the first allometric biomass models for aboveground and root biomass in forests in Vietnam.
Nam, Vu Thanh; van Kuijk, Marijke; Anten, Niels P. R.
2016-01-01
Allometric regression models are widely used to estimate tropical forest biomass, but balancing model accuracy with efficiency of implementation remains a major challenge. In addition, while numerous models exist for aboveground mass, very few exist for roots. We developed allometric equations for aboveground biomass (AGB) and root biomass (RB) based on 300 (of 45 species) and 40 (of 25 species) sample trees respectively, in an evergreen forest in Vietnam. The biomass estimations from these local models were compared to regional and pan-tropical models. For AGB we also compared local models that distinguish functional types to an aggregated model, to assess the degree of specificity needed in local models. Besides diameter at breast height (DBH) and tree height (H), wood density (WD) was found to be an important parameter in AGB models. Existing pan-tropical models resulted in up to 27% higher estimates of AGB, and overestimated RB by nearly 150%, indicating the greater accuracy of local models at the plot level. Our functional group aggregated local model which combined data for all species, was as accurate in estimating AGB as functional type specific models, indicating that a local aggregated model is the best choice for predicting plot level AGB in tropical forests. Finally our study presents the first allometric biomass models for aboveground and root biomass in forests in Vietnam. PMID:27309718
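A minimal sketch of fitting a local allometric AGB model of the common log-log form ln(AGB) = a + b·ln(WD·DBH²·H). The synthetic trees, coefficients, and functional form are illustrative assumptions; the study's exact equations may differ.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
dbh = rng.uniform(5, 80, n)                     # diameter at breast height (cm)
h = 1.3 + 25 * (1 - np.exp(-0.05 * dbh))        # height (m), toy height-diameter curve
wd = rng.uniform(0.4, 0.8, n)                   # wood density (g/cm^3)
agb = 0.06 * (wd * dbh**2 * h) ** 0.95 * rng.lognormal(0, 0.2, n)  # toy "true" AGB (kg)

# Ordinary least squares on the log-transformed allometric relation.
X = np.column_stack([np.ones(n), np.log(wd * dbh**2 * h)])
a, b = np.linalg.lstsq(X, np.log(agb), rcond=None)[0]
print(f"fitted model: ln(AGB) = {a:.2f} + {b:.2f} * ln(WD * DBH^2 * H)")
# Applying a pan-tropical equation to the same trees would quantify the
# overestimation reported in the study.
```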
LRSSLMDA: Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction
Huang, Li
2017-01-01
Predicting novel microRNA (miRNA)-disease associations is clinically significant due to miRNAs’ potential roles of diagnostic biomarkers and therapeutic targets for various human diseases. Previous studies have demonstrated the viability of utilizing different types of biological data to computationally infer new disease-related miRNAs. Yet researchers face the challenge of how to effectively integrate diverse datasets and make reliable predictions. In this study, we presented a computational model named Laplacian Regularized Sparse Subspace Learning for MiRNA-Disease Association prediction (LRSSLMDA), which projected miRNAs/diseases’ statistical feature profile and graph theoretical feature profile to a common subspace. It used Laplacian regularization to preserve the local structures of the training data and a L1-norm constraint to select important miRNA/disease features for prediction. The strength of dimensionality reduction enabled the model to be easily extended to much higher dimensional datasets than those exploited in this study. Experimental results showed that LRSSLMDA outperformed ten previous models: the AUC of 0.9178 in global leave-one-out cross validation (LOOCV) and the AUC of 0.8418 in local LOOCV indicated the model’s superior prediction accuracy; and the average AUC of 0.9181+/-0.0004 in 5-fold cross validation justified its accuracy and stability. In addition, three types of case studies further demonstrated its predictive power. Potential miRNAs related to Colon Neoplasms, Lymphoma, Kidney Neoplasms, Esophageal Neoplasms and Breast Neoplasms were predicted by LRSSLMDA. Respectively, 98%, 88%, 96%, 98% and 98% out of the top 50 predictions were validated by experimental evidences. Therefore, we conclude that LRSSLMDA would be a valuable computational tool for miRNA-disease association prediction. PMID:29253885
Modeling Kelvin Wave Cascades in Superfluid Helium
NASA Astrophysics Data System (ADS)
Boffetta, G.; Celani, A.; Dezzani, D.; Laurie, J.; Nazarenko, S.
2009-09-01
We study two different types of simplified models for Kelvin wave turbulence on quantized vortex lines in superfluids near zero temperature. Our first model is obtained from a truncated expansion of the Local Induction Approximation (Truncated-LIA) and it is shown to possess the same scalings and the essential behaviour as the full Biot-Savart model, being much simpler than the latter and, therefore, more amenable to theoretical and numerical investigations. The Truncated-LIA model supports six-wave interactions and dual cascades, which are clearly demonstrated via the direct numerical simulation of this model in the present paper. In particular, our simulations confirm the presence of the weak turbulence regime and the theoretically predicted spectra for the direct energy cascade and the inverse wave action cascade. The second type of model we study, the Differential Approximation Model (DAM), takes a further drastic simplification by assuming locality of interactions in k-space via using a differential closure that preserves the main scalings of the Kelvin wave dynamics. DAMs are even more amenable to study and they form a useful tool by providing simple analytical solutions in the cases when extra physical effects are present, e.g. forcing by reconnections, friction dissipation and phonon radiation. We study these models numerically and test their theoretical predictions, in particular the formation of the stationary spectra, and closeness of numerics for the higher-order DAM to the analytical predictions for the lower-order DAM.
Local competition increases people's willingness to harm others
Barker, Jessica L.; Barclay, Pat
2016-01-01
Why should organisms incur a cost in order to inflict a (usually greater) cost on others? Such costly harming behavior may be favored when competition for resources occurs locally, because it increases individuals’ fitness relative to close competitors. However, there is no explicit experimental evidence supporting the prediction that people are more willing to harm others under local versus global competition. We illustrate this prediction with a game theoretic model, and then test it in a series of economic games. In these experiments, players could spend money to make others lose more. We manipulated the scale of competition by awarding cash prizes to the players with the highest payoffs per set of social partners (local competition) or in all the participants in a session (global competition). We found that, as predicted, people were more harmful to others when competition was local (Study 1). This result still held when people “earned” (rather than were simply given) their money (Study 2). In addition, when competition was local, people were more willing to harm ingroup members than outgroup members (Study 3), because ingroup members were the relevant competitive targets. Together, our results suggest that local competition in human groups not only promotes willingness to harm others in general, but also causes ingroup hostility. PMID:29805247
Aqil, Muhammad; Kita, Ichiro; Yano, Akira; Nishiyama, Soichi
2007-10-01
Traditionally, the multiple linear regression technique has been one of the most widely used models in simulating hydrological time series. However, when the nonlinear phenomenon is significant, multiple linear regression fails to develop an appropriate predictive model. Recently, neuro-fuzzy systems have gained much popularity for calibrating nonlinear relationships. This study evaluated the potential of a neuro-fuzzy system as an alternative to the traditional statistical regression technique for the purpose of predicting flow from a local source in a river basin. The effectiveness of the proposed identification technique was demonstrated through a simulation study of the river flow time series of the Citarum River in Indonesia. Furthermore, in order to quantify the uncertainty associated with the estimation of river flow, a Monte Carlo simulation was performed. As a comparison, a multiple linear regression analysis that was being used by the Citarum River Authority was also examined using various statistical indices. The simulation results using 95% confidence intervals indicated that the neuro-fuzzy model consistently underestimated the magnitude of high flow while the low and medium flow magnitudes were estimated closer to the observed data. The comparison of the prediction accuracy of the neuro-fuzzy and linear regression methods indicated that the neuro-fuzzy approach was more accurate in predicting river flow dynamics. The neuro-fuzzy model was able to improve the root mean square error (RMSE) and mean absolute percentage error (MAPE) values of the multiple linear regression forecasts by about 13.52% and 10.73%, respectively. Considering its simplicity and efficiency, the neuro-fuzzy model is recommended as an alternative tool for modeling flow dynamics in the study area.
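A minimal sketch of the error metrics used to compare the two forecasting approaches. The synthetic flows and the noise levels standing in for the regression and neuro-fuzzy forecasts are illustrative assumptions only.

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error."""
    return np.sqrt(np.mean((obs - pred) ** 2))

def mape(obs, pred):
    """Mean absolute percentage error (%)."""
    return 100.0 * np.mean(np.abs((obs - pred) / obs))

rng = np.random.default_rng(8)
obs = rng.gamma(3.0, 50.0, 365)                    # toy daily flows (m^3/s)
pred_mlr = obs * rng.normal(1.00, 0.20, 365)       # stand-in for regression forecast
pred_nf = obs * rng.normal(1.00, 0.17, 365)        # stand-in for neuro-fuzzy forecast

for name, p in [("MLR", pred_mlr), ("neuro-fuzzy", pred_nf)]:
    print(f"{name:12s} RMSE={rmse(obs, p):6.1f}  MAPE={mape(obs, p):5.1f}%")
```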
NASA Astrophysics Data System (ADS)
Rössler, O.; Froidevaux, P.; Börst, U.; Rickli, R.; Martius, O.; Weingartner, R.
2014-06-01
A rain-on-snow flood occurred in the Bernese Alps, Switzerland, on 10 October 2011, and caused significant damage. As the flood peak was unpredicted by the flood forecast system, questions were raised concerning the causes and the predictability of the event. Here, we aimed to reconstruct the anatomy of this rain-on-snow flood in the Lötschen Valley (160 km²) by analyzing meteorological data from the synoptic to the local scale and by reproducing the flood peak with the hydrological model WaSiM-ETH (Water Flow and Balance Simulation Model), in order to gain process understanding and to evaluate the predictability. The atmospheric drivers of this rain-on-snow flood were (i) sustained snowfall followed by (ii) the passage of an atmospheric river bringing warm and moist air towards the Alps. As a result, intensive rainfall (average of 100 mm day⁻¹) was accompanied by a temperature increase that shifted the 0 °C line from 1500 to 3200 m a.s.l. (meters above sea level) in 24 h, with a maximum increase of 9 K in 9 h. The south-facing slope of the valley received significantly more precipitation than the north-facing slope, leading to flooding only in tributaries along the south-facing slope. We hypothesized that the reason for this very local rainfall distribution was a cavity circulation combined with a seeder-feeder-cloud system enhancing local rainfall and snowmelt along the south-facing slope. By applying and considerably recalibrating the standard hydrological model setup, we proved that both latent and sensible heat fluxes were needed to reconstruct the snow cover dynamics, and that locally high precipitation sums (160 mm in 12 h) were required to produce the estimated flood peak. However, to reproduce the rapid runoff responses during the event, we conceptually represented likely lateral flow dynamics within the snow cover, causing the model to react "oversensitively" to meltwater. Driving the optimized model with COSMO (Consortium for Small-scale Modeling)-2 forecast data, we still failed to simulate the flood because the COSMO-2 forecast data underestimated both the local precipitation peak and the temperature increase. Thus we conclude that this rain-on-snow flood was, in general, predictable, but required a special hydrological model setup and extensive, locally precise meteorological input data. Although this data quality may not be achieved with forecast data, an additional model with a specific rain-on-snow configuration can provide useful information when rain-on-snow events are likely to occur.
Interpreting the macroscopic pointer by analysing the elements of reality of a Schrödinger cat
NASA Astrophysics Data System (ADS)
Reid, M. D.
2017-10-01
We examine Einstein-Podolsky-Rosen’s (EPR) steering nonlocality for two realisable Schrödinger cat-type states where a meso/macroscopic system (called the ‘cat’-system) is entangled with a microscopic spin-1/2 system. We follow EPR’s argument and derive the predictions for ‘elements of reality’ that would exist to describe the cat-system, under the assumption of EPR’s local realism. By showing that those predictions cannot be replicated by any local quantum state description of the cat-system, we demonstrate the EPR-steering of the cat-system. For large cat-systems, we find that a local hidden state model is near-satisfied, meaning that a local quantum state description exists (for the cat) whose predictions differ from those of the elements of reality by a vanishingly small amount. For such a local hidden state model, the EPR-steering of the cat vanishes, and the cat-system can be regarded as being in a mixture of ‘dead’ and ‘alive’ states despite it being entangled with the spin system. We therefore propose that a rigorous signature of the Schrödinger cat-type paradox is the EPR-steering of the cat-system and provide two experimental signatures. This leads to a hybrid quantum/classical interpretation of the macroscopic pointer of a measurement device and suggests that many Schrödinger cat-type paradoxes may be explained by microscopic nonlocality.
NASA Astrophysics Data System (ADS)
Szunyogh, Istvan; Kostelich, Eric J.; Gyarmati, G.; Patil, D. J.; Hunt, Brian R.; Kalnay, Eugenia; Ott, Edward; Yorke, James A.
2005-08-01
The accuracy and computational efficiency of the recently proposed local ensemble Kalman filter (LEKF) data assimilation scheme is investigated on a state-of-the-art operational numerical weather prediction model using simulated observations. The model selected for this purpose is the T62 horizontal- and 28-level vertical-resolution version of the Global Forecast System (GFS) of the National Centers for Environmental Prediction. The performance of the data assimilation system is assessed for different configurations of the LEKF scheme. It is shown that a modest size (40-member) ensemble is sufficient to track the evolution of the atmospheric state with high accuracy. For this ensemble size, the computational time per analysis is less than 9 min on a cluster of PCs. The analyses are extremely accurate in the mid-latitude storm track regions. The largest analysis errors, which are typically much smaller than the observational errors, occur where parametrized physical processes play important roles. Because these are also the regions where model errors are expected to be the largest, limitations of a real-data implementation of the ensemble-based Kalman filter may be easily mistaken for model errors. In light of these results, the importance of testing the ensemble-based Kalman filter data assimilation systems on simulated observations is stressed.
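A minimal sketch of a generic stochastic ensemble Kalman filter analysis step (perturbed observations). The LEKF performs this kind of update locally around each grid point; that localization, and the dimensions and observation operator below, are omitted or assumed here for brevity.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """X: (n_state, n_ens) forecast ensemble; y: observations; H: obs operator; R: obs covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)             # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    Pyy = HA @ HA.T / (n_ens - 1) + R                 # innovation covariance
    Pxy = A @ HA.T / (n_ens - 1)                      # state-observation covariance
    K = Pxy @ np.linalg.inv(Pyy)                      # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - HX)                           # updated (analysis) ensemble

rng = np.random.default_rng(9)
n_state, n_ens = 100, 40                              # 40 members, as in the study
X = rng.normal(0, 1, (n_state, n_ens))
H = np.eye(10, n_state)                               # observe first 10 state variables
R = 0.25 * np.eye(10)
y = rng.normal(0, 1, 10)
Xa = enkf_analysis(X, y, H, R, rng)
print("analysis spread:", Xa.std(axis=1).mean().round(3))
```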
Vlahovicek, K; Munteanu, M G; Pongor, S
1999-01-01
Bending is a local conformational micropolymorphism of DNA in which the original B-DNA structure is only distorted but not extensively modified. Bending can be predicted by simple static geometry models as well as by a recently developed elastic model that incorporates sequence-dependent anisotropic bendability (SDAB). The SDAB model qualitatively explains phenomena including protein binding affinity, kinking, and the sequence-dependent vibrational properties of DNA. The vibrational properties of DNA segments can be studied by finite element analysis of a model subjected to an initial bending moment. The frequency spectrum is obtained by applying Fourier analysis to the displacement values in the time domain. This analysis shows that the spectrum of the bending vibrations depends quite sensitively on the sequence; for example, the spectrum of a curved sequence is characteristically different from the spectrum of straight sequence motifs of identical basepair composition. Curvature distributions are genome-specific, and pronounced differences are found between protein-coding and regulatory regions, respectively; that is, sites of extreme curvature and/or bendability are less frequent in protein-coding regions. A WWW server is set up for the prediction of curvature and generation of 3D models from DNA sequences (http://www.icgeb.trieste.it/dna).
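A minimal sketch of the spectral step described above: Fourier analysis of a bending-displacement time series to obtain a vibration spectrum. A synthetic damped oscillation stands in for the finite-element output; the frequencies, decay times, and sampling step are purely illustrative.

```python
import numpy as np

dt = 1e-12                                        # sampling step (s), illustrative
t = np.arange(0, 5e-9, dt)
# Toy response: two damped bending modes with different frequencies.
disp = (np.exp(-t / 1e-9) * np.sin(2 * np.pi * 2e9 * t)
        + 0.3 * np.exp(-t / 2e-9) * np.sin(2 * np.pi * 5e9 * t))

spectrum = np.abs(np.fft.rfft(disp))              # amplitude spectrum
freqs = np.fft.rfftfreq(len(disp), dt)
peak = freqs[np.argmax(spectrum)]
print(f"dominant bending frequency ≈ {peak / 1e9:.1f} GHz")
```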
Yousefzadeh, Behrooz; Hodgson, Murray
2012-09-01
A beam-tracing model was used to study the acoustical responses of three empty, rectangular rooms with different boundary conditions. The model is wave-based (accounting for sound phase) and can be applied to rooms with extended-reaction surfaces that are made of multiple layers of solid, fluid, or poroelastic materials; the acoustical properties of these surfaces are calculated using Biot theory. Three room-acoustical parameters were studied in various room configurations: sound strength, reverberation time, and RApid Speech Transmission Index. The main objective was to investigate the effects of modeling surfaces as either local or extended reaction on the predicted values of these three parameters. Moreover, the significance of modeling interference effects was investigated, including the study of sound phase change on surface reflection. Modeling surfaces as being of local or extended reaction was found to be significant for surfaces consisting of multiple layers, specifically when one of the layers is air. For multilayers of solid materials with an air cavity, this was most significant around their mass-air-mass resonance frequencies. Accounting for interference effects made significant changes in the predicted values of all parameters. Modeling phase change on reflection, on the other hand, was found to be relatively much less significant.
Methodology Development of a Gas-Liquid Dynamic Flow Regime Transition Model
NASA Astrophysics Data System (ADS)
Doup, Benjamin Casey
Current reactor safety analysis codes, such as RELAP5, TRACE, and CATHARE, use flow regime maps or flow regime transition criteria that were developed for static, fully developed two-phase flows to choose the interfacial transfer models that are necessary to solve the two-fluid model. The flow regime is therefore difficult to identify near the flow regime transitions, in developing two-phase flows, and in transient two-phase flows. Interfacial area transport equations were developed to more accurately predict the dynamic nature of two-phase flows. However, other model coefficients are still flow regime dependent, so an accurate prediction of the flow regime remains important. In the current work, the methodology for the development of a dynamic flow regime transition model that uses the void fraction and interfacial area concentration obtained by solving the three-field two-fluid model and the two-group interfacial area transport equation is investigated. To develop this model, detailed local experimental data are obtained, the two-group interfacial area transport equations are revised, and a dynamic flow regime transition model is evaluated using a computational fluid dynamics model. Local experimental data are acquired for 63 different flow conditions in bubbly, cap-bubbly, slug, and churn-turbulent flow regimes. The measured parameters are the group-1 and group-2 bubble number frequency, void fraction, interfacial area concentration, and interfacial bubble velocities. The measurements are benchmarked by comparing the superficial gas velocities determined from the local measurements with those determined from volumetric flow rate measurements; the agreement is generally within +/-20%. The repeatability of the four-sensor probe construction process is within +/-10%. The repeatability of the measurement process is within +/-7%. The symmetry of the test section is examined and the average agreement is within +/-5.3% at z/D = 10 and +/-3.4% at z/D = 32. Revised source/sink terms for the two-group interfacial area transport equations are derived and fit to area-averaged experimental data to determine new model coefficients. The average agreement between this model and the experimental data for the void fraction and interfacial area concentration is 10.6% and 15.7%, respectively. The revised two-group interfacial area transport equation and the three-field two-fluid model are used to solve for the group-1 and group-2 interfacial area concentration and void fraction. These values and a dynamic flow regime transition model are used to classify the flow regimes. The flow regimes determined using this model are compared with the flow regimes based on the experimental data and on a flow regime map using Mishima and Ishii's (1984) transition criteria. The dynamic flow regime transition model is shown to predict the flow regimes dynamically and improves the prediction of the flow regime over that obtained using a flow regime map. Safety codes often employ the one-dimensional two-fluid model to model two-phase flows. The area-averaged relative velocity correlation necessary to close this model is derived from the drift flux model. The effects of the assumptions used to derive this correlation are investigated using local measurements and are found to have a limited impact on the prediction of the area-averaged relative velocity.
NASA Astrophysics Data System (ADS)
Chernin, A. D.; Teerikorpi, P.; Baryshev, Yu. V.
2006-09-01
Based on the increasing evidence of the cosmological relevance of the local Hubble flow, we consider a simple analytical cosmological model for the Local Universe. This is a non-Friedmann model with a non-uniform static space-time. The major dynamical factor controlling the local expansion is the antigravity produced by the omnipresent and permanent dark energy of the cosmic vacuum (or the cosmological constant). The antigravity dominates at distances larger than 1-2 Mpc from the center of the Local Group. The model gives a natural explanation of the two key quantitative characteristics of the local expansion flow, which are the local Hubble constant and the velocity dispersion of the flow. The observed kinematical similarity of the local and global flows of expansion is clarified by the model. We analytically demonstrate the efficiency of the vacuum cooling mechanism that allows one to see the Hubble law this close to the Local Group. The "universal Hubble constant" HV (≈60 km s⁻¹ Mpc⁻¹), depending only on the vacuum density, has special significance locally and globally. The model makes a number of verifiable predictions. It also unexpectedly shows that the dwarf galaxies of the local flow with the shortest distances and lowest redshifts may be the most sensitive indicators of dark energy in our neighborhood.
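A minimal sketch of the "universal Hubble constant" set by the vacuum density, HV = sqrt(8πGρ_vac/3) = H0·sqrt(Ω_Λ); the numerical values of H0 and Ω_Λ below are standard illustrative inputs, not the paper's exact parameters.

```python
import math

H0 = 70.0                      # Hubble constant, km/s/Mpc (illustrative)
Omega_Lambda = 0.7             # vacuum (dark energy) density parameter (illustrative)
H_V = H0 * math.sqrt(Omega_Lambda)
print(f"H_V ≈ {H_V:.0f} km/s/Mpc")   # ≈ 59, consistent with the quoted ~60
```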
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.
2014-09-12
Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and insufficient data and strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate increases rapidly due to population growth and the new consumption trends that characterize the modern life style. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even when data are scarce and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error equal to 0.0952, and the model forecast residuals are within the accepted 95% confidence interval.
A biomechanical model of agonist-initiated contraction in the asthmatic airway.
Brook, B S; Peel, S E; Hall, I P; Politi, A Z; Sneyd, J; Bai, Y; Sanderson, M J; Jensen, O E
2010-01-31
This paper presents a modelling framework in which the local stress environment of airway smooth muscle (ASM) cells may be predicted and cellular responses to local stress may be investigated. We consider an elastic axisymmetric model of a layer of connective tissue and circumferential ASM fibres embedded in parenchymal tissue and model the active contractile force generated by ASM via a stress acting along the fibres. A constitutive law is proposed that accounts for active and passive material properties as well as the proportion of muscle to connective tissue. The model predicts significantly different contractile responses depending on the proportion of muscle to connective tissue in the remodelled airway. We find that radial and hoop-stress distributions in remodelled muscle layers are highly heterogeneous, with distinct regions of compression and tension. Such patterns of stress are likely to have important implications, from a mechano-transduction perspective, for contractility, short-term cytoskeletal adaptation and long-term airway remodelling in asthma. Copyright 2009 Elsevier B.V. All rights reserved.
Medium term municipal solid waste generation prediction by autoregressive integrated moving average
NASA Astrophysics Data System (ADS)
Younes, Mohammad K.; Nopiah, Z. M.; Basri, Noor Ezlin A.; Basri, Hassan
2014-09-01
Generally, solid waste handling and management are performed by the municipality or local authority. In most developing countries, local authorities suffer from serious solid waste management (SWM) problems and from insufficient data and strategic planning. It is therefore important to develop a robust solid waste generation forecasting model, which helps to properly manage the generated solid waste and to develop future plans based on relatively accurate figures. In Malaysia, the solid waste generation rate is increasing rapidly due to population growth and the new consumption trends that characterize the modern lifestyle. This paper aims to develop a monthly solid waste forecasting model using the Autoregressive Integrated Moving Average (ARIMA); such a model is applicable even where data are lacking and will help the municipality properly establish the annual service plan. The results show that the ARIMA (6,1,0) model predicts monthly municipal solid waste generation with a root mean square error of 0.0952 and model forecast residuals within the accepted 95% confidence interval.
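To make the ARIMA step above concrete, the following is a minimal sketch of fitting a monthly ARIMA(6,1,0) model and scoring a one-year hold-out with statsmodels; the synthetic series and variable names are illustrative assumptions, not the study's data.

```python
# Minimal sketch: fit an ARIMA(6,1,0) to a monthly waste-generation series and
# score a one-year hold-out. The synthetic series is a stand-in for real data.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
months = pd.date_range("2005-01", periods=96, freq="MS")
waste = pd.Series(30 + 0.05 * np.arange(96) + rng.normal(0, 1, 96), index=months)

train, test = waste[:-12], waste[-12:]          # hold out the final year
fit = ARIMA(train, order=(6, 1, 0)).fit()       # (p, d, q) as reported above
forecast = fit.forecast(steps=len(test))

rmse = np.sqrt(np.mean((np.asarray(forecast) - test.values) ** 2))
print(f"hold-out RMSE: {rmse:.4f}")
```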
Improved protein model quality assessments by changing the target function.
Uziela, Karolis; Menéndez Hurtado, David; Shu, Nanjiang; Wallner, Björn; Elofsson, Arne
2018-06-01
Protein model quality assessment is an important part of protein structure prediction. We have for more than a decade developed a set of methods for this problem. We have used various types of description of the protein and different machine learning methodologies. However, common to all these methods has been the target function used for training. The target function in ProQ describes the local quality of a residue in a protein model. In all versions of ProQ the target function has been the S-score. However, other quality estimation functions also exist, which can be divided into superposition- and contact-based methods. The superposition-based methods, such as the S-score, are based on a rigid-body superposition of a protein model and the native structure, while the contact-based methods compare the local environment of each residue. Here, we examine the effects of retraining our latest predictor, ProQ3D, using identical inputs but different target functions. We find that the contact-based methods are easier to predict and that predictors trained on these measures provide some advantages when it comes to identifying the best model. One possible reason for this is that contact-based methods are better at estimating the quality of multi-domain targets. However, training on the S-score gives the best correlation with the GDT_TS score, which is commonly used in CASP to score the global model quality. To take advantage of both of these features we provide an updated version of ProQ3D that predicts local and global model quality estimates based on different quality estimates. © 2018 Wiley Periodicals, Inc.
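For readers unfamiliar with the S-score target function mentioned above, the sketch below implements a superposition-based per-residue score of that form; the d0 = 3 Å cutoff is an assumption for illustration rather than a value taken from the paper.

```python
# Sketch of a superposition-based per-residue quality score of the S-score form,
# S_i = 1 / (1 + (d_i/d0)^2), where d_i is the distance between residue i in the
# model and in the native structure after superposition. d0 = 3 A is assumed.
import numpy as np

def s_score(distances, d0=3.0):
    """Per-residue S-score from superposed model-native CA distances (Angstrom)."""
    d = np.asarray(distances, dtype=float)
    return 1.0 / (1.0 + (d / d0) ** 2)

print(s_score([0.5, 3.0, 10.0]))  # near 1 for well-modeled residues, -> 0 for bad ones
```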
Forecasting Geomagnetic Activity Using Kalman Filters
NASA Astrophysics Data System (ADS)
Veeramani, T.; Sharma, A.
2006-05-01
The coupling of energy from the solar wind to the magnetosphere leads to geomagnetic activity in the form of storms and substorms, which are characterized by indices such as AL, Dst and Kp. The geomagnetic activity has been predicted in near-real time using local linear filter models of the system dynamics, wherein the time series of the input solar wind and the output magnetospheric response were used to reconstruct the phase space of the system by a time-delay embedding technique. Recently, the radiation belt dynamics have been studied using an adaptive linear state space model [Rigler et al. 2004]. This was achieved by assuming a linear autoregressive equation for the underlying process and an adaptive identification of the model parameters using a Kalman filter approach. We use such a model for predicting the geomagnetic activity. In the case of substorms, the Bargatze et al. [1985] data set yields persistence-like behaviour when a time resolution of 2.5 minutes is used to test the model for the prediction of the AL index. Unlike the local linear filters, which are driven by the solar wind input without feedback from the observations, the Kalman filter makes use of the observations as and when available to optimally update the model parameters. The update procedure requires the prediction intervals to be long enough so that the forecasts can be used in practice. The time resolution of the data suitable for such forecasting is studied by taking averages over different durations.
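A minimal sketch of the adaptive state-space idea described above follows: the AR coefficients are treated as the state and updated with a Kalman filter as each observation arrives. The AR order and noise variances are illustrative assumptions, not values from the study.

```python
# Sketch of adaptive identification of AR coefficients with a Kalman filter:
# the state vector holds the AR(p) coefficients, and the observation equation is
# y_t = [y_{t-1} ... y_{t-p}] . x_t + noise. Noise variances q and r are
# illustrative assumptions.
import numpy as np

def adaptive_ar_kalman(y, p=3, q=1e-4, r=1.0):
    n = len(y)
    x = np.zeros(p)              # AR coefficient estimate
    P = np.eye(p)                # coefficient covariance
    preds = np.full(n, np.nan)
    for t in range(p, n):
        H = y[t - p:t][::-1]     # most recent p observations
        P = P + q * np.eye(p)    # predict step (random-walk model for coefficients)
        preds[t] = H @ x         # one-step prediction of y[t]
        S = H @ P @ H + r        # innovation variance (scalar observation)
        K = P @ H / S            # Kalman gain
        x = x + K * (y[t] - preds[t])
        P = P - np.outer(K, H) @ P
    return preds, x

y = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)
preds, coeffs = adaptive_ar_kalman(y)
print("final AR coefficients:", np.round(coeffs, 3))
```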
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. The authors considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
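As a simple illustration of the rank-correlation step in such an importance screen, the sketch below computes Spearman correlations between Monte Carlo parameter samples and a toy model output; the parameter names and the toy response are hypothetical and do not represent the PCHEPM model.

```python
# Sketch of a rank-correlation sensitivity screen: Spearman correlation between
# Monte Carlo parameter samples and a model output. Toy model, hypothetical names.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 2000
params = {
    "partition_coeff": rng.lognormal(0.0, 0.3, n),
    "settling_rate":   rng.uniform(0.5, 1.5, n),
    "burial_rate":     rng.uniform(0.1, 0.3, n),
}
# toy response standing in for a predicted water-column PCB concentration
output = (2.0 * params["partition_coeff"]
          - 0.5 * params["settling_rate"]
          + 0.05 * rng.normal(size=n))

for name, values in params.items():
    rho, _ = spearmanr(values, output)
    print(f"{name:16s} Spearman rho = {rho:+.2f}")
```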
Mehl, S.; Hill, M.C.
2002-01-01
Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.
Multiple measures of dispositional global/local bias predict attentional blink magnitude.
Dale, Gillian; Arnell, Karen M
2015-07-01
When the second of two targets (T2) is presented temporally close to the first target (T1) in a rapid serial visual presentation stream, accuracy to identify T2 is markedly reduced, an effect known as the attentional blink (AB). While most individuals show an AB, Dale and Arnell (Atten Percept Psychophys 72(3):602-606, 2010) demonstrated that individual differences in dispositional attentional focus predicted AB performance, such that individuals who showed a natural bias toward the global level of Navon letter stimuli were less susceptible to the AB and showed a smaller AB effect. For the current study, we extended the findings of Dale and Arnell (Atten Percept Psychophys 72(3):602-606, 2010) through two experiments. In Experiment 1, we examined the relationship between dispositional global/local bias and the AB using a highly reliable hierarchical shape task measure. In Experiment 2, we examined whether three distinct global/local measures could predict AB performance. In both experiments, performance on the global/local tasks predicted subsequent AB performance, such that individuals with a greater preference for global information showed a reduced AB. This supports previous findings, as well as recent models which discuss the role of attentional breadth in selective attention.
NASA Technical Reports Server (NTRS)
Coats, Timothy William
1994-01-01
Progressive failure is a crucial concern when using laminated composites in structural design. Therefore the ability to model damage and predict the life of laminated composites is vital. The purpose of this research was to experimentally verify the application of the continuum damage model, a progressive failure theory utilizing continuum damage mechanics, to a toughened material system. Damage due to tension-tension fatigue was documented for the IM7/5260 composite laminates. Crack density and delamination surface area were used to calculate matrix cracking and delamination internal state variables, respectively, to predict stiffness loss. A damage dependent finite element code qualitatively predicted trends in transverse matrix cracking, axial splits and local stress-strain distributions for notched quasi-isotropic laminates. The predictions were similar to the experimental data and it was concluded that the continuum damage model provided a good prediction of stiffness loss while qualitatively predicting damage growth in notched laminates.
DOT National Transportation Integrated Search
1972-07-01
The TSC electromagnetic scattering model has been used to predict the course deviation indications (CDI) at the planned Dallas Fort Worth Regional Airport. The results show that the CDI due to scattering from the modeled airport structures are within...
Population-based human exposure models predict the distribution of personal exposures to pollutants of outdoor origin using a variety of inputs, including: air pollution concentrations; human activity patterns, such as the amount of time spent outdoors vs. indoors, commuting, wal...
Localized mRNA translation and protein association
NASA Astrophysics Data System (ADS)
Zhdanov, Vladimir P.
2014-08-01
Recent direct observations of localization of mRNAs and proteins both in prokaryotic and eukaryotic cells can be related to slowdown of diffusion of these species due to macromolecular crowding and their ability to aggregate and form immobile or slowly mobile complexes. Here, a generic kinetic model describing both these factors is presented and comprehensively analyzed. Although the model is non-linear, an accurate self-consistent analytical solution of the corresponding reaction-diffusion equation has been constructed, the types of localized protein distributions have been explicitly shown, and the predicted kinetic regimes of gene expression have been classified.
Extracting local information from crowds through betting markets
NASA Astrophysics Data System (ADS)
Weijs, Steven
2015-04-01
In this research, a set-up is considered in which users can bet against a forecasting agency to challenge their probabilistic forecasts. From an information theory standpoint, a reward structure is considered that either provides the forecasting agency with better information, paying the successful providers of information for their winning bets, or funds excellent forecasting agencies through users that think they know better. Especially for local forecasts, the approach may help to diagnose model biases and to identify local predictive information that can be incorporated in the models. The challenges and opportunities for implementing such a system in practice are also discussed.
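One plausible information-theoretic reward of the kind described above is a logarithmic-score payoff, sketched below for a binary event; this is an illustrative rule under stated assumptions, not necessarily the exact structure proposed in the study.

```python
# Sketch of a logarithmic-score reward for betting against an agency forecast:
# the user's payoff is proportional to log2(p_user / p_agency) evaluated on the
# observed outcome, so a positive payoff means the user supplied information the
# agency lacked. This reward rule is assumed for illustration.
import numpy as np

def log_score_payoff(p_agency, p_user, outcome):
    """Payoff in bits for a binary event (outcome is 0 or 1)."""
    pa = p_agency if outcome else 1.0 - p_agency
    pu = p_user if outcome else 1.0 - p_user
    return np.log2(pu / pa)

# user claims local knowledge (rain more likely than the agency says); rain occurs
print(round(log_score_payoff(p_agency=0.3, p_user=0.6, outcome=1), 3))  # +1.0 bit
```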
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2000-01-01
A new model for local fiber failures in composite materials loaded longitudinally is presented. In developing the model, the goal was to account for the effects of fiber breakage on the global response of a composite in a relatively simple and efficient manner. Towards this end, the model includes the important feature of local stress unloading, even as global loading of the composite continues. The model has been incorporated into NASA Glenn's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) and was employed to simulate the longitudinal tensile deformation and failure behavior of several silicon carbide fiber/titanium matrix (SiC/Ti) composites. The model is shown to be quite realistic and capable of accurate predictions for various temperatures, fiber volume fractions, and fiber diameters. Furthermore, the new model compares favorably to Curtin's (1993) effective fiber breakage model, which has also been incorporated into MAC/GMC.
NASA Astrophysics Data System (ADS)
Choens, R. C., II; Chester, F. M.; Bauer, S. J.; Flint, G. M.
2014-12-01
Fluid-pressure assisted fracturing can produce mesh and other large, interconnected and complex networks consisting of both extension and shear fractures in various metamorphic, magmatic and tectonic systems. Presently, rock failure criteria for tensile and low-mean compressive stress conditions are poorly defined, although there is accumulating evidence that the transition from extension to shear fracture with increasing mean stress is continuous. We report on the results of experiments designed to document failure criteria, fracture mode, and localization phenomena for several rock types (sandstone, limestone, chalk and marble). Experiments were conducted in triaxial extension using a necked (dogbone) geometry to achieve mixed tension and compression stress states with local component-strain measurements in the failure region. The failure envelope for all rock types is similar, but is poorly described using Griffith or modified Griffith (Coulomb or other) failure criteria. Notably, the mode of fracture changes systematically from pure extension to shear with increase in compressive mean stress and displays a continuous change in fracture orientation with respect to the principal stress axes. Differential stress and inelastic strain show a systematic increase with increasing mean stress, whereas the axial stress decreases before increasing with increasing mean stress. The stress and strain data are used to analyze elastic and plastic strains leading to failure and to compare the experimental results to predictions for localization using constitutive models based on bifurcation theory. Although the models are able to describe the stability behavior and onset of localization qualitatively, they are unable to predict fracture type or orientation. Constitutive models using single or multiple yield surfaces are unable to predict the experimental results, reflecting the difficulty in capturing the changing micromechanisms from extension to shear failure. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Security Administration under contract DE-AC04-94AL85000. SAND2014-16578A
NASA Astrophysics Data System (ADS)
Glocer, A.; Rastätter, L.; Kuznetsova, M.; Pulkkinen, A.; Singer, H. J.; Balch, C.; Weimer, D.; Welling, D.; Wiltberger, M.; Raeder, J.; Weigel, R. S.; McCollough, J.; Wing, S.
2016-07-01
We present the latest result of a community-wide space weather model validation effort coordinated among the Community Coordinated Modeling Center (CCMC), NOAA Space Weather Prediction Center (SWPC), model developers, and the broader science community. Validation of geospace models is a critical activity for both building confidence in the science results produced by the models and in assessing the suitability of the models for transition to operations. Indeed, a primary motivation of this work is supporting NOAA/SWPC's effort to select a model or models to be transitioned into operations. Our validation efforts focus on the ability of the models to reproduce a regional index of geomagnetic disturbance, the local K-index. Our analysis includes six events representing a range of geomagnetic activity conditions and six geomagnetic observatories representing midlatitude and high-latitude locations. Contingency tables, skill scores, and distribution metrics are used for the quantitative analysis of model performance. We consider model performance on an event-by-event basis, aggregated over events, at specific station locations, and separated into high-latitude and midlatitude domains. A summary of results is presented in this report, and an online tool for detailed analysis is available at the CCMC.
NASA Technical Reports Server (NTRS)
Glocer, A.; Rastaetter, L.; Kuznetsova, M.; Pulkkinen, A.; Singer, H. J.; Balch, C.; Weimer, D.; Welling, D.; Wiltberger, M.; Raeder, J.;
2016-01-01
We present the latest result of a community-wide space weather model validation effort coordinated among the Community Coordinated Modeling Center (CCMC), NOAA Space Weather Prediction Center (SWPC), model developers, and the broader science community. Validation of geospace models is a critical activity for both building confidence in the science results produced by the models and in assessing the suitability of the models for transition to operations. Indeed, a primary motivation of this work is supporting NOAA/SWPC's effort to select a model or models to be transitioned into operations. Our validation efforts focus on the ability of the models to reproduce a regional index of geomagnetic disturbance, the local K-index. Our analysis includes six events representing a range of geomagnetic activity conditions and six geomagnetic observatories representing midlatitude and high-latitude locations. Contingency tables, skill scores, and distribution metrics are used for the quantitative analysis of model performance. We consider model performance on an event-by-event basis, aggregated over events, at specific station locations, and separated into high-latitude and midlatitude domains. A summary of results is presented in this report, and an online tool for detailed analysis is available at the CCMC.
Local dispersal promotes biodiversity in a real-life game of rock-paper-scissors
NASA Astrophysics Data System (ADS)
Kerr, Benjamin; Riley, Margaret A.; Feldman, Marcus W.; Bohannan, Brendan J. M.
2002-07-01
One of the central aims of ecology is to identify mechanisms that maintain biodiversity. Numerous theoretical models have shown that competing species can coexist if ecological processes such as dispersal, movement, and interaction occur over small spatial scales. In particular, this may be the case for non-transitive communities, that is, those without strict competitive hierarchies. The classic non-transitive system involves a community of three competing species satisfying a relationship similar to the children's game rock-paper-scissors, where rock crushes scissors, scissors cuts paper, and paper covers rock. Such relationships have been demonstrated in several natural systems. Some models predict that local interaction and dispersal are sufficient to ensure coexistence of all three species in such a community, whereas diversity is lost when ecological processes occur over larger scales. Here, we test these predictions empirically using a non-transitive model community containing three populations of Escherichia coli. We find that diversity is rapidly lost in our experimental community when dispersal and interaction occur over relatively large spatial scales, whereas all populations coexist when ecological processes are localized.
Spatial variation in the climatic predictors of species compositional turnover and endemism
Di Virgilio, Giovanni; Laffan, Shawn W; Ebach, Malte C; Chapple, David G
2014-01-01
Previous research focusing on broad-scale or geographically invariant species-environment dependencies suggests that temperature-related variables explain more of the variation in reptile distributions than precipitation. However, species–environment relationships may exhibit considerable spatial variation contingent upon the geographic nuances that vary between locations. Broad-scale, geographically invariant analyses may mask this local variation and their findings may not generalize to different locations at local scales. We assess how reptile–climatic relationships change with varying spatial scale, location, and direction. Since the spatial distributions of diversity and endemism hotspots differ for other species groups, we also assess whether reptile species turnover and endemism hotspots are influenced differently by climatic predictors. Using New Zealand reptiles as an example, the variation in species turnover, endemism and turnover in climatic variables was measured using directional moving window analyses, rotated through 360°. Correlations between the species turnover, endemism and climatic turnover results generated by each rotation of the moving window were analysed using multivariate generalized linear models applied at national, regional, and local scales. At the national scale, temperature turnover consistently exhibited the greatest influence on species turnover and endemism, but model predictive capacity was low (typically r2 = 0.05, P < 0.001). At regional scales the relative influence of temperature and precipitation turnover varied between regions, although model predictive capacity was also generally low. Climatic turnover was considerably more predictive of species turnover and endemism at local scales (e.g., r2 = 0.65, P < 0.001). While temperature turnover had the greatest effect in one locale (the northern North Island), there was substantial variation in the relative influence of temperature and precipitation predictors in the remaining four locales. Species turnover and endemism hotspots often occurred in different locations. Climatic predictors had a smaller influence on endemism. Our results caution against assuming that variability in temperature will always be most predictive of reptile biodiversity across different spatial scales, locations and directions. The influence of climatic turnover on the species turnover and endemism of other taxa may exhibit similar patterns of spatial variation. Such intricate variation might be discerned more readily if studies at broad scales are complemented by geographically variant, local-scale analyses. PMID:25473479
NASA Astrophysics Data System (ADS)
Xu, Y.; Jones, A. D.; Rhoades, A.
2017-12-01
Precipitation is a key component in hydrologic cycles, and changing precipitation regimes contribute to more intense and frequent drought and flood events around the world. Numerical climate modeling is a powerful tool to study climatology and to predict future changes. Despite the continuous improvement in numerical models, long-term precipitation prediction remains a challenge, especially at regional scales. To improve numerical simulations of precipitation, it is important to find out where the uncertainty in precipitation simulations comes from. There are two types of uncertainty in numerical model predictions. One is related to uncertainty in the input data, such as the model's boundary and initial conditions. These uncertainties would propagate to the final model outcomes even if the numerical model exactly replicated the true world. But a numerical model cannot exactly replicate the true world. Therefore, the other type of model uncertainty is related to the errors in the model physics, such as the parameterization of sub-grid-scale processes, i.e., given precise input conditions, how much error could be generated by the imprecise model itself. Here, we build two statistical models based on a neural network algorithm to predict long-term variation of precipitation over California: one uses "true world" information derived from observations, and the other uses "modeled world" information using model inputs and outputs from the North America Coordinated Regional Downscaling Project (NA CORDEX). We derive multiple climate feature metrics as the predictors for the statistical model to represent the impact of global climate on local hydrology, and include topography as a predictor to represent the local control. We first compare the predictors between the true world and the modeled world to determine the errors contained in the input data. By perturbing the predictors in the statistical model, we estimate how much uncertainty in the model's final outcomes is accounted for by each predictor. By comparing the statistical models derived from true-world and modeled-world information, we assess the errors lying in the physics of the numerical models. This work provides unique insight into the performance of numerical climate models and can be used to guide improvement of precipitation prediction.
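A much simplified sketch of this statistical-model-plus-perturbation idea follows: a small neural network maps climate feature metrics to precipitation, and permutation importance stands in for the predictor perturbation; the predictor names and synthetic data are hypothetical.

```python
# Sketch of the statistical-model idea: a small neural network maps climate
# feature metrics (names hypothetical) to precipitation, and permuting each
# predictor estimates how much of the prediction it accounts for.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.normal(size=n),   # e.g. an ENSO-like index (assumed predictor)
    rng.normal(size=n),   # e.g. a moisture-transport metric (assumed predictor)
    rng.normal(size=n),   # e.g. standardized elevation (assumed predictor)
])
y = 1.5 * X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(net, X_te, y_te, n_repeats=20, random_state=0)
for name, score in zip(["enso_index", "moisture_flux", "elevation"], imp.importances_mean):
    print(f"{name:14s} importance = {score:.3f}")
```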
Prediction of Antibacterial Activity from Physicochemical Properties of Antimicrobial Peptides
Melo, Manuel N.; Ferre, Rafael; Feliu, Lídia; Bardají, Eduard; Planas, Marta; Castanho, Miguel A. R. B.
2011-01-01
Consensus is gathering that antimicrobial peptides that exert their antibacterial action at the membrane level must reach a local concentration threshold to become active. Studies of peptide interaction with model membranes do identify such disruptive thresholds but demonstrations of the possible correlation of these with the in vivo onset of activity have only recently been proposed. In addition, such thresholds observed in model membranes occur at local peptide concentrations close to full membrane coverage. In this work we fully develop an interaction model of antimicrobial peptides with biological membranes; by exploring the consequences of the underlying partition formalism we arrive at a relationship that provides antibacterial activity prediction from two biophysical parameters: the affinity of the peptide to the membrane and the critical bound peptide to lipid ratio. A straightforward and robust method to implement this relationship, with potential application to high-throughput screening approaches, is presented and tested. In addition, disruptive thresholds in model membranes and the onset of antibacterial peptide activity are shown to occur over the same range of locally bound peptide concentrations (10 to 100 mM), which conciliates the two types of observations. PMID:22194847
Nanoscale hotspots due to nonequilibrium thermal transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sinha, Sanjiv; Goodson, Kenneth E.
2004-01-01
Recent experimental and modeling efforts have been directed towards the issue of temperature localization and hotspot formation in the vicinity of nanoscale heat generating devices. The nonequilibrium transport conditions which develop around these nanoscale devices result in elevated temperatures near the heat source which cannot be predicted by continuum diffusion theory. Efforts to determine the severity of this temperature localization phenomenon in silicon devices near and above room temperature are of technological importance to the development of microelectronics and other nanotechnologies. In this work, we have developed a new modeling tool in order to explore the magnitude of the additional thermal resistance which forms around nanoscale hotspots over temperatures from 100 to 1000 K. The models are based on a two-fluid approximation in which thermal energy is transferred between "stationary" optical phonons and fast propagating acoustic phonon modes. The results of the model have shown excellent agreement with experimental results of localized hotspots in silicon at lower temperatures. The model predicts that the effect of added thermal resistance due to the nonequilibrium phonon distribution is greatest at lower temperatures, but is maintained out to temperatures of 1000 K. The resistance predicted by the numerical code can be easily integrated with continuum models in order to predict the temperature distribution around nanoscale heat sources with improved accuracy. Additional research efforts also focused on measurements of the thermal resistance of silicon thin films at higher temperatures, with a focus on polycrystalline silicon. This work was intended to provide much needed experimental data on the thermal transport properties of micro- and nanoscale devices built with this material. Initial experiments have shown that the exposure of polycrystalline silicon to high temperatures may induce recrystallization and radically increase the thermal transport properties at room temperature. In addition, the defect density was observed to play a major role in the rate of change in thermal resistivity as a function of temperature.
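In the spirit of the two-fluid approximation described above, the sketch below integrates a lumped two-temperature energy balance in which a heated optical-phonon population exchanges energy with acoustic modes; all parameter values are illustrative placeholders, not fitted silicon properties.

```python
# Sketch of a two-fluid (two-temperature) energy-exchange model: heat is deposited
# into a slowly propagating optical-phonon population, which exchanges energy with
# the acoustic modes at a coupling rate G. All parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

C_o, C_a = 0.3e6, 1.3e6     # heat capacities, J m^-3 K^-1 (illustrative)
G = 1.0e17                  # optical-acoustic coupling, W m^-3 K^-1 (illustrative)
Q = 1.0e18                  # volumetric heating in the hotspot, W m^-3
tau_a = 5e-9                # crude relaxation of acoustic modes to the bath, s
T_bath = 300.0

def rhs(t, T):
    T_o, T_a = T
    dT_o = (Q - G * (T_o - T_a)) / C_o
    dT_a = (G * (T_o - T_a)) / C_a - (T_a - T_bath) / tau_a
    return [dT_o, dT_a]

sol = solve_ivp(rhs, (0, 50e-9), [T_bath, T_bath], method="BDF")
print(f"optical/acoustic temperatures at 50 ns: {sol.y[0, -1]:.1f} K / {sol.y[1, -1]:.1f} K")
```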
Nowcasting Ground Magnetic Perturbations with the Space Weather Modeling Framework
NASA Astrophysics Data System (ADS)
Welling, D. T.; Toth, G.; Singer, H. J.; Millward, G. H.; Gombosi, T. I.
2015-12-01
Predicting ground-based magnetic perturbations is a critical step towards specifying and predicting geomagnetically induced currents (GICs) in high voltage transmission lines. Currently, the Space Weather Modeling Framework (SWMF), a flexible modeling framework for simulating the multi-scale space environment, is being transitioned from research to operational use (R2O) by NOAA's Space Weather Prediction Center. Upon completion of this transition, the SWMF will provide localized dB/dt predictions using real-time solar wind observations from L1 and the F10.7 proxy for EUV as model input. This presentation describes the operational SWMF setup and summarizes the changes made to the code to enable R2O progress. The framework's algorithm for calculating ground-based magnetometer observations will be reviewed. Metrics from data-model comparisons will be reviewed to illustrate predictive capabilities. Early data products, such as the regional K-index and grids of virtual magnetometer stations, will be presented. Finally, early successes will be shared, including the code's ability to reproduce the recent March 2015 St. Patrick's Day Storm.
Control of the NASA Langley 16-Foot Transonic Tunnel with the Self-Organizing Feature Map
NASA Technical Reports Server (NTRS)
Motter, Mark A.
1998-01-01
A predictive, multiple model control strategy is developed based on an ensemble of local linear models of the nonlinear system dynamics for a transonic wind tunnel. The local linear models are estimated directly from the weights of a Self-Organizing Feature Map (SOFM). Local linear modeling of nonlinear autonomous systems with the SOFM is extended to a control framework where the modeled system is nonautonomous, driven by an exogenous input. This extension to a control framework is based on the consideration of a finite number of subregions in the control space. Multiple self-organizing feature maps collectively model the global response of the wind tunnel to a finite set of representative prototype controls. These prototype controls partition the control space and incorporate experimental knowledge gained from decades of operation. Each SOFM models the combination of the tunnel with one of the representative controls, over the entire range of operation. The SOFM-based linear models are used to predict the tunnel response to a larger family of control sequences which are clustered on the representative prototypes. The control sequence which corresponds to the prediction that best satisfies the requirements on the system output is applied as the external driving signal. Each SOFM provides a codebook representation of the tunnel dynamics corresponding to a prototype control. Different dynamic regimes are organized into topological neighborhoods where the adjacent entries in the codebook represent the minimization of a similarity metric, which is the essence of the self-organizing feature of the map. Thus, the SOFM is additionally employed to identify the local dynamical regime, and consequently implements a switching scheme that selects the best available model for the applied control. Experimental results of controlling the wind tunnel with the proposed method during operational runs, in which strict research requirements on the control of the Mach number were met, are presented. Comparison to similar runs under the same conditions, with the tunnel controlled by either the existing controller or an expert operator, indicates the superiority of the method.
Verma, Ruchi; Varshney, Grish C; Raghava, G P S
2010-06-01
The rate of human deaths due to malaria is increasing day by day. Thus the malaria-causing parasite Plasmodium falciparum (PF) remains a cause of concern. With the wealth of data now available, it is imperative to understand protein localization in order to gain deeper insight into their functional roles. In this manuscript, an attempt has been made to develop a prediction method for the localization of mitochondrial proteins. In this study, we describe a method for predicting mitochondrial proteins of the malaria parasite using a machine-learning technique. All models were trained and tested on 175 proteins (40 mitochondrial and 135 non-mitochondrial proteins) and evaluated using five-fold cross-validation. We developed a Support Vector Machine (SVM) model for predicting mitochondrial proteins of P. falciparum using amino acid and dipeptide composition and achieved maximum MCC of 0.38 and 0.51, respectively. In this study, split amino acid composition (SAAC) is used, where the composition of the N-terminus, the C-terminus, and the rest of the protein is computed separately. The performance of the SVM model improved significantly, from MCC 0.38 to 0.73, when SAAC instead of simple amino acid composition was used as input. In addition, an SVM model has been developed using the composition of the PSSM profile, with MCC 0.75 and accuracy 91.38%. We achieved a maximum MCC of 0.81 with 92% accuracy using a hybrid model that combines the PSSM profile and SAAC. When evaluated on an independent dataset, our method performs better than existing methods. A web server, PFMpred, has been developed for predicting mitochondrial proteins of malaria parasites (http://www.imtech.res.in/raghava/pfmpred/).
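The split amino acid composition (SAAC) encoding plus an SVM can be sketched as follows; the 25-residue split length, the toy sequences and the SVM settings are assumptions for illustration, not the published PFMpred configuration.

```python
# Sketch of split amino acid composition (SAAC) features with an SVM: composition
# is computed separately for the N-terminus, the C-terminus and the remainder,
# then concatenated. Split length, toy data and SVM settings are assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.metrics import matthews_corrcoef, make_scorer

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    seq = seq or "A"
    return np.array([seq.count(a) / len(seq) for a in AA])

def saac_features(seq, split=25):
    return np.concatenate([composition(seq[:split]),
                           composition(seq[split:-split]),
                           composition(seq[-split:])])

# toy data: random sequences with a weak compositional signal in the positives
rng = np.random.default_rng(2)
def random_seq(bias):
    probs = np.full(20, 1 / 20.0)
    probs[AA.index("K")] += bias
    probs /= probs.sum()
    return "".join(rng.choice(list(AA), size=120, p=probs))

seqs = [random_seq(0.05) for _ in range(40)] + [random_seq(0.0) for _ in range(135)]
labels = np.array([1] * 40 + [0] * 135)

X = np.array([saac_features(s) for s in seqs])
mcc = cross_val_score(SVC(kernel="rbf", C=10, class_weight="balanced"),
                      X, labels, cv=5, scoring=make_scorer(matthews_corrcoef))
print("5-fold MCC:", np.round(mcc, 2))
```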
B220 analysis with the local lymph node assay: proposal for a more flexible prediction model.
Betts, Catherine J; Dearman, Rebecca J; Kimber, Ian; Ryan, Cindy A; Gerberick, G Frank; Lalko, Jon; Api, Anne Marie
2007-01-01
The mouse local lymph node assay (LLNA) has been developed and validated for the identification of chemicals that have the potential to induce skin sensitisation. In common with other predictive test methods the accuracy of the LLNA is not absolute and experience has revealed that a few chemicals, including for instance a minority of skin irritants, may elicit false-positive reactions in the assay. To improve further the performance of the LLNA, and to eliminate or reduce false-positives, there has been interest in an adjunct method in which the ability of chemicals to cause increases in the frequency of B220(+) lymphocytes in skin-draining lymph nodes is measured. Previous studies suggest that the use of B220 analyses aligned with the standard LLNA may serve to distinguish further between contact allergens and skin irritants. In the original predictive model, chemicals were regarded as being skin sensitisers if they were able to induce a 1.25-fold or greater increase in the percentage of B220(+) cells within lymph nodes compared with concurrent vehicle controls. Although this first prediction model has proven useful, in the light of more recent experience, and specifically as a consequence of some variability observed in the frequency of B220(+) lymphocytes in nodes taken from vehicle control-treated animals, it is timely now to reconsider and refine the model. As a result a new prediction model is proposed in which reliance on the use of absolute thresholds is reduced, and in which small changes in control values can be better accommodated. (c) 2007 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vainshtein, Jeffrey M., E-mail: jvainsh@med.umich.edu; Schipper, Matthew; Zalupski, Mark M.
2013-05-01
Purpose: Although established in the postresection setting, the prognostic value of carbohydrate antigen 19-9 (CA19-9) in unresectable locally advanced pancreatic cancer (LAPC) is less clear. We examined the prognostic utility of CA19-9 in patients with unresectable LAPC treated on a prospective trial of intensity modulated radiation therapy (IMRT) dose escalation with concurrent gemcitabine. Methods and Materials: Forty-six patients with unresectable LAPC were treated at the University of Michigan on a phase 1/2 trial of IMRT dose escalation with concurrent gemcitabine. CA19-9 was obtained at baseline and during routine follow-up. Cox models were used to assess the effect of baseline factors on freedom from local progression (FFLP), distant progression (FFDP), progression-free survival (PFS), and overall survival (OS). Stepwise forward regression was used to build multivariate predictive models for each endpoint. Results: Thirty-eight patients were eligible for the present analysis. On univariate analysis, baseline CA19-9 and age predicted OS, CA19-9 at baseline and 3 months predicted PFS, gross tumor volume (GTV) and black race predicted FFLP, and CA19-9 at 3 months predicted FFDP. On stepwise multivariate regression modeling, baseline CA19-9, age, and female sex predicted OS; baseline CA19-9 and female sex predicted both PFS and FFDP; and GTV predicted FFLP. Patients with baseline CA19-9 ≤90 U/mL had improved OS (median 23.0 vs 11.1 months, HR 2.88, P<.01) and PFS (14.4 vs 7.0 months, HR 3.61, P=.001). CA19-9 progression over 90 U/mL was prognostic for both OS (HR 3.65, P=.001) and PFS (HR 3.04, P=.001), and it was a stronger predictor of death than either local progression (HR 1.46, P=.42) or distant progression (HR 3.31, P=.004). Conclusions: In patients with unresectable LAPC undergoing definitive chemoradiation therapy, baseline CA19-9 was independently prognostic even after established prognostic factors were controlled for, whereas CA19-9 progression strongly predicted disease progression and death. Future trials should stratify by baseline CA19-9 and incorporate CA19-9 progression as a criterion for progressive disease.
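As an illustration of the Cox modeling used above, the sketch below fits a proportional-hazards model of overall survival on baseline CA19-9 and age with the lifelines package; the data frame is synthetic and the covariate handling is a simplification of the trial analysis.

```python
# Sketch of a Cox proportional-hazards model of overall survival with baseline
# CA19-9 and age as covariates. Uses `lifelines`; the data frame is synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({
    "log_ca19_9": rng.normal(4.5, 1.5, n),        # log(U/mL) at baseline (synthetic)
    "age":        rng.normal(63, 8, n),
})
# synthetic survival times whose hazard rises with baseline CA19-9
hazard = np.exp(0.4 * (df["log_ca19_9"] - 4.5))
df["months"] = rng.exponential(18.0 / hazard)
df["death"] = (rng.uniform(size=n) < 0.8).astype(int)   # ~20% censored

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="death")
cph.print_summary()
```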
Model-based correction for local stress-induced overlay errors
NASA Astrophysics Data System (ADS)
Stobert, Ian; Krishnamurthy, Subramanian; Shi, Hongbo; Stiffler, Scott
2018-03-01
Manufacturing embedded DRAM deep trench capacitors can involve etching very deep holes into silicon wafers [1]. Due to various design constraints, these holes may not be uniformly distributed across the wafer surface. Some wafer processing steps for these trenches result in stress effects which can distort the silicon wafer in a manner that creates localized alignment issues between the trenches and the structures built above them on the wafer. In this paper, we describe a method to model these localized silicon distortions for complex layouts involving billions of deep trench structures. We describe wafer metrology techniques and data which have been used to verify the stress distortion model accuracy. We also provide a description of how this kind of model can be used to manipulate the polygons in the mask tape-out flow to compensate for predicted localized misalignments between design shapes from a deep trench mask and subsequent masks.
One way coupling of CMAQ and a road source dispersion model for fine scale air pollution predictions
Beevers, Sean D.; Kitwiroon, Nutthida; Williams, Martin L.; Carslaw, David C.
2012-01-01
In this paper we have coupled the CMAQ and ADMS air quality models to predict hourly concentrations of NOX, NO2 and O3 for London at a spatial scale of 20 m × 20 m. Model evaluation has demonstrated reasonable agreement with measurements from 80 monitoring sites in London. For NO2 the model evaluation statistics gave 73% of the hourly concentrations within a factor of two of observations, a mean bias of −4.7 ppb and normalised mean bias of −0.17, a RMSE value of 17.7 and an r value of 0.58. The equivalent results for O3 were 61% (FAC2), 2.8 ppb (MB), 0.15 (NMB), 12.1 (RMSE) and 0.64 (r). Analysis of the errors in the model predictions by hour of the week showed the need for improvements in predicting the magnitude of road transport related NOX emissions as well as the hourly emissions scaling in the model. These findings are consistent with recent evidence of UK road transport NOX emissions, reported elsewhere. The predictions of wind speed using the WRF model also influenced the model results and contributed to the daytime over-prediction of NOX concentrations at the central London background site at Kensington and Chelsea. An investigation of the use of a simple NO–NO2–O3 chemistry scheme showed good performance close to road sources, and this is also consistent with previous studies. The coupling of the two models raises an issue of emissions double counting. Here, we have put forward a pragmatic solution to this problem with the result that a median double counting error of 0.42% exists across 39 roadside sites in London. Finally, whilst the model can be improved, the current results show promise and demonstrate that the use of a combination of regional scale and local scale models can provide a practical modelling tool for policy development at intergovernmental, national and local authority level, as well as for use in epidemiological studies. PMID:23471172
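The evaluation statistics quoted above (FAC2, MB, NMB, RMSE and r) can be computed from paired observed and modelled hourly values as in the short sketch below; the example concentrations are illustrative.

```python
# Sketch of the evaluation statistics quoted above (FAC2, mean bias, normalised
# mean bias, RMSE and Pearson r) for paired observed and modelled hourly values.
import numpy as np

def evaluate(obs, mod):
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    fac2 = np.mean((mod >= 0.5 * obs) & (mod <= 2.0 * obs))
    mb   = np.mean(mod - obs)
    nmb  = np.sum(mod - obs) / np.sum(obs)
    rmse = np.sqrt(np.mean((mod - obs) ** 2))
    r    = np.corrcoef(obs, mod)[0, 1]
    return {"FAC2": fac2, "MB": mb, "NMB": nmb, "RMSE": rmse, "r": r}

obs = np.array([20.0, 35.0, 18.0, 42.0, 27.0])   # ppb, illustrative
mod = np.array([16.0, 30.0, 25.0, 38.0, 22.0])
print({k: round(v, 2) for k, v in evaluate(obs, mod).items()})
```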
NASA Astrophysics Data System (ADS)
Wylie, Scott; Watson, Simon
2013-04-01
Any past, current or projected future wind farm developments are highly dependent on localised climatic conditions. For example the mean wind speed, one of the main factors in assessing the economic feasibility of a wind farm, can vary significantly over length scales no greater than the size of a typical wind farm. Any additional heterogeneity at a potential site, such as forestry, can affect the wind resource further, not to mention the additional difficulty of installation. If a wind farm is sited in an environmentally sensitive area then the ability to predict the wind farm performance and possible impacts on the important localised climatic conditions is of increased importance. Siting of wind farms in environmentally sensitive areas is not uncommon, such as areas of peat-land as in this example. Areas of peat-land are important sinks for carbon in the atmosphere but their ability to sequester carbon is highly dependent on the local climatic conditions. An operational wind farm's impact on such an area was investigated using CFD. Validation of the model outputs was carried out using field measurements from three automatic weather stations (AWS) located throughout the site. The study focuses on validation of both wind speed and turbulence measurements, whilst also assessing the model's ability to predict wind farm performance. The use of CFD to model the variation in wind speed over heterogeneous terrain, including wind turbine effects, is increasing in popularity. Encouraging results have increased confidence in the performance of CFD in complex terrain with features such as steep slopes and forests, which are not well modelled by the widely used linear models such as WAsP and MS-Micro. Using concurrent measurements from three stationary AWS across the wind farm will allow detailed validation of the model-predicted flow characteristics, whilst aggregated power output information will allow an assessment of how accurately the model setup can predict wind farm performance. Given the influence of the local climatic conditions on the peat-land's ability to sequester carbon, accurate predictions of the local wind and turbulence features will allow us to quantify any possible wind farm influences. This work was carried out using the commercially available Reynolds Averaged Navier-Stokes (RANS) CFD package ANSYS CFX. Utilising the Windmodeller add-on in CFX, a series of simulations were carried out to assess wind flow interactions through and around the wind farm, incorporating features such as terrain, forestry and rotor wake interactions. Particular attention was paid to forestry effects, as the AWS are located in the vicinity of forestry. Different Leaf Area Densities (LAD) were tested to assess how sensitive the model's output was to this change.
Local modelling of land consumption in Germany with RegioClust
NASA Astrophysics Data System (ADS)
Hagenauer, Julian; Helbich, Marco
2018-03-01
Germany is experiencing extensive land consumption. This necessitates local models to understand actual and future land consumption patterns. This research examined land consumption rates at the municipality level in Germany for the period 2000-10 and predicted rates for 2010-20. For this purpose, RegioClust, an algorithm that combines hierarchical clustering and regression analysis to identify regions with similar relationships between land consumption and its drivers, was developed. The performance of RegioClust was compared against geographically weighted regression (GWR). Distinct spatially varying relationships across regions emerged, with population density suggested as the central driver. Although both RegioClust and GWR predicted an increase in land consumption rates for east Germany for 2010-20, only RegioClust forecasts a decline for west Germany. In conclusion, both models predict for 2010-20 a rate of land consumption that suggests that the policy objective of reducing land consumption to 30 ha per day by 2020 will not be achieved. Policymakers are advised to take action and revise existing planning strategies to counteract this development.
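A much simplified stand-in for the clustering-plus-regression idea is sketched below: municipalities are grouped by hierarchical clustering of their coordinates, and a separate regression of land-consumption rate on population density is fitted per group. This is not the RegioClust algorithm itself, and the column names and synthetic data are hypothetical.

```python
# Simplified stand-in for clustering-plus-regression: hierarchical clustering of
# municipality coordinates, then a per-cluster regression of land-consumption
# rate on population density. Illustrative only, not the RegioClust algorithm.
import numpy as np
import pandas as pd
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "x": rng.uniform(0, 100, n), "y": rng.uniform(0, 100, n),   # coordinates
    "pop_density": rng.lognormal(5, 1, n),
})
df["land_rate"] = np.where(df["x"] < 50,                         # two synthetic regimes
                           0.002 * df["pop_density"],
                           0.0005 * df["pop_density"]) + rng.normal(0, 0.1, n)

df["cluster"] = AgglomerativeClustering(n_clusters=4).fit_predict(df[["x", "y"]])
for c, grp in df.groupby("cluster"):
    slope = LinearRegression().fit(grp[["pop_density"]], grp["land_rate"]).coef_[0]
    print(f"cluster {c}: land-rate slope vs population density = {slope:.5f}")
```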
NASA Astrophysics Data System (ADS)
Doytchinova, Irini A.; Walshe, Valerie; Borrow, Persephone; Flower, Darren R.
2005-03-01
The affinities of 177 nonameric peptides binding to the HLA-A*0201 molecule were measured using a FACS-based MHC stabilisation assay and analysed using chemometrics. Their structures were described by global and local descriptors, and QSAR models were derived by genetic algorithm, stepwise regression and PLS. The global molecular descriptors included molecular connectivity χ indices, κ shape indices, E-state indices, molecular properties such as molecular weight and log P, and three-dimensional descriptors such as polarizability, surface area and volume. The local descriptors were of two types. The first used a binary string to indicate the presence of each amino acid type at each position of the peptide. The second was also position-dependent but used five z-scales to describe the main physicochemical properties of the amino acids forming the peptides. The models were developed using a representative training set of 131 peptides and validated using an independent test set of 46 peptides. It was found that the global descriptors could neither explain the variance in the training set nor predict the affinities of the test set accurately. Both types of local descriptors gave QSAR models with better explained variance and predictive ability. The results suggest that, in their interactions with the MHC molecule, the peptide acts as a complicated ensemble of multiple amino acids mutually potentiating each other.
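The first type of local descriptor (a binary indicator of each amino acid type at each of the nine positions) combined with PLS can be sketched as below; the toy peptides, affinities and the number of PLS components are assumptions for illustration.

```python
# Sketch of position-dependent binary descriptors (one indicator per amino acid
# type at each of the nine positions) combined with PLS regression. The toy
# peptides, affinities and number of PLS components are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

AA = "ACDEFGHIKLMNPQRSTVWY"

def binary_encode(peptide):
    """9-mer -> 9 x 20 binary indicator vector."""
    vec = np.zeros(9 * len(AA))
    for pos, aa in enumerate(peptide):
        vec[pos * len(AA) + AA.index(aa)] = 1.0
    return vec

rng = np.random.default_rng(5)
peptides = ["".join(rng.choice(list(AA), 9)) for _ in range(131)]
# toy affinities with a positional preference (e.g. Leu at anchor position 2)
y = np.array([6.0 + 1.5 * (p[1] == "L") + 0.2 * rng.normal() for p in peptides])

X = np.array([binary_encode(p) for p in peptides])
pls = PLSRegression(n_components=5).fit(X, y)
pred = pls.predict(X).ravel()
print("training-set correlation:", round(np.corrcoef(pred, y)[0, 1], 2))
```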
Local and Regional Determinants of an Uncommon Functional Group in Freshwater Lakes and Ponds
McCann, Michael James
2015-01-01
A combination of local and regional factors and stochastic forces is expected to determine the occurrence of species and the structure of communities. However, in most cases, our understanding is incomplete, with large amounts of unexplained variation. Using functional groups rather than individual species may help explain the relationship between community composition and conditions. In this study, I used survey data from freshwater lakes and ponds to understand factors that determine the presence of the floating plant functional group in the northeast United States. Of the 176 water bodies surveyed, 104 (59.1%) did not contain any floating plant species. The occurrence of this functional group was largely determined by local abiotic conditions, which were spatially autocorrelated across the region. A model predicting the presence of the floating plant functional group performed similarly to the best species-specific models. Using a permutation test, I also found that the observed prevalence of floating plants is no different than expected by random assembly from a species pool of its size. These results suggest that the size of the species pool interacts with local conditions in determining the presence of a functional group. Nevertheless, a large amount of unexplained variation remains, attributable to either stochastic species occurrence or incomplete predictive models. The simple permutation approach in this study can be extended to test alternative models of community assembly. PMID:26121636
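A sketch of the permutation-test logic follows: random communities are assembled from a regional species pool and the simulated prevalence of the floating-plant group is compared with the observed 72 of 176 water bodies; the pool size and per-lake richness are illustrative assumptions.

```python
# Sketch of the permutation test: assemble random communities from the regional
# species pool and compare the simulated prevalence of the floating-plant group
# with the observed prevalence. Pool size and per-lake richness are assumed.
import numpy as np

rng = np.random.default_rng(6)
pool_size, n_floating = 60, 5          # hypothetical regional pool composition
richness = 8                           # hypothetical species richness per lake
n_lakes = 176                          # lakes surveyed (from the abstract)
observed_occupancy = 72 / 176          # lakes with floating plants (~40.9%)

sims = []
for _ in range(5000):
    # number of floating-plant species drawn per lake (sampling without replacement)
    draws = rng.hypergeometric(n_floating, pool_size - n_floating, richness, size=n_lakes)
    sims.append(np.mean(draws > 0))
sims = np.array(sims)

p = np.mean(np.abs(sims - sims.mean()) >= abs(observed_occupancy - sims.mean()))
print(f"expected occupancy {sims.mean():.2f}, observed {observed_occupancy:.2f}, two-sided p ~ {p:.2f}")
```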
Models of recurrent strike-slip earthquake cycles and the state of crustal stress
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory A.; Raefsky, Arthur; Mulligan, Stephanie G.
1991-01-01
Numerical models of the strike-slip earthquake cycle, assuming a viscoelastic asthenosphere coupling model, are examined. The time-dependent simulations incorporate a stress-driven fault, which leads to tectonic stress fields and earthquake recurrence histories that are mutually consistent. Single-fault simulations with constant far-field plate motion lead to a nearly periodic earthquake cycle and a distinctive spatial distribution of crustal shear stress. The predicted stress distribution includes a local minimum in stress at depths less than typical seismogenic depths. The width of this stress 'trough' depends on the magnitude of crustal stress relative to asthenospheric drag stresses. The models further predict a local near-fault stress maximum at greater depths, sustained by the cyclic transfer of strain from the elastic crust to the ductile asthenosphere. Models incorporating both low-stress and high-stress fault strength assumptions are examined, under Newtonian and non-Newtonian rheology assumptions. Model results suggest a preference for low-stress (a shear stress level of about 10 MPa) fault models, in agreement with previous estimates based on heat flow measurements and other stress indicators.
Development of a flood-induced health risk prediction model for Africa
NASA Astrophysics Data System (ADS)
Lee, D.; Block, P. J.
2017-12-01
Globally, many floods occur in developing or tropical regions where the impact on public health is substantial, including death and injury, drinking water, endemic disease, and so on. Although these flood impacts on public health have been investigated, the integrated management of floods and flood-induced health risks remains technically and institutionally limited. Specifically, while the use of climatic and hydrologic forecasts for disaster management has been highlighted, analogous predictions for forecasting the magnitude and impact of health risks are lacking, as is the infrastructure for health early warning systems, particularly in developing countries. In this study, we develop a flood-induced health risk prediction model for African regions using season-ahead flood predictions with climate drivers and a variety of physical and socio-economic information, such as local hazard, exposure, resilience, and health vulnerability indicators. Skillful prediction of floods and flood-induced health risks can contribute to practical pre- and post-disaster responses at both local and global scales, and may eventually be integrated into multi-hazard early warning systems for informed advance planning and management. This is especially attractive for areas with limited observations and/or little capacity to develop flood-induced health risk warning systems.
Beam-tracing model for predicting sound fields in rooms with multilayer bounding surfaces
NASA Astrophysics Data System (ADS)
Wareing, Andrew; Hodgson, Murray
2005-10-01
This paper presents the development of a wave-based room-prediction model for predicting steady-state sound fields in empty rooms with specularly reflecting, multilayer surfaces. A triangular beam-tracing model with phase, and a transfer-matrix approach to model the surfaces, were involved. Room surfaces were modeled as multilayers of fluid, solid, or porous materials. Biot theory was used in the transfer-matrix formulation of the porous layer. The new model consisted of the transfer-matrix model integrated into the beam-tracing algorithm. The transfer-matrix model was validated by comparing predictions with those by theory, and with experiment. The test surfaces were a glass plate, double drywall panels, double steel panels, a carpeted floor, and a suspended acoustical ceiling. The beam-tracing model was validated in the cases of three idealized room configurations (a small office, a corridor, and a small industrial workroom) with simple boundary conditions. The number of beams, the reflection order, and the frequency resolution required to obtain accurate results were investigated. Beam-tracing predictions were compared with those by a method-of-images model with phase. The model will be used to study sound fields in rooms with local- or extended-reaction multilayer surfaces.
Prediction of a Densely Loaded Particle-Laden Jet using a Euler-Lagrange Dense Spray Model
NASA Astrophysics Data System (ADS)
Pakseresht, Pedram; Apte, Sourabh V.
2017-11-01
Modeling of a dense spray regime using an Euler-Lagrange discrete-element approach is challenging because of local high volume loading. A subgrid cluster of droplets can lead to locally high void fractions for the disperse phase. Under these conditions, spatio-temporal changes in the carrier phase volume fractions, which are commonly neglected in spray simulations in an Euler-Lagrange two-way coupling model, could become important. Accounting for the carrier phase volume fraction variations, leads to zero-Mach number, variable density governing equations. Using pressure-based solvers, this gives rise to a source term in the pressure Poisson equation and a non-divergence free velocity field. To test the validity and predictive capability of such an approach, a round jet laden with solid particles is investigated using Direct Numerical Simulation and compared with available experimental data for different loadings. Various volume fractions spanning from dilute to dense regimes are investigated with and without taking into account the volume displacement effects. The predictions of the two approaches are compared and analyzed to investigate the effectiveness of the dense spray model. Financial support was provided by National Aeronautics and Space Administration (NASA).
Resolution limits of ultrafast ultrasound localization microscopy
NASA Astrophysics Data System (ADS)
Desailly, Yann; Pierre, Juliette; Couture, Olivier; Tanter, Mickael
2015-11-01
As in other imaging methods based on waves, the resolution of ultrasound imaging is limited by the wavelength. However, the diffraction limit can be overcome by super-localizing single events from isolated sources. In recent years, we developed plane-wave ultrasound allowing frame rates up to 20 000 fps. Ultrafast processes such as rapid movement or disruption of ultrasound contrast agents (UCA) can thus be monitored, providing us with distinct punctual sources that could be localized beyond the diffraction limit. We previously showed experimentally that resolutions beyond λ/10 can be reached in ultrafast ultrasound localization microscopy (uULM) using a 128 transducer matrix in reception. Higher resolutions are theoretically achievable and the aim of this study is to predict the maximum resolution in uULM with respect to acquisition parameters (frequency, transducer geometry, sampling electronics). The accuracy of uULM is the error in the localization of a bubble, considered as a point source in a homogeneous medium. The proposed model consists of two steps: determining the timing accuracy of the microbubble echo in radiofrequency data, and then translating this timing accuracy into spatial accuracy. The simplified model predicts a maximum resolution of 40 μm for a 1.75 MHz transducer matrix composed of two rows of 64 elements. Experimental confirmation of the model was performed by flowing microbubbles within a 60 μm microfluidic channel and localizing their blinking under ultrafast imaging (500 Hz frame rate). The experimental resolution, determined as the standard deviation in the positioning of the microbubbles, was predicted within 6 μm (13%) of the theoretical values and followed the analytical relationship with respect to the number of elements and depth. Understanding the underlying physical principles determining the resolution of superlocalization will allow the optimization of the imaging setup for each organ. Ultimately, accuracies better than the size of capillaries are achievable at several centimeter depths.
NASA Astrophysics Data System (ADS)
Dekoninck, Luc; Botteldooren, Dick; Int Panis, Luc
2013-11-01
Several studies have shown that a significant amount of daily air pollution exposure, in particular Black Carbon (BC), is inhaled during trips. Assessing this contribution to exposure remains difficult because, on the one hand, local air pollution maps lack spatio-temporal resolution and, on the other hand, direct measurement of particulate matter concentrations remains expensive. This paper proposes to use in-traffic noise measurements in combination with geographical and meteorological information for predicting BC exposure during commuting trips. Mobile noise measurements are cheaper and easier to perform than mobile air pollution measurements and can easily be used in participatory sensing campaigns. The uniqueness of the proposed model lies in the choice of noise indicators that goes beyond the traditional overall A-weighted noise level used in previous work. Noise and BC exposures are both related to the traffic intensity but also to traffic speed and traffic dynamics. Inspired by theoretical knowledge on the emission of noise and BC, the low-frequency engine-related noise and the difference between high-frequency and low-frequency noise, which indicates the traffic speed, are introduced into the model. In addition, it is shown that splitting BC into a local and a background component significantly improves the model. The coefficients of the proposed model are extracted from 200 commuter bicycle trips. The predicted average exposure over a single trip correlates with measurements with a Pearson coefficient of 0.78 using only four parameters: the low frequency noise level, wind speed, the difference between high and low frequency noise and a street canyon index expressing local air pollution dispersion properties.
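A minimal sketch of the kind of four-parameter regression described above, written with scikit-learn on synthetic trip data; the column names and values are placeholders, not the study's dataset, and the real model also separates local and background BC components.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from scipy.stats import pearsonr

# Hypothetical trip-level table; column names are illustrative placeholders.
rng = np.random.default_rng(0)
trips = pd.DataFrame({
    "noise_low_freq_db": rng.uniform(55, 75, 200),        # engine-related low-frequency level
    "noise_high_minus_low_db": rng.uniform(-15, 5, 200),  # proxy for traffic speed
    "wind_speed_ms": rng.uniform(0, 8, 200),
    "canyon_index": rng.uniform(0, 1, 200),               # local dispersion proxy
    "log_bc_exposure": rng.normal(0, 1, 200),             # would be the measured BC per trip
})

X = trips[["noise_low_freq_db", "noise_high_minus_low_db", "wind_speed_ms", "canyon_index"]]
y = trips["log_bc_exposure"]

model = LinearRegression().fit(X, y)
r, _ = pearsonr(model.predict(X), y)   # the paper reports r of about 0.78 on real trips
print(dict(zip(X.columns, model.coef_)), r)
```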
Reef-coral refugia in a rapidly changing ocean.
Cacciapaglia, Chris; van Woesik, Robert
2015-06-01
This study sought to identify climate-change thermal-stress refugia for reef corals in the Indian and Pacific Oceans. A species distribution modeling approach was used to identify refugia for 12 coral species that differed considerably in their local response to thermal stress. We hypothesized that the local response of coral species to thermal stress might be similarly reflected as a regional response to climate change. We assessed the contemporary geographic range of each species and determined their temperature and irradiance preferences using a k-fold algorithm to randomly select training and evaluation sites. That information was applied to downscaled outputs of global climate models to predict where each species is likely to exist by the year 2100. Our model was run with and without a 1°C capacity to adapt to the rising ocean temperature. The results show a positive exponential relationship between the current area of habitat that coral species occupy and the predicted area of habitat that they will occupy by 2100. There was considerable decoupling between scales of response, however, and with further ocean warming some 'winners' at local scales will likely become 'losers' at regional scales. We predicted that nine of the 12 species examined will lose 24-50% of their current habitat. Most reductions are predicted to occur between the latitudes 5-15°, in both hemispheres. Yet when we modeled a 1°C capacity to adapt, two ubiquitous species, Acropora hyacinthus and Acropora digitifera, were predicted to retain much of their current habitat. By contrast, the thermally tolerant Porites lobata is expected to increase its current distribution by 14%, particularly southward along the east and west coasts of Australia. Five areas were identified as Indian Ocean refugia, and seven areas were identified as Pacific Ocean refugia for reef corals under climate change. All 12 of these reef-coral refugia deserve high-conservation status. © 2015 John Wiley & Sons Ltd.
Physical and mathematical modelling of ladle metallurgy operations. [steelmaking
NASA Technical Reports Server (NTRS)
El-Kaddah, N.; Szekely, J.
1982-01-01
Experimental measurements are reported on the velocity fields and turbulence parameters in a water model of an argon-stirred ladle. These velocity measurements are complemented by direct heat transfer measurements, obtained by studying the rate at which ice rods immersed into the system melt at various locations. The theoretical work involved the use of the turbulent Navier-Stokes equations in conjunction with the k-epsilon model to predict the local velocity fields and the maps of the turbulence parameters. Theoretical predictions were in reasonably good agreement with the experimentally measured velocity fields; the agreement between the predicted and the measured turbulence parameters was less perfect, but still satisfactory. The implications of these findings for the modelling of ladle metallurgical operations are discussed.
Modeling Brain Dynamics in Brain Tumor Patients Using the Virtual Brain.
Aerts, Hannelore; Schirner, Michael; Jeurissen, Ben; Van Roost, Dirk; Achten, Eric; Ritter, Petra; Marinazzo, Daniele
2018-01-01
Presurgical planning for brain tumor resection aims at delineating eloquent tissue in the vicinity of the lesion to spare during surgery. To this end, noninvasive neuroimaging techniques such as functional MRI and diffusion-weighted imaging fiber tracking are currently employed. However, taking into account this information is often still insufficient, as the complex nonlinear dynamics of the brain impede straightforward prediction of functional outcome after surgical intervention. Large-scale brain network modeling carries the potential to bridge this gap by integrating neuroimaging data with biophysically based models to predict collective brain dynamics. As a first step in this direction, an appropriate computational model has to be selected, after which suitable model parameter values have to be determined. To this end, we simulated large-scale brain dynamics in 25 human brain tumor patients and 11 human control participants using The Virtual Brain, an open-source neuroinformatics platform. Local and global model parameters of the Reduced Wong-Wang model were individually optimized and compared between brain tumor patients and control subjects. In addition, the relationship between model parameters and structural network topology and cognitive performance was assessed. Results showed (1) significantly improved prediction accuracy of individual functional connectivity when using individually optimized model parameters; (2) local model parameters that can differentiate between regions directly affected by a tumor, regions distant from a tumor, and regions in a healthy brain; and (3) interesting associations between individually optimized model parameters and structural network topology and cognitive performance.
EnKF with closed-eye period - bridging intermittent model structural errors in soil hydrology
NASA Astrophysics Data System (ADS)
Bauser, Hannes H.; Jaumann, Stefan; Berg, Daniel; Roth, Kurt
2017-04-01
The representation of soil water movement exposes uncertainties in all model components, namely dynamics, forcing, subscale physics and the state itself. Especially model structural errors in the description of the dynamics are difficult to represent and can lead to an inconsistent estimation of the other components. We address the challenge of a consistent aggregation of information for a manageable specific hydraulic situation: a 1D soil profile with TDR-measured water contents during a time period of less than 2 months. We assess the uncertainties for this situation and detect initial condition, soil hydraulic parameters, small-scale heterogeneity, upper boundary condition, and (during rain events) the local equilibrium assumption by the Richards equation as the most important ones. We employ an iterative Ensemble Kalman Filter (EnKF) with an augmented state. Based on a single rain event, we are able to reduce all uncertainties directly, except for the intermittent violation of the local equilibrium assumption. We detect these times by analyzing the temporal evolution of estimated parameters. By introducing a closed-eye period - during which we do not estimate parameters, but only guide the state based on measurements - we can bridge these times. The introduced closed-eye period ensured constant parameters, suggesting that they resemble the believed true material properties. The closed-eye period improves predictions during periods when the local equilibrium assumption is met, but consequently worsens predictions when the assumption is violated. Such a prediction requires a description of the dynamics during local non-equilibrium phases, which remains an open challenge.
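The closed-eye idea can be illustrated with a stochastic EnKF analysis step on an augmented state, where the parameter block of the Kalman gain is simply zeroed while the closed-eye flag is set. This is a schematic NumPy sketch, not the authors' iterative filter, and the dimensions and noise levels are arbitrary.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H, n_param, closed_eye=False):
    """Stochastic EnKF analysis for an augmented state [water contents, parameters].
    ensemble: (n_ens, n_state + n_param); H: linear observation operator.
    During a closed-eye period only the state block is updated; the trailing
    n_param entries (soil hydraulic parameters) are left untouched."""
    n_ens, _ = ensemble.shape
    X = ensemble - ensemble.mean(axis=0)                    # ensemble anomalies
    Y = X @ H.T                                             # anomalies in observation space
    P_yy = Y.T @ Y / (n_ens - 1) + obs_err_var * np.eye(len(obs))
    P_xy = X.T @ Y / (n_ens - 1)
    K = P_xy @ np.linalg.inv(P_yy)                          # Kalman gain
    if closed_eye:
        K[-n_param:, :] = 0.0                               # freeze the parameter update
    perturbed_obs = obs + np.sqrt(obs_err_var) * np.random.randn(n_ens, len(obs))
    innovations = perturbed_obs - ensemble @ H.T
    return ensemble + innovations @ K.T

# Toy usage: 3 TDR water contents + 2 parameters, 50 members, observe the water contents
rng = np.random.default_rng(1)
ens = rng.normal(size=(50, 5))
H = np.hstack([np.eye(3), np.zeros((3, 2))])
ens = enkf_update(ens, obs=np.array([0.1, 0.2, 0.3]), obs_err_var=0.01 ** 2,
                  H=H, n_param=2, closed_eye=True)
```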
Mapping local deformation behavior in single cell metal lattice structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlton, Holly D.; Lind, Jonathan; Messner, Mark C.
The deformation behavior of metal lattice structures is extremely complex and challenging to predict, especially since strain is not uniformly distributed throughout the structure. Understanding and predicting the failure behavior for these types of light-weighting structures is of great interest due to the excellent scaling of stiffness- and strength-to-weight ratios they display. Therefore, there is a need to perform simplified experiments that probe unit cell mechanisms. This study reports on high resolution mapping of the heterogeneous structural response of single unit cells to the macro-scale loading condition. Two types of structures, known to show different stress-strain responses, were evaluated using synchrotron radiation micro-tomography while performing in-situ uniaxial compression tests to capture the local micro-strain deformation. These structures included the octet-truss, a stretch-dominated lattice, and the rhombic-dodecahedron, a bend-dominated lattice. The tomographic analysis showed that the stretch- and bend-dominated lattices exhibit different failure mechanisms and that the defects built into the structure cause a heterogeneous localized deformation response. Also shown here is a change in failure mode for stretch-dominated lattices, where there appears to be a transition from buckling to plastic yielding for samples with a relative density between 10 and 20%. In conclusion, the experimental results were also used to inform computational studies designed to predict the mesoscale deformation behavior of lattice structures. Here an equivalent continuum model and a finite element model were used to predict both local strain fields and mechanical behavior of lattices with different topologies.
Just-in-Time Correntropy Soft Sensor with Noisy Data for Industrial Silicon Content Prediction.
Chen, Kun; Liang, Yu; Gao, Zengliang; Liu, Yi
2017-08-08
Development of accurate data-driven quality prediction models for industrial blast furnaces encounters several challenges, mainly because the collected data are nonlinear, non-Gaussian, and unevenly distributed. In this work, a just-in-time correntropy-based local soft sensing approach is presented to predict the silicon content. Without cumbersome efforts for outlier detection, a correntropy support vector regression (CSVR) modeling framework is proposed to deal with the soft sensor development and outlier detection simultaneously. Moreover, with a continuously updated database and a clustering strategy, a just-in-time CSVR (JCSVR) method is developed. Consequently, more accurate prediction and efficient implementations of JCSVR can be achieved. The better prediction performance of JCSVR is validated on online silicon content prediction, compared with traditional soft sensors.
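A minimal just-in-time local soft sensor can be sketched as below: for each query, the most similar historical samples are retrieved from the continuously updated database and a fresh local regressor is fit on them. Plain epsilon-SVR from scikit-learn stands in for the paper's correntropy-based CSVR loss, which is not available off the shelf, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import NearestNeighbors

def jit_local_predict(X_db, y_db, x_query, n_local=60):
    """Just-in-time local soft sensor: fit a fresh local model on the samples
    most similar to the query, then predict for the query point."""
    nn = NearestNeighbors(n_neighbors=n_local).fit(X_db)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    local_model = SVR(kernel="rbf", C=10.0, epsilon=0.05)   # stand-in for CSVR
    local_model.fit(X_db[idx[0]], y_db[idx[0]])
    return local_model.predict(x_query.reshape(1, -1))[0]

# Toy blast-furnace-like data: process variables -> silicon content (synthetic)
rng = np.random.default_rng(2)
X_db = rng.normal(size=(500, 6))
y_db = 0.5 + 0.1 * X_db[:, 0] - 0.05 * X_db[:, 3] + 0.02 * rng.normal(size=500)
print(jit_local_predict(X_db, y_db, X_db[0]))
```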
Le Port, Agnès; Cottrell, Gilles; Chandre, Fabrice; Cot, Michel; Massougbodji, Achille; Garcia, André
2013-07-01
According to several studies, infants whose mothers had a malaria-infected placenta (MIP) at delivery are at increased risk of a first malaria infection. Immune tolerance caused by intrauterine contact with the parasite could explain this phenomenon, but it is also known that infants who are highly exposed to Anopheles mosquitoes infected with Plasmodium are at greater risk of contracting malaria. Consequently, local malaria transmission must be taken into account to demonstrate the immune tolerance hypothesis. From data collected between 2007 and 2010 on 545 infants followed from birth to age 18 months in southern Benin, we compared estimates of the effect of MIP on time to first malaria infection obtained through different Cox models. In these models, MIP was adjusted for either 1) "village-like" time-independent exposure variables or 2) spatiotemporal exposure prediction derived from local climatic, environmental, and behavioral factors. Only the use of exposure prediction improved the model's goodness of fit (Bayesian Information Criterion) and led to clear conclusions regarding the effect of placental infection, whereas the models using the village-like variables were less successful than the univariate model. This demonstrated clearly the benefit of adequately taking transmission into account in cohort studies of malaria.
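A schematic of the comparison strategy, assuming the lifelines library and a hypothetical infant-level table in which the spatiotemporal exposure prediction is reduced to a single summary covariate (the study's actual exposure term is richer); the Bayesian Information Criterion is computed here from the partial likelihood to compare adjustment strategies.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical infant-level table; columns are placeholders for the cohort data.
rng = np.random.default_rng(3)
n = 545
df = pd.DataFrame({
    "time_to_first_infection": rng.exponential(300, n).clip(1, 540),  # days of follow-up
    "infected": rng.integers(0, 2, n),                                # event indicator
    "placenta_infected": rng.integers(0, 2, n),                       # MIP at delivery
    "exposure_prediction": rng.normal(0, 1, n),   # spatiotemporal Anopheles exposure score
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_first_infection", event_col="infected")

# BIC-style criterion from the Cox partial likelihood, used to compare adjustment models
k = len(cph.params_)
bic = k * np.log(df["infected"].sum()) - 2 * cph.log_likelihood_
print(cph.params_, bic)
```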
Ion Electrodiffusion Governs Silk Electrogelation.
Kojic, Nikola; Panzer, Matthew J; Leisk, Gary G; Raja, Waseem K; Kojic, Milos; Kaplan, David L
2012-07-14
Silk electrogelation involves the transition of an aqueous silk fibroin solution to a gel state (E-gel) in the presence of an electric current. The process is based on local pH changes as a result of water electrolysis - generating H(+) and OH(-) ions at the (+) and (-) electrodes, respectively. Silk fibroin has a pI=4.2 and when local pH
Deng, Hailong; Li, Wei; Sakai, Tatsuo; Sun, Zhenduo
2015-12-02
The unexpected failures of structural materials in the very high cycle fatigue (VHCF) regime have been a critical issue in modern engineering design. In this study, the VHCF property of a Cr-Ni-W gear steel was experimentally investigated under axial loading with a stress ratio of R = -1, and a life prediction model associated with crack initiation and growth behaviors was proposed. Results show that the Cr-Ni-W gear steel exhibits a continuously decreasing S-N curve without a traditional fatigue limit, and the fatigue strength corresponding to 10⁸ cycles is around 485 MPa. Inclusion-fine granular area (FGA)-fisheye induced failure becomes the main failure mechanism in the VHCF regime, and the local stress around the inclusion plays a key role. Finite element analysis of a representative volume element shows that the local stress tends to increase with increasing elastic modulus difference between inclusion and matrix. The predicted crack initiation life occupies the majority of the total fatigue life, while the predicted crack growth life accounts for only a tiny fraction. In view of the good agreement between the predicted and experimental results, the proposed VHCF life prediction model involving crack initiation and growth can be considered acceptable for inclusion-FGA-fisheye induced failure.
Landscape capability models as a tool to predict fine-scale forest bird occupancy and abundance
Loman, Zachary G.; DeLuca, William; Harrison, Daniel J.; Loftin, Cynthia S.; Rolek, Brian W.; Wood, Petra B.
2018-01-01
Context: Species-specific models of landscape capability (LC) can inform landscape conservation design. Landscape capability is "the ability of the landscape to provide the environment […] and the local resources […] needed for survival and reproduction […] in sufficient quantity, quality and accessibility to meet the life history requirements of individuals and local populations." Landscape capability incorporates species' life histories, ecologies, and distributions to model habitat for current and future landscapes and climates as a proactive strategy for conservation planning. Objectives: We tested the ability of a set of LC models to explain variation in point occupancy and abundance for seven bird species representative of spruce-fir, mixed conifer-hardwood, and riparian and wooded wetland macrohabitats. Methods: We compiled point count data sets used for biological inventory, species monitoring, and field studies across the northeastern United States to create an independent validation data set. Our validation explicitly accounted for underestimation in validation data using joint distance and time removal sampling. Results: Blackpoll warbler (Setophaga striata), wood thrush (Hylocichla mustelina), and Louisiana (Parkesia motacilla) and northern waterthrush (P. noveboracensis) models were validated as predicting variation in abundance, although this varied from not biologically meaningful (1%) to strongly meaningful (59%). We verified all seven species models [including ovenbird (Seiurus aurocapilla), blackburnian (Setophaga fusca) and cerulean warbler (Setophaga cerulea)], as all were positively related to occupancy data. Conclusions: LC models represent a useful tool for conservation planning owing to their predictive ability over a regional extent. As improved remote-sensed data become available, LC layers are updated, which will improve predictions.
Satellite remote sensing data can be used to model marine microbial metabolite turnover
Larsen, Peter E; Scott, Nicole; Post, Anton F; Field, Dawn; Knight, Rob; Hamada, Yuki; Gilbert, Jack A
2015-01-01
Sampling ecosystems, even at a local scale, at the temporal and spatial resolution necessary to capture natural variability in microbial communities is prohibitively expensive. We extrapolated marine surface microbial community structure and metabolic potential from 72 16S rRNA amplicon and 8 metagenomic observations using remotely sensed environmental parameters to create a system-scale model of marine microbial metabolism for 5904 grid cells (49 km²) in the Western English Channel, across 3 years of weekly averages. Thirteen environmental variables predicted the relative abundance of 24 bacterial Orders and 1715 unique enzyme-encoding genes that encode turnover of 2893 metabolites. The genes' predicted relative abundance was highly correlated (Pearson Correlation 0.72, P-value <10−6) with their observed relative abundance in sequenced metagenomes. Predictions of the relative turnover (synthesis or consumption) of CO2 were significantly correlated with observed surface CO2 fugacity. The spatial and temporal variation in the predicted relative abundances of genes coding for cyanase, carbon monoxide and malate dehydrogenase were investigated along with the predicted inter-annual variation in relative consumption or production of ∼3000 metabolites forming six significant temporal clusters. These spatiotemporal distributions could possibly be explained by the co-occurrence of anaerobic and aerobic metabolisms associated with localized plankton blooms or sediment resuspension, which facilitate the presence of anaerobic micro-niches. This predictive model provides a general framework for focusing future sampling and experimental design to relate biogeochemical turnover to microbial ecology. PMID:25072414
IASI Radiance Data Assimilation in Local Ensemble Transform Kalman Filter
NASA Astrophysics Data System (ADS)
Cho, K.; Hyoung-Wook, C.; Jo, Y.
2016-12-01
Korea Institute of Atmospheric Prediction Systems (KIAPS) is developing an NWP model with data assimilation systems. The Local Ensemble Transform Kalman Filter (LETKF) system, one of the data assimilation systems, has been developed for the KIAPS Integrated Model (KIM) based on a cubed-sphere grid and has successfully assimilated real data. The LETKF data assimilation system has been extended to 4D-LETKF, which considers time-evolving error covariance within the assimilation window, and to IASI radiance data assimilation using KPOP (KIAPS package for observation processing) with RTTOV (Radiative Transfer for TOVS). The LETKF system has been running semi-operational predictions including conventional (sonde, aircraft) observations and AMSU-A (Advanced Microwave Sounding Unit-A) radiance data since April. Recently, in July, the semi-operational prediction system added radiance observations including GPS-RO, AMV, and IASI (Infrared Atmospheric Sounding Interferometer) data. A set of simulations of KIM at ne30np4 resolution with 50 vertical levels (model top at 0.3 hPa) was carried out for short-range (10-day) forecasts within the semi-operational LETKF prediction system using 50 ensemble members. To isolate the IASI impact, our experiments used only conventional and IASI radiance data in the same semi-operational prediction setup. We carried out sensitivity tests for the IASI thinning method (3D and 4D). The number of IASI observations was increased by temporal (4D) thinning, and we expect the IASI radiance data to improve the forecast skill of the model.
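For readers unfamiliar with the LETKF machinery, the following NumPy sketch implements the standard ensemble transform Kalman filter analysis at a single grid point (the localization that gives the "L" in LETKF is omitted for brevity); dimensions and error statistics are toy values, not the KIM configuration.

```python
import numpy as np

def etkf_analysis(Xb, yo, H, r_var):
    """Ensemble transform Kalman filter analysis step (the local analysis of an
    LETKF at one grid point, without localization).
    Xb: (n_state, n_ens) background ensemble; yo: observations; H: obs operator."""
    n, m = Xb.shape
    xb_mean = Xb.mean(axis=1, keepdims=True)
    Xp = Xb - xb_mean                          # background perturbations
    Yb = H @ Xb
    yb_mean = Yb.mean(axis=1, keepdims=True)
    Yp = Yb - yb_mean                          # perturbations in observation space
    C = Yp.T / r_var                           # Yb'^T R^-1 for a diagonal R
    Pa = np.linalg.inv((m - 1) * np.eye(m) + C @ Yp)
    w_mean = Pa @ C @ (yo.reshape(-1, 1) - yb_mean)
    # symmetric square root of (m-1)*Pa transforms the perturbations
    evals, evecs = np.linalg.eigh((m - 1) * Pa)
    W = evecs @ np.diag(np.sqrt(np.maximum(evals, 0.0))) @ evecs.T
    return xb_mean + Xp @ (w_mean + W)         # analysis ensemble

# Toy usage: 10-variable state, 20 members, observe the first 4 variables
rng = np.random.default_rng(4)
Xb = rng.normal(size=(10, 20))
Xa = etkf_analysis(Xb, yo=rng.normal(size=4), H=np.eye(4, 10), r_var=0.5 ** 2)
```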
Modelling road accidents: An approach using structural time series
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia from 2001 to 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitting model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. To check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model for road accidents is a local level model with a seasonal component.
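The selected specification, a local level plus seasonal structural model, can be reproduced in outline with statsmodels; the series below is synthetic stand-in data, not the Malaysian accident counts.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for monthly road-accident counts (11 years used for fitting).
rng = np.random.default_rng(5)
idx = pd.date_range("2001-01", periods=132, freq="MS")
y = pd.Series(3000 + 50 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 40, 132),
              index=idx)

# Local level with a stochastic seasonal component, as selected in the paper
model = sm.tsa.UnobservedComponents(y, level="local level", seasonal=12)
res = model.fit(disp=False)
print(res.aic)                          # criterion used for model selection
forecast_2012 = res.forecast(steps=12)  # validate against the held-out year
```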
A statistical model for water quality predictions from a river discharge using coastal observations
NASA Astrophysics Data System (ADS)
Kim, S.; Terrill, E. J.
2007-12-01
Understanding and predicting coastal ocean water quality has benefits for reducing human health risks, protecting the environment, and improving local economies which depend on clean beaches. Continuous observations of coastal physical oceanography increase the understanding of the processes which control the fate and transport of a riverine plume which potentially contains high levels of contaminants from the upstream watershed. A data-driven model of the fate and transport of river plume water from the Tijuana River has been developed using surface current observations provided by a network of HF radar operated as part of a local coastal observatory that has been in place since 2002. The model outputs are compared with water quality sampling of shoreline indicator bacteria, and the skill of an alarm for low water quality is evaluated using the receiver operating characteristic (ROC) curve. In addition, statistical analysis of beach closures in comparison with environmental variables is also discussed.
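Evaluating a low-water-quality alarm with an ROC curve can be sketched as follows with scikit-learn; the exceedance labels and plume-impact scores below are synthetic placeholders for the shoreline bacteria samples and the HF-radar-driven model output.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical arrays: model-derived plume-impact score vs. observed exceedance
# of the shoreline indicator-bacteria standard (1 = low water quality observed).
rng = np.random.default_rng(6)
exceedance = rng.integers(0, 2, 300)
plume_score = exceedance * 0.8 + rng.normal(0, 0.5, 300)

fpr, tpr, thresholds = roc_curve(exceedance, plume_score)
print(roc_auc_score(exceedance, plume_score))
# One way to pick an alarm threshold: maximize hit rate minus false-alarm rate
best_threshold = thresholds[np.argmax(tpr - fpr)]
```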
NASA Astrophysics Data System (ADS)
Werth, D. W.; O'Steen, L.; Chen, K.; Altinakar, M. S.; Garrett, A.; Aleman, S.; Ramalingam, V.
2010-12-01
Global climate change has the potential for profound impacts on society, and poses significant challenges to government and industry in the areas of energy security and sustainability. Given that the ability to exploit energy resources often depends on the climate, the possibility of climate change means we cannot simply assume that the untapped potential of today will still exist in the future. Predictions of future climate are generally based on global climate models (GCMs) which, due to computational limitations, are run at spatial resolutions of hundreds of kilometers. While the results from these models can predict climatic trends averaged over large spatial and temporal scales, their ability to describe the effects of atmospheric phenomena that affect weather on regional to local scales is inadequate. We propose the use of several optimized statistical downscaling techniques that can infer climate change at the local scale from coarse resolution GCM predictions, and apply the results to assess future sustainability for two sources of energy production dependent on adequate water resources: nuclear power (through the dissipation of waste heat from cooling towers, ponds, etc.) and hydroelectric power. All methods will be trained with 20th century data, and applied to data from the years 2040-2049 to get the local-scale changes. Models of cooling tower operation and hydropower potential will then use the downscaled data to predict the possible changes in energy production, and the implications of climate change on plant siting, design, and contribution to the future energy grid can then be examined.
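One simple statistical-downscaling technique that such a study might optimize is empirical quantile mapping, sketched below with NumPy; the GCM and observed series are synthetic, and the actual work evaluates several optimized methods rather than this single transfer function.

```python
import numpy as np

def quantile_map(gcm_hist, obs_hist, gcm_future):
    """Empirical quantile mapping: map 20th-century GCM values onto the observed
    local distribution and apply the same transfer function to future GCM output."""
    quantiles = np.linspace(0.01, 0.99, 99)
    gcm_q = np.quantile(gcm_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    # interpolate each future GCM value through the historical transfer function
    return np.interp(gcm_future, gcm_q, obs_q)

# Toy example: coarse-grid temperatures downscaled to a warmer, more variable site
rng = np.random.default_rng(7)
local_2040s = quantile_map(gcm_hist=rng.normal(20, 2, 7300),
                           obs_hist=rng.normal(22, 3, 7300),
                           gcm_future=rng.normal(21, 2, 3650))
```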
Beukinga, Roelof J; Hulshoff, Jan B; van Dijk, Lisanne V; Muijs, Christina T; Burgerhof, Johannes G M; Kats-Ugurlu, Gursah; Slart, Riemer H J A; Slump, Cornelis H; Mul, Véronique E M; Plukker, John Th M
2017-05-01
Adequate prediction of tumor response to neoadjuvant chemoradiotherapy (nCRT) in esophageal cancer (EC) patients is important in a more personalized treatment. The current best clinical method to predict pathologic complete response is SUVmax in 18F-FDG PET/CT imaging. To improve the prediction of response, we constructed a model to predict complete response to nCRT in EC based on pretreatment clinical parameters and 18F-FDG PET/CT-derived textural features. Methods: From a prospectively maintained single-institution database, we reviewed 97 consecutive patients with locally advanced EC and a pretreatment 18F-FDG PET/CT scan between 2009 and 2015. All patients were treated with nCRT (carboplatin/paclitaxel/41.4 Gy) followed by esophagectomy. We analyzed clinical, geometric, and pretreatment textural features extracted from both 18F-FDG PET and CT. The current most accurate prediction model with SUVmax as a predictor variable was compared with 6 different response prediction models constructed using least absolute shrinkage and selection operator regularized logistic regression. Internal validation was performed to estimate the model's performances. Pathologic response was defined as complete versus incomplete response (Mandard tumor regression grade system 1 vs. 2-5). Results: Pathologic examination revealed 19 (19.6%) complete and 78 (80.4%) incomplete responders. Least absolute shrinkage and selection operator regularization selected the clinical parameters: histologic type and clinical T stage, the 18F-FDG PET-derived textural feature long run low gray level emphasis, and the CT-derived textural feature run percentage. Introducing these variables to a logistic regression analysis showed areas under the receiver-operating-characteristic curve (AUCs) of 0.78 compared with 0.58 in the SUVmax model. The discrimination slopes were 0.17 compared with 0.01, respectively. After internal validation, the AUCs decreased to 0.74 and 0.54, respectively. Conclusion: The predictive values of the constructed models were superior to the standard method (SUVmax). These results can be considered as an initial step in predicting tumor response to nCRT in locally advanced EC. Further research in refining the predictive value of these models is needed to justify omission of surgery. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
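The modeling step, LASSO-regularized logistic regression with internal validation, can be outlined with scikit-learn as below; the feature matrix and response labels are synthetic stand-ins for the clinical and textural variables, and the paper's bootstrap-based validation scheme is simplified here to plain cross-validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 97 patients x a handful of candidate predictors.
rng = np.random.default_rng(8)
X = rng.normal(size=(97, 20))
y = (rng.random(97) < 0.2).astype(int)   # roughly 20% pathologic complete responders

# L1 (LASSO-type) regularized logistic regression gives a sparse predictor set;
# cross-validated AUC approximates the internal-validation estimate.
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(auc)
```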
Forecasting Influenza Outbreaks in Boroughs and Neighborhoods of New York City.
Yang, Wan; Olson, Donald R; Shaman, Jeffrey
2016-11-01
The ideal spatial scale, or granularity, at which infectious disease incidence should be monitored and forecast has been little explored. By identifying the optimal granularity for a given disease and host population, and matching surveillance and prediction efforts to this scale, response to emergent and recurrent outbreaks can be improved. Here we explore how granularity and representation of spatial structure affect influenza forecast accuracy within New York City. We develop network models at the borough and neighborhood levels, and use them in conjunction with surveillance data and a data assimilation method to forecast influenza activity. These forecasts are compared to an alternate system that predicts influenza for each borough or neighborhood in isolation. At the borough scale, influenza epidemics are highly synchronous despite substantial differences in intensity, and inclusion of network connectivity among boroughs generally improves forecast accuracy. At the neighborhood scale, we observe much greater spatial heterogeneity among influenza outbreaks including substantial differences in local outbreak timing and structure; however, inclusion of the network model structure generally degrades forecast accuracy. One notable exception is that local outbreak onset, particularly when signal is modest, is better predicted with the network model. These findings suggest that observation and forecast at sub-municipal scales within New York City provides richer, more discriminant information on influenza incidence, particularly at the neighborhood scale where greater heterogeneity exists, and that the spatial spread of influenza among localities can be forecast.
Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output
DOE Office of Scientific and Technical Information (OSTI.GOV)
Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin
The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
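As an illustration of the filter-based estimation idea, the sketch below fits a scalar linear-Gaussian state-space model to a synthetic irradiance-like signal with the pykalman library (an off-the-shelf stand-in for the authors' finite-dimensional filter) and produces a recursive one-step-ahead prediction; the signal and the model structure are assumptions made for the example.

```python
import numpy as np
from pykalman import KalmanFilter

# Synthetic high-resolution irradiance-like signal (W/m^2), a stand-in for measurements.
rng = np.random.default_rng(9)
true_level = 800 + np.cumsum(rng.normal(0, 2, 600))
obs = true_level + rng.normal(0, 10, 600)

# Scalar random-walk state-space model; EM estimates the noise covariances,
# then Kalman filtering gives the recursive, online one-step-ahead prediction.
kf = KalmanFilter(transition_matrices=[[1.0]], observation_matrices=[[1.0]],
                  initial_state_mean=[obs[0]])
kf = kf.em(obs, n_iter=10, em_vars=["transition_covariance", "observation_covariance"])
state_means, state_covs = kf.filter(obs)
next_pred = kf.transition_matrices @ state_means[-1]   # one-step-ahead forecast
print(next_pred)
```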
Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian
2016-01-01
In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430
Bayesian Integration of Information in Hippocampal Place Cells
Madl, Tamas; Franklin, Stan; Chen, Ke; Montaldi, Daniela; Trappl, Robert
2014-01-01
Accurate spatial localization requires a mechanism that corrects for errors, which might arise from inaccurate sensory information or neuronal noise. In this paper, we propose that Hippocampal place cells might implement such an error correction mechanism by integrating different sources of information in an approximately Bayes-optimal fashion. We compare the predictions of our model with physiological data from rats. Our results suggest that useful predictions regarding the firing fields of place cells can be made based on a single underlying principle, Bayesian cue integration, and that such predictions are possible using a remarkably small number of model parameters. PMID:24603429
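The underlying principle, precision-weighted combination of independent Gaussian cues, reduces to a few lines; the numbers below are arbitrary example cues, not fitted physiological values.

```python
import numpy as np

def fuse_gaussian_cues(means, variances):
    """Precision-weighted (Bayes-optimal) fusion of independent Gaussian position
    cues, the core operation proposed for place-cell error correction."""
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()
    fused_mean = fused_var * (precisions * np.asarray(means, dtype=float)).sum()
    return fused_mean, fused_var

# e.g. a noisy path-integration estimate combined with a visual landmark cue (cm)
print(fuse_gaussian_cues(means=[52.0, 47.0], variances=[9.0, 4.0]))
```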
NOAA's weather forecasts go hyper-local with next-generation weather model
New model will help forecasters predict a storm's path, timing and intensity better than ever. September 30, 2014.
2014-10-27
a phase-averaged spectral wind-wave generation and transformation model and its interface in the Surface-water Modeling System (SMS). Ambrose...applications of the Boussinesq (BOUSS-2D) wave model that provides more rigorous calculations for design and performance optimization of integrated...navigation systems. Together these wave models provide reliable predictions on regional and local spatial domains and cost-effective engineering solutions
A next generation air quality modeling system is being developed at the U.S. EPA to enable seamless modeling of air quality from global to regional to (eventually) local scales. State of the science chemistry and aerosol modules from the Community Multiscale Air Quality (CMAQ) mo...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, J; Pollom, E; Durkee, B
2015-06-15
Purpose: To predict response to radiation treatment using computational FDG-PET and CT images in locally advanced head and neck cancer (HNC). Methods: 68 patients with Stage III-IVB HNC treated with chemoradiation were included in this retrospective study. For each patient, we analyzed primary tumor and lymph nodes on PET and CT scans acquired both prior to and during radiation treatment, which led to 8 combinations of image datasets. From each image set, we extracted high-throughput, radiomic features of the following types: statistical, morphological, textural, histogram, and wavelet, resulting in a total of 437 features. We then performed unsupervised redundancy removal and stability tests on these features. To avoid over-fitting, we trained a logistic regression model with simultaneous feature selection based on least absolute shrinkage and selection operator (LASSO). To objectively evaluate the prediction ability, we performed 5-fold cross validation (CV) with 50 random repeats of stratified bootstrapping. Feature selection and model training were conducted solely on the training set and independently validated on the holdout test set. A receiver operating characteristic (ROC) curve of the pooled results was generated, and the area under the ROC curve (AUC) was calculated as the figure of merit. Results: For predicting local-regional recurrence, our model built on pre-treatment PET of lymph nodes achieved the best performance (AUC=0.762) on 5-fold CV, which compared favorably with node volume and SUVmax (AUC=0.704 and 0.449, p<0.001). Wavelet coefficients turned out to be the most predictive features. Prediction of distant recurrence showed a similar trend, in which pre-treatment PET features of lymph nodes had the highest AUC of 0.705. Conclusion: The radiomics approach identified novel imaging features that are predictive of radiation treatment response. If prospectively validated in larger cohorts, they could aid in risk-adaptive treatment of HNC.
Effects of different dispersal patterns on the presence-absence of multiple species
NASA Astrophysics Data System (ADS)
Mohd, Mohd Hafiz; Murray, Rua; Plank, Michael J.; Godsoe, William
2018-03-01
Predicting which species will be present (or absent) across a geographical region remains one of the key problems in ecology. Numerous studies have suggested several ecological factors that can determine species presence-absence: environmental factors (i.e. abiotic environments), interactions among species (i.e. biotic interactions) and dispersal process. While various ecological factors have been considered, less attention has been given to the problem of understanding how different dispersal patterns, in interaction with other factors, shape community assembly in the presence of priority effects (i.e. where relative initial abundances determine the long-term presence-absence of each species). By employing both local and non-local dispersal models, we investigate the consequences of different dispersal patterns on the occurrence of priority effects and coexistence in multi-species communities. In the case of non-local, but short-range dispersal, we observe agreement with the predictions of local models for weak and medium dispersal strength, but disagreement for relatively strong dispersal levels. Our analysis shows the existence of a threshold value in dispersal strength (i.e. saddle-node bifurcation) above which priority effects disappear. These results also reveal a co-dimension 2 point, corresponding to a degenerate transcritical bifurcation: at this point, the transcritical bifurcation changes from subcritical to supercritical with corresponding creation of a saddle-node bifurcation curve. We observe further contrasting effects of non-local dispersal as dispersal distance changes: while very long-range dispersal can lead to species extinctions, intermediate-range dispersal can permit more outcomes with multi-species coexistence than short-range dispersal (or purely local dispersal). Overall, our results show that priority effects are more pronounced in the non-local dispersal models than in the local dispersal models. Taken together, our findings highlight the profound delicacy in the mediation of priority effects by dispersal processes: 'big steps' can have more influence than many 'small steps'.
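A minimal non-local dispersal model of the kind discussed can be written as an integro-difference system: local Lotka-Volterra competition followed by redistribution of a fraction of each species through a dispersal kernel. The sketch below uses a Gaussian kernel on a periodic 1-D domain and arbitrary competition coefficients chosen to be bistable, so that initial abundances (priority effects) matter; it is illustrative, not the authors' parameterization.

```python
import numpy as np

def dispersal_kernel(x, sigma):
    """Gaussian dispersal kernel on a periodic 1-D domain; sigma sets the typical
    dispersal distance (a simple stand-in for the paper's non-local kernels)."""
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def step(u, v, kernel, d, r=1.0, a_uv=1.2, a_vu=1.3):
    """One integro-difference step: Lotka-Volterra competition locally, then a
    fraction d of each species redistributed by circular convolution."""
    u_g = np.clip(u + r * u * (1 - u - a_uv * v), 0.0, None)
    v_g = np.clip(v + r * v * (1 - v - a_vu * u), 0.0, None)
    disperse = lambda w: (1 - d) * w + d * np.real(
        np.fft.ifft(np.fft.fft(w) * np.fft.fft(np.fft.ifftshift(kernel))))
    return disperse(u_g), disperse(v_g)

# Periodic domain, kernel, and an initial condition giving species u a local head start
x = np.linspace(-50, 50, 512, endpoint=False)
kern = dispersal_kernel(x, sigma=2.0)
u = np.where(np.abs(x) < 10, 0.8, 0.05)
v = np.full_like(x, 0.4)
for _ in range(200):
    u, v = step(u, v, kern, d=0.3)   # vary d and sigma to probe priority effects
```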
Predicting silicon pore optics
NASA Astrophysics Data System (ADS)
Vacanti, Giuseppe; Barriére, Nicolas; Bavdaz, Marcos; Chatbi, Abdelhakim; Collon, Maximilien; Dekker, Danielle; Girou, David; Günther, Ramses; van der Hoeven, Roy; Landgraf, Boris; Sforzini, Jessica; Vervest, Mark; Wille, Eric
2017-09-01
Continuing improvement of Silicon Pore Optics (SPO) calls for regular extension and validation of the tools used to model and predict their X-ray performance. In this paper we present an updated geometrical model for the SPO optics and describe how we make use of the surface metrology collected during each of the SPO manufacturing runs. The new geometrical model affords the user a finer degree of control over the mechanical details of the SPO stacks, while a standard interface has been developed to make use of any type of metrology that can return changes in the local surface normal of the reflecting surfaces. Comparisons between the predicted and actual performance of sample optics will be shown and discussed.
NASA Astrophysics Data System (ADS)
Falugi, P.; Olaru, S.; Dumur, D.
2010-08-01
This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For practical implementation, the construction of suitable (explicit) descriptions of the control law is described through concrete algorithms.
Predictions of spray combustion interactions
NASA Technical Reports Server (NTRS)
Shuen, J. S.; Solomon, A. S. P.; Faeth, G. M.
1984-01-01
Mean and fluctuating phase velocities; mean particle mass flux; particle size; and mean gas-phase Reynolds stress, composition and temperature were measured in stationary, turbulent, axisymmetric flows which conform to the boundary-layer approximations while having well-defined initial and boundary conditions: dilute particle-laden jets, nonevaporating sprays, and evaporating sprays injected into a still air environment. Three models of the processes, typical of current practice, were evaluated. The local homogeneous flow and deterministic separated flow models did not provide very satisfactory predictions over the present data base. In contrast, the stochastic separated flow model generally provided good predictions and appears to be an attractive approach for treating nonlinear interphase transport processes in turbulent flows containing particles (drops).
Chen, Zhangxing; Huang, Tianyu; Shao, Yimin; ...
2018-03-15
Predicting the mechanical behavior of the chopped carbon fiber Sheet Molding Compound (SMC) due to spatial variations in local material properties is critical for the structural performance analysis but is computationally challenging. Such spatial variations are induced by the material flow in the compression molding process. In this work, a new multiscale SMC modeling framework and the associated computational techniques are developed to provide accurate and efficient predictions of SMC mechanical performance. The proposed multiscale modeling framework contains three modules. First, a stochastic algorithm for 3D chip-packing reconstruction is developed to efficiently generate the SMC mesoscale Representative Volume Element (RVE) model for Finite Element Analysis (FEA). A new fiber orientation tensor recovery function is embedded in the reconstruction algorithm to match reconstructions with the target characteristics of fiber orientation distribution. Second, a metamodeling module is established to improve the computational efficiency by creating the surrogates of mesoscale analyses. Third, the macroscale behaviors are predicted by an efficient multiscale model, in which the spatially varying material properties are obtained based on the local fiber orientation tensors. Our approach is further validated through experiments at both meso- and macro-scales, such as tensile tests assisted by Digital Image Correlation (DIC) and mesostructure imaging.
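One small but central piece of such a framework, the second-order fiber orientation tensor that the reconstruction matches and the multiscale model maps to local properties, can be computed as below; the chip orientation angles are randomly generated for illustration.

```python
import numpy as np

def orientation_tensor(theta, phi):
    """Second-order fiber orientation tensor A_ij = <p_i p_j> from fiber direction
    angles in spherical coordinates (theta from the z-axis, phi in the plane)."""
    p = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)], axis=1)        # unit fiber direction vectors
    return np.einsum("ni,nj->ij", p, p) / len(p)

# Hypothetical chip orientations from one reconstructed RVE (mostly in-plane)
rng = np.random.default_rng(10)
theta = np.full(1000, np.pi / 2) + rng.normal(0, 0.1, 1000)   # near the molding plane
phi = rng.uniform(0, 2 * np.pi, 1000)
A = orientation_tensor(theta, phi)
print(np.trace(A))   # orientation tensors have unit trace
```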
PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems
Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota
2016-01-01
PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs the computation on the server side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems. PMID:27174940
Parametric models to compute tryptophan fluorescence wavelengths from classical protein simulations.
Lopez, Alvaro J; Martínez, Leandro
2018-02-26
Fluorescence spectroscopy is an important method to study protein conformational dynamics and solvation structures. Tryptophan (Trp) residues are the most important and practical intrinsic probes for protein fluorescence due to the variability of their fluorescence wavelengths: Trp residues emit at wavelengths ranging from 308 to 360 nm depending on the local molecular environment. Fluorescence involves electronic transitions; thus its computational modeling is a challenging task. We show that it is possible to predict the wavelength of emission of a Trp residue from classical molecular dynamics simulations by computing the solvent-accessible surface area or the electrostatic interaction between the indole group and the rest of the system. Linear parametric models are obtained to predict the maximum emission wavelengths with standard errors of the order of 5 nm. In a set of 19 proteins with emission wavelengths ranging from 308 to 352 nm, the best model predicts the maximum wavelength of emission with a standard error of 4.89 nm and a quadratic Pearson correlation coefficient of 0.81. These models can be used for the interpretation of fluorescence spectra of proteins with multiple Trp residues, or for which local Trp environmental variability exists and can be probed by classical molecular dynamics simulations. © 2018 Wiley Periodicals, Inc.
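A linear parametric model of this type is straightforward to fit once the descriptors have been extracted from the simulations; the sketch below uses invented SASA and electrostatic-energy values purely to show the form of the regression, not the paper's calibration data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training table: per-Trp solvent-accessible surface area (SASA, nm^2)
# and indole-environment electrostatic interaction energy (kJ/mol), averaged over
# an MD trajectory, against the measured emission maximum (nm). Values are invented.
sasa = np.array([0.10, 0.35, 0.62, 0.80, 1.05, 1.30])
elec = np.array([-40.0, -85.0, -120.0, -160.0, -205.0, -240.0])
lambda_max = np.array([310.0, 322.0, 331.0, 338.0, 347.0, 352.0])

X = np.column_stack([sasa, elec])
model = LinearRegression().fit(X, lambda_max)
rmse = np.sqrt(np.mean((model.predict(X) - lambda_max) ** 2))   # paper reports ~5 nm errors
print(model.coef_, model.intercept_, rmse)
```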
Green, Timothy W.; Slone, Daniel H.; Swain, Eric D.; Cherkiss, Michael S.; Lohmann, Melinda; Mazzotti, Frank J.; Rice, Kenneth G.
2014-01-01
The distribution and abundance of the American crocodile (Crocodylus acutus) in the Florida Everglades is dependent on the timing, amount, and location of freshwater flow. One of the goals of the Comprehensive Everglades Restoration Plan (CERP) is to restore historic freshwater flows to American crocodile habitat throughout the Everglades. To predict the impacts on the crocodile population from planned restoration activities, we created a stage-based spatially explicit crocodile population model that incorporated regional hydrology models and American crocodile research and monitoring data. Growth and survival were influenced by salinity, water depth, and density-dependent interactions. A stage-structured spatial model was used with discrete spatial convolution to direct crocodiles toward attractive sources where conditions were favorable. The model predicted that CERP would have both positive and negative impacts on American crocodile growth, survival, and distribution. Overall, crocodile populations across south Florida were predicted to decrease approximately 3 % with the implementation of CERP compared to future conditions without restoration, but local increases up to 30 % occurred in the Joe Bay area near Taylor Slough, and local decreases up to 30 % occurred in the vicinity of Buttonwood Canal due to changes in salinity and freshwater flows.
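The discrete spatial convolution step can be sketched as follows with SciPy: a small kernel spreads a fraction of each cell's animals to neighboring cells, and a suitability weighting biases the movement toward attractive sources. The grids, kernel, and weighting rule are illustrative assumptions, not the calibrated Everglades model.

```python
import numpy as np
from scipy.ndimage import convolve

# Toy habitat grid: crocodile density for one stage class and a habitat-suitability
# field (a stand-in for the salinity/water-depth conditions from the hydrology models).
rng = np.random.default_rng(11)
density = rng.random((40, 40))
suitability = rng.random((40, 40))

kernel = np.array([[0.05, 0.10, 0.05],
                   [0.10, 0.40, 0.10],
                   [0.05, 0.10, 0.05]])   # sums to 1: the convolution itself conserves animals
move_frac = 0.3

# Weighting the arrivals by relative local suitability biases movement toward
# attractive cells; this rough sketch only approximately conserves animals.
preference = suitability / convolve(suitability, kernel, mode="wrap")
dispersed = convolve(move_frac * density, kernel, mode="wrap") * preference
density = (1 - move_frac) * density + dispersed
```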
NASA Astrophysics Data System (ADS)
Ghomri, Amina; Mekelleche, Sidi Mohamed
2014-03-01
Global and local reactivity indices derived from density functional theory were used to elucidate the regio- and chemoselectivity of Diels-Alder reactions of masked o-benzoquinones with thiophenes acting as dienophiles. The polarity of the studied reactions is evaluated in terms of the difference of electrophilicity powers between the diene and dienophile partners. Preferential cyclisation modes of these cycloadditions are predicted using Domingo's polar model based on the local electrophilicity index, ωk, of the electrophile and the local nucleophilicity index, Nuk, of the nucleophile. The theoretical calculations, carried out at the B3LYP/6-311G(d,p) level of theory, are in good agreement with experimental findings.
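For context, one standard conceptual-DFT formulation of the indices named above (a hedged summary rather than the exact working equations of this paper; $f_k^{+}$ and $f_k^{-}$ denote condensed Fukui functions on atom $k$, and the paper's $Nu_k$ corresponds to $N_k$ below) is

$$\omega \;=\; \frac{\mu^{2}}{2\eta}, \qquad \omega_k \;=\; \omega\, f_k^{+}, \qquad N_k \;=\; N\, f_k^{-},$$

where $\mu$ is the electronic chemical potential, $\eta$ the chemical hardness, and $N$ the global nucleophilicity index. In Domingo's polar model, the favoured cycloaddition channel pairs the atom of the electrophile with the largest $\omega_k$ to the atom of the nucleophile with the largest $N_k$, and the polarity of the reaction scales with the electrophilicity difference $\Delta\omega$ between the diene and dienophile.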
Geothermal modelling and geoneutrino flux prediction at JUNO with local heat production data
NASA Astrophysics Data System (ADS)
Xi, Y.; Wipperfurth, S. A.; McDonough, W. F.; Sramek, O.; Roskovec, B.; He, J.
2017-12-01
Geoneutrinos are mostly electron antineutrinos created from natural radioactive decays in the Earth's interior. Measurement of a geoneutrino flux at a near-surface detector can lead to a better understanding of the composition of the Earth, inform about chemical layering in the mantle, define the power driving mantle convection and plate tectonics, and reveal the energy supplying the geodynamo. JUNO (Jiangmen Underground Neutrino Observatory) is a 20 kton liquid scintillator detector currently under construction with an expected start date in 2020. Due to its enormous mass, JUNO will detect about 400 geoneutrinos per year, making it an ideal tool to study the Earth. JUNO is located on the passive continental margin of South China, where there is an extensive continental shelf. The continental crust surrounding the JUNO detector is between 26 and 32 km thick and represents the transition between the southern Eurasian continental plate and the oceanic plate of the South China Sea. We seek to predict the geoneutrino flux at JUNO prior to data taking and announcement of the particle physics measurement. To do so requires a detailed survey of the local lithosphere, as it contributes about 50% of the signal. Previous estimates of the geoneutrino signal at JUNO utilized global crustal models, with no local constraints. Regionally, the area is characterized by extensive lateral and vertical variations in lithology and dominated by Mesozoic granite intrusions, with an average heat production of 6.29 μW/m³. Consequently, at 3 times greater heat production than the global average upper crust, these granites will generate a higher than average geoneutrino flux at JUNO. To better define the U and Th concentrations in the upper crust, we collected some 300 samples within 50 km of JUNO. By combining chemical data obtained from these samples with data for crustal structures defined by local geophysical studies, we will construct a detailed 3D geothermal model of the region. Our prediction of the geoneutrino signal at JUNO will integrate data for the local (nearest 500 km to the detector) lithosphere, with a far-field model for the rest of the global lithosphere, and a model for the mantle.
Impacts of Larval Connectivity on Coral Heat Tolerance
NASA Astrophysics Data System (ADS)
Pinsky, M. L.; Kleypas, J. A.; Thompson, D. M.; Castruccio, F. S.; Curchitser, E. N.; Watson, J. R.
2016-02-01
The sensitivity of corals to elevated temperature depends on their acclimation and adaptation to the local maximum temperature regime. Through larval dispersal, however, coral populations can receive larvae from regions that are significantly warmer or colder. If these exogenous larvae carry genetic-based tolerances to colder or warmer temperatures, then the thermal sensitivity of the receiving population may be lower or higher, respectively. Using a high-resolution Regional Ocean Modeling System (ROMS) configuration for the Coral Triangle region, we quantify the potential role of connectivity in determining the thermal stress threshold (TST) of a typical broadcast spawner. The model results suggest that even with a pelagic larval dispersal period of only 10 days, many reefs receive larvae from reefs that are warmer or cooler than the local temperature, and that accounting for this connectivity improves bleaching predictions. This has important implications for conservation planning, because connectivity may allow some reefs to have an inherited heat tolerance that is higher or lower than would be predicted based on local conditions alone.
NASA Astrophysics Data System (ADS)
Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping
2017-01-01
In the rubber mixing process, the key parameter (Mooney viscosity), which is used to evaluate the quality of the product, can only be obtained offline with a 4-6 h delay. It would be very helpful for industry if this parameter could be estimated online. Various data-driven soft sensors have been used for prediction in rubber mixing. However, they often do not perform well due to the phase-dependent and nonlinear nature of the process. The purpose of this paper is to develop an efficient soft-sensing algorithm to solve this problem. Based on the proposed GMMD local sample selection criterion, the phase information is extracted during local modeling. Using a Gaussian local modeling method within a just-in-time (JIT) learning framework, the nonlinearity of the process is handled well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors on samples from a real industrial rubber mixing process.
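A minimal just-in-time local-modeling sketch in the spirit described above (not the paper's algorithm): relevant historical samples are selected for each query, here by plain Euclidean distance standing in for the GMMD criterion, and a local Gaussian process model is fitted to predict the Mooney viscosity. All arrays are synthetic placeholders.

```python
# Just-in-time (JIT) local modeling sketch: select nearby historical samples for the
# current query, fit a local Gaussian process, and estimate the Mooney viscosity online.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_hist = rng.normal(size=(200, 5))                  # historical process variables (synthetic)
y_hist = X_hist @ np.array([1.0, -0.5, 0.3, 0.0, 2.0]) + rng.normal(scale=0.1, size=200)
x_query = rng.normal(size=(1, 5))                   # current batch to be estimated online

k_neighbors = 30
d = np.linalg.norm(X_hist - x_query, axis=1)        # simple local sample selection
idx = np.argsort(d)[:k_neighbors]

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_hist[idx], y_hist[idx])                    # local Gaussian model on the selected samples
y_pred, y_std = gp.predict(x_query, return_std=True)
print(f"predicted Mooney viscosity: {y_pred[0]:.2f} +/- {y_std[0]:.2f}")
```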
Spreading Speed of Magnetopause Reconnection X-Lines Using Ground-Satellite Coordination
NASA Astrophysics Data System (ADS)
Zou, Ying; Walsh, Brian M.; Nishimura, Yukitoshi; Angelopoulos, Vassilis; Ruohoniemi, J. Michael; McWilliams, Kathryn A.; Nishitani, Nozomu
2018-01-01
Conceptual and numerical models predict that magnetic reconnection starts at a localized region and then spreads out of the reconnection plane. At the Earth's magnetopause this spreading would occur primarily in local time along the boundary. Different simulations have found the spreading to occur at different speeds such as the Alfvén speed and speed of the current carriers. We use conjugate Time History of Events and Macroscale Interactions during Substorms (THEMIS) spacecraft and Super Dual Auroral Radar Network (SuperDARN) radar measurements to observationally determine the X-line spreading speed at the magnetopause. THEMIS probes the reconnection parameters locally, and SuperDARN tracks the reconnection development remotely. Spreading speeds under different magnetopause boundary conditions are obtained and compared with model predictions. We find that while spreading under weak guide field could be explained by either the current carriers or the Alfvén waves, spreading under strong guide field is consistent only with the current carriers.
Modified linear predictive coding approach for moving target tracking by Doppler radar
NASA Astrophysics Data System (ADS)
Ding, Yipeng; Lin, Xiaoyi; Sun, Ke-Hui; Xu, Xue-Mei; Liu, Xi-Yao
2016-07-01
Doppler radar is a cost-effective tool for moving target tracking, which can support a large range of civilian and military applications. A modified linear predictive coding (LPC) approach is proposed to increase the target localization accuracy of the Doppler radar. Based on the time-frequency analysis of the received echo, the proposed approach first estimates the noise statistical parameters in real time and constructs an adaptive filter to intelligently suppress the noise interference. Then, a linear predictive model is applied to extend the available data, which can help improve the resolution of the target localization result. Compared with the traditional LPC method, which empirically decides the extension data length, the proposed approach develops an error array to evaluate the prediction accuracy and thus adjusts the optimum extension data length intelligently. Finally, the prediction error array is superimposed with the predictor output to correct the prediction error. A series of experiments are conducted to illustrate the validity and performance of the proposed techniques.
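A hedged sketch of the generic LPC data-extension step only (least-squares AR fit followed by forward extrapolation); it omits the adaptive noise filter and the error-array correction that the modified approach adds. The echo signal, model order, and extension length below are arbitrary illustrations.

```python
# Generic linear-predictive-coding (LPC) data extension: fit AR coefficients by least
# squares, then extrapolate the record sample by sample. Signal and parameters are toy values.
import numpy as np

fs = 1000.0
t = np.arange(0, 0.5, 1 / fs)
x = np.cos(2 * np.pi * 60 * t) + 0.05 * np.random.default_rng(1).normal(size=t.size)

p, n_extend = 12, 200                        # model order and number of extrapolated samples
# Least-squares problem x[k] ~ sum_i a[i] * x[k - i - 1], for k = p .. N-1
A = np.column_stack([x[p - i - 1:-i - 1] for i in range(p)])
b = x[p:]
a, *_ = np.linalg.lstsq(A, b, rcond=None)    # AR (LPC) coefficients

ext = list(x)
for _ in range(n_extend):                    # extend the record one predicted sample at a time
    ext.append(np.dot(a, ext[-1:-p - 1:-1]))
x_extended = np.asarray(ext)
print(x.size, "->", x_extended.size, "samples after LPC extension")
```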
Rixen, M.; Ferreira-Coelho, E.; Signell, R.
2008-01-01
Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skills when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models are discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).
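A minimal illustration of the hyper-ensemble idea (not the NURC implementation): local least-squares weights are fitted so that a combination of several model-predicted drift components best matches recent drifter observations, and the same weights are then applied to the forecast period. All data are synthetic.

```python
# Hyper-ensemble sketch: learn a local weighted combination of model forecasts from a
# recent training window, then apply the weights to later forecasts. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_models = 48, 3                     # e.g., 48 hourly samples, 3 candidate models
truth = np.cumsum(rng.normal(size=n_train))                     # observed drift component (synthetic)
preds = np.column_stack([truth * s + rng.normal(scale=0.5, size=n_train)
                         for s in (0.8, 1.1, 0.6)])             # model predictions (synthetic)

# Weighted combination with a bias term: truth ~ preds @ w + b
A = np.column_stack([preds, np.ones(n_train)])
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)
w, bias = coef[:-1], coef[-1]

future_preds = np.array([[0.9, 1.2, 0.5]])                      # model forecasts at a later time
hyper_forecast = future_preds @ w + bias
print("weights:", np.round(w, 3), "bias:", round(float(bias), 3), "forecast:", hyper_forecast)
```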
Ziegelwanger, Harald; Majdak, Piotr; Kreuzer, Wolfgang
2015-01-01
Head-related transfer functions (HRTFs) can be numerically calculated by applying the boundary element method on the geometry of a listener’s head and pinnae. The calculation results are defined by geometrical, numerical, and acoustical parameters like the microphone used in acoustic measurements. The scope of this study was to estimate requirements on the size and position of the microphone model and on the discretization of the boundary geometry as a triangular polygon mesh for accurate sound localization. The evaluation involved the analysis of localization errors predicted by a sagittal-plane localization model, the comparison of equivalent head radii estimated by a time-of-arrival model, and the analysis of actual localization errors obtained in a sound-localization experiment. While the average edge length (AEL) of the mesh had a negligible effect on localization performance in the lateral dimension, the localization performance in sagittal planes degraded for larger AELs, with the geometrical error as the dominant factor. A microphone position at an arbitrary position at the entrance of the ear canal, a microphone size of 1 mm radius, and a mesh with 1 mm AEL yielded a localization performance similar to or better than observed with acoustically measured HRTFs. PMID:26233020
Rupture Predictions of Notched Ti-6Al-4V Using Local Approaches
Peron, Mirco; Berto, Filippo
2018-01-01
Ti-6Al-4V has been extensively used in structural applications in various engineering fields, from naval to automotive and from aerospace to biomedical. Structural applications are characterized by geometrical discontinuities such as notches, which are widely known to harmfully affect their tensile strength. In recent years, many attempts have been made to define solid criteria with which to reliably predict the tensile strength of materials. Among these criteria, two local approaches are worth mentioning due to the accuracy of their predictions, i.e., the strain energy density (SED) approach and the theory of critical distance (TCD) method. In this manuscript, the robustness of these two methods in predicting the tensile behavior of notched Ti-6Al-4V specimens has been compared. To this aim, two very dissimilar notch geometries have been tested, i.e., semi-circular and blunt V-notch with a notch root radius equal to 1 mm, and the experimental results have been compared with those predicted by the two models. The experimental values have been estimated with low discrepancies by both the SED approach and the TCD method, but the former results in better predictions. The deviations for the SED are in fact lower than 1.3%, while the TCD provides predictions with errors of up to almost 8.5%. Finally, the weaknesses and the strengths of the two models have been reported. PMID:29693565
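For orientation, commonly quoted static forms of the two criteria (a hedged summary, not necessarily the exact variants calibrated in this work) are

$$L \;=\; \frac{1}{\pi}\left(\frac{K_{Ic}}{\sigma_0}\right)^{2}, \qquad \bar{W} \;=\; \frac{1}{V(R_c)}\int_{V(R_c)} W\,\mathrm{d}V \;\ge\; W_c \;=\; \frac{\sigma_t^{2}}{2E},$$

where the TCD point method predicts rupture when the linear-elastic stress evaluated at a distance $L/2$ from the notch tip reaches the inherent material strength $\sigma_0$ (with $K_{Ic}$ the fracture toughness), and the SED approach predicts rupture when the strain energy density averaged over a control volume of radius $R_c$ centred at the notch tip reaches the critical value $W_c$, written here for an ideally brittle material with ultimate tensile strength $\sigma_t$ and Young's modulus $E$.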
Yoo, Byong Chul; Yeo, Seung-Gu
2017-03-01
Approximately 20% of all patients with locally advanced rectal cancer experience pathologically complete responses following neoadjuvant chemoradiotherapy (CRT) and standard surgery. The utility of radical surgery for patients exhibiting good CRT responses has been challenged. Organ-sparing strategies for selected patients exhibiting complete clinical responses include local excision or no immediate surgery. The subjects of this tailored management are patients whose presenting disease corresponds to current indications of neoadjuvant CRT, and their post-CRT tumor response is assessed by clinical and radiological examinations. However, a model predictive of the CRT response, applied before any treatment commenced, would be valuable to facilitate such a personalized approach. This would increase organ preservation, particularly in patients for whom upfront CRT is not generally prescribed. Molecular biomarkers hold the greatest promise for development of a pretreatment predictive model of CRT response. A combination of clinicopathological, radiological, and molecular markers will be necessary to render the model robust. Molecular research will also contribute to the development of drugs that can overcome the radioresistance of rectal tumors. Current treatments for rectal cancer are based on the expected prognosis given the presenting disease extent. In the future, treatment schemes may be modified by including the predicted CRT response evaluated at presentation.
MODELING MULTICOMPONENT ORGANIC CHEMICAL TRANSPORT IN THREE-FLUID-PHASE POROUS MEDIA
A two-dimensional finite-element model was developed to predict coupled transient flow and multicomponent transport of organic chemicals which can partition between NAPL, water, gas and solid phases in porous media under the assumption of local chemical equilibrium. Gas-phase pres...
MODELING MULTICOMPONENT ORGANIC CHEMICAL TRANSPORT IN THREE FLUID PHASE POROUS MEDIA
A two-dimensional finite-element model was developed to predict coupled transient flow and multicomponent transport of organic chemicals which can partition between nonaqueous phase liquid, water, gas and solid phases in porous media under the assumption of local chemical equilib...
The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.
Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T
2013-02-01
Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
Numerical simulation of heat transfer in metal foams
NASA Astrophysics Data System (ADS)
Gangapatnam, Priyatham; Kurian, Renju; Venkateshan, S. P.
2018-02-01
This paper reports a numerical study of forced convection heat transfer in high-porosity aluminum foams. Numerical modeling is done considering both local thermal equilibrium (LTE) and non-local thermal equilibrium (LTNE) conditions in ANSYS-Fluent. The results of the numerical model were validated with experimental results, where air was forced through aluminum foams in a vertical duct at different heat fluxes and velocities. It is observed that while the LTE model substantially underpredicts the heat transfer in these foams, the LTNE model predicts the Nusselt number accurately. The novelty of this study is that once hydrodynamic experiments are conducted, the permeability and porosity values obtained experimentally can be used to numerically simulate heat transfer in metal foams. The simulation of heat transfer in foams is further extended to find the effect of foam thickness on heat transfer in metal foams. The numerical results indicate that though larger foam thicknesses resulted in higher heat transfer coefficients, this effect weakens with thickness and is negligible in thick foams.
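For reference, a commonly used steady-state form of the two-equation (LTNE) energy model for convection in a saturated porous foam is sketched below; the notation (superficial velocity $\mathbf{u}$, interfacial heat transfer coefficient $h_{sf}$, specific surface area $a_{sf}$, effective conductivities $k_{f,\mathrm{eff}}$ and $k_{s,\mathrm{eff}}$) is generic and may differ in detail from the Fluent implementation used here:

$$(\rho c_p)_f\,\mathbf{u}\cdot\nabla T_f \;=\; \nabla\cdot\!\big(k_{f,\mathrm{eff}}\nabla T_f\big) + h_{sf}\,a_{sf}\,(T_s - T_f),$$
$$0 \;=\; \nabla\cdot\!\big(k_{s,\mathrm{eff}}\nabla T_s\big) - h_{sf}\,a_{sf}\,(T_s - T_f).$$

The LTE model is obtained by assuming $T_s = T_f$ locally and summing the two balances into a single energy equation, which is why it can underpredict heat transfer whenever the solid and fluid temperatures actually differ.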
NASA Technical Reports Server (NTRS)
Case, Jonathan L; White, Kristopher D.
2014-01-01
The NASA Short-term Prediction Research and Transition (SPoRT) Center in Huntsville, AL is running a real-time configuration of the Noah land surface model (LSM) within the NASA Land Information System (LIS) framework (hereafter referred to as the "SPoRT-LIS"). Output from the real-time SPoRT-LIS is used for (1) initializing land surface variables for local modeling applications, and (2) displaying in decision support systems for situational awareness and drought monitoring at select NOAA/National Weather Service (NWS) partner offices. The experimental CONUS run incorporates hourly quantitative precipitation estimation (QPE) from the National Severe Storms Laboratory Multi-Radar Multi-Sensor (MRMS), which will be transitioned into operations at the National Centers for Environmental Prediction (NCEP) in Fall 2014. This paper describes the current and experimental SPoRT-LIS configurations, and documents some of the limitations still remaining through the advent of MRMS precipitation analyses in the SPoRT-LIS land surface model (LSM) simulations.
Well logging interpretation of production profile in horizontal oil-water two phase flow pipes
NASA Astrophysics Data System (ADS)
Zhai, Lu-Sheng; Jin, Ning-De; Gao, Zhong-Ke; Zheng, Xi-Ke
2012-03-01
Due to the complicated radial distribution of local velocity and local phase holdup in horizontal oil-water two-phase flow, it is difficult to measure the total flow rate and phase volume fraction. In this study, we carried out dynamic experiments in a horizontal oil-water two-phase flow simulation well using a combined measurement system comprising a turbine flowmeter with a petal-type concentrating diverter, a conductance sensor, and a flow-passing capacitance sensor. According to the resolution of the conductance and capacitance sensor responses in different ranges of total flow rate and water cut, we use a drift flux model and a statistical model, respectively, to predict the partial phase flow rates. The results indicate that the variable-coefficient drift flux model can self-adaptively tune its model parameters according to the characteristics of the oil-water two-phase flow, and the predicted partial phase flow rates are of high accuracy.
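For context, a commonly quoted constant-coefficient form of the drift-flux closure used for this kind of interpretation (the variable-coefficient model in the study adapts the coefficients to the observed flow conditions; the notation here is generic, not taken from the paper) is

$$v_o \;=\; \frac{j_o}{Y_o} \;=\; C_0\,(j_o + j_w) + v_d ,$$

where $v_o$ is the in-situ oil velocity, $j_o$ and $j_w$ are the oil and water superficial velocities, $Y_o$ is the oil holdup, $C_0$ is the distribution coefficient, and $v_d$ is the drift velocity. With the total superficial velocity $j = j_o + j_w$ obtained from the turbine flowmeter and $Y_o$ from the conductance/capacitance sensors, this single relation can be solved for $j_o$, giving the partial flow rates $Q_o = A\,j_o$ and $Q_w = A\,(j - j_o)$ for a pipe of cross-sectional area $A$.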
NASA Astrophysics Data System (ADS)
Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Santos-Filho, Osvaldo A.; Esposito, Emilio X.; Hopfinger, Anton J.; Tseng, Yufeng J.
2008-06-01
In previous studies we have developed categorical QSAR models for predicting skin-sensitization potency based on 4D-fingerprint (4D-FP) descriptors and in vivo murine local lymph node assay (LLNA) measures. Only 4D-FP derived from the ground state (GMAX) structures of the molecules were used to build the QSAR models. In this study we have generated 4D-FP descriptors from the first excited state (EMAX) structures of the molecules. The GMAX, EMAX and the combined ground and excited state 4D-FP descriptors (GEMAX) were employed in building categorical QSAR models. Logistic regression (LR) and partial least square coupled logistic regression (PLS-CLR), found to be effective model-building methods for the LLNA skin-sensitization measures in our previous studies, were used again in this study. This also permitted comparison of the prior ground state models to those involving first excited state 4D-FP descriptors. Three types of categorical QSAR models were constructed for each of the GMAX, EMAX and GEMAX datasets: a binary model (2-state), an ordinal model (3-state) and a binary-binary model (two-2-state). No significant differences exist among the LR 2-state models constructed for each of the three datasets. However, the PLS-CLR 3-state and 2-state models based on the EMAX and GEMAX datasets have higher predictivity than those constructed using only the GMAX dataset. These EMAX and GEMAX categorical models are also more significant and predictive than corresponding models built in our previous QSAR studies of LLNA skin-sensitization measures.
Multivariable Time Series Prediction for the Icing Process on Overhead Power Transmission Line
Li, Peng; Zhao, Na; Zhou, Donghua; Cao, Min; Li, Jingjie; Shi, Xinling
2014-01-01
The design of monitoring and predictive alarm systems is necessary for successful overhead power transmission line icing. Given the characteristics of complexity, nonlinearity, and fitfulness in the line icing process, a model based on a multivariable time series is presented here to predict the icing load of a transmission line. In this model, the time effects of micrometeorology parameters for the icing process have been analyzed. The phase-space reconstruction theory and machine learning method were then applied to establish the prediction model, which fully utilized the history of multivariable time series data in local monitoring systems to represent the mapping relationship between icing load and micrometeorology factors. Relevant to the characteristic of fitfulness in line icing, the simulations were carried out during the same icing process or different processes to test the model's prediction precision and robustness. According to the simulation results for the Tao-Luo-Xiong Transmission Line, this model demonstrates good prediction accuracy in different processes when the prediction length is less than two hours, and would be helpful for power grid departments when deciding to take action in advance to address potential icing disasters. PMID:25136653
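A hedged sketch of the two generic ingredients described above, delay (phase-space) reconstruction of a multivariable time series followed by a machine-learning regressor mapping the embedded history to a future icing load; the synthetic series, the embedding parameters, and the choice of support vector regression are illustrative assumptions, not the paper's configuration.

```python
# Delay-embedding plus regression sketch for multivariable time-series prediction.
# Series, embedding dimension m, delay tau, and horizon are arbitrary toy choices.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 500
temp = np.sin(np.linspace(0, 20, n)) + 0.1 * rng.normal(size=n)      # micrometeorology proxy (synthetic)
icing = np.cumsum(0.05 * (temp < 0)) + 0.05 * rng.normal(size=n)     # hypothetical icing load

m, tau, horizon = 4, 3, 6              # embedding dimension, delay, prediction horizon
rows = range(m * tau, n - horizon)
X = np.array([np.r_[icing[t - m * tau:t:tau], temp[t - m * tau:t:tau]] for t in rows])
y = np.array([icing[t + horizon] for t in rows])

model = SVR(C=10.0).fit(X[:-50], y[:-50])            # train on the earlier part of the record
rmse = np.sqrt(np.mean((model.predict(X[-50:]) - y[-50:]) ** 2))
print("hold-out RMSE:", round(float(rmse), 4))
```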
Nonlinear Modeling by Assembling Piecewise Linear Models
NASA Technical Reports Server (NTRS)
Yao, Weigang; Liou, Meng-Sing
2013-01-01
To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model remains robust and accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
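A toy sketch of the assembly idea on a one-dimensional function (the aerodynamic application is far richer): first-order Taylor local models at a set of sampling states are blended with normalized Gaussian radial-basis-function weights. The test function and RBF width are arbitrary choices.

```python
# Assemble piecewise linear (first-order Taylor) local models with normalized RBF weights.
import numpy as np

f = lambda x: np.sin(3 * x) + 0.3 * x**2            # toy nonlinear "full-order" response
df = lambda x: 3 * np.cos(3 * x) + 0.6 * x          # its derivative at the sampling states

centers = np.linspace(-2.0, 2.0, 9)                 # sampling states
sigma = 0.5                                         # RBF width (tuning choice)

def predict(x):
    x = np.atleast_1d(x)
    w = np.exp(-((x[:, None] - centers[None, :]) / sigma) ** 2)   # Gaussian RBF weights
    w /= w.sum(axis=1, keepdims=True)                             # normalized blending
    local = f(centers)[None, :] + df(centers)[None, :] * (x[:, None] - centers[None, :])
    return (w * local).sum(axis=1)                                # weighted assembly of local models

x_test = np.linspace(-2, 2, 5)
print("max assembly error on toy problem:", np.max(np.abs(predict(x_test) - f(x_test))))
```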
Wang, Bao-Zhen; Chen, Zhi
2013-01-01
This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model are developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling are brought into an integrated modeling environment. This allows more details of the spatial variation in source distributions and meteorological conditions to be analyzed quantitatively. The developed modeling approach has been examined to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
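As a concrete point of reference, the sketch below implements a textbook Gaussian plume kernel for a single elevated point source with ground reflection, of the kind a multi-source Gaussian module would evaluate and sum over all point sources on a GIS grid; the dispersion parameters sigma_y and sigma_z (which in practice depend on downwind distance and stability class) and all numerical inputs are hypothetical.

```python
# Textbook Gaussian plume kernel for one point source (with ground reflection).
# A multi-source model would sum this contribution over all sources at each receptor.
import numpy as np

def gaussian_plume(q, u, y, z, stack_h, sigma_y, sigma_z):
    """Concentration (g/m^3) at crosswind offset y and height z for emission rate q (g/s),
    wind speed u (m/s), effective stack height stack_h (m), and dispersion parameters (m)."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - stack_h)**2 / (2 * sigma_z**2))
                + np.exp(-(z + stack_h)**2 / (2 * sigma_z**2)))   # image source for ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical receptor 1 km downwind of a 30 m stack; sigma values chosen for illustration.
print(gaussian_plume(q=100.0, u=3.0, y=50.0, z=2.0, stack_h=30.0, sigma_y=80.0, sigma_z=40.0))
```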
NASA Technical Reports Server (NTRS)
Mixson, J. S.; Roussos, L. A.
1986-01-01
Possible reasons for disagreement between measured and predicted trends of sidewall noise transmission at low frequency are investigated using simplified analysis methods. An analytical model combining incident plane acoustic waves with an infinite flat panel is used to study the effects of sound incidence angle, plate structural properties, frequency, absorption, and the difference between noise reduction and transmission loss. Analysis shows that these factors have significant effects on noise transmission but they do not account for the differences between measured and predicted trends at low frequencies. An analytical model combining an infinite flat plate with a normally incident acoustic wave having exponentially decaying magnitude along one coordinate is used to study the effect of a localized source distribution such as is associated with propeller noise. Results show that the localization brings the predicted low-frequency trend of noise transmission into better agreement with measured propeller results. This effect is independent of low-frequency stiffness effects that have been previously reported to be associated with boundary conditions.
Bankruptcy Prevention: New Effort to Reflect on Legal and Social Changes.
Kliestik, Tomas; Misankova, Maria; Valaskova, Katarina; Svabova, Lucia
2018-04-01
Every corporation has an economic and moral responsibility to its stockholders to perform well financially. However, the number of bankruptcies in Slovakia has been growing for several years without an apparent macroeconomic cause. To prevent rapid deterioration and the outflow of foreign capital, various preventive efforts are being implemented. Robust analysis using conventional bankruptcy prediction tools revealed that the existing models are adaptable to local conditions, particularly local legislation. Furthermore, it was confirmed that most of these outdated tools have sufficient capability to warn of impending financial problems several years in advance. A novel bankruptcy prediction tool that outperforms the conventional models was developed. However, it is increasingly challenging to predict bankruptcy risk as corporations have become more global and more complex and as they have developed sophisticated schemes to hide their actual situations under the guise of "optimization" for tax authorities. Nevertheless, scepticism remains because economic engineers have established bankruptcy as a strategy to limit the liability resulting from court-imposed penalties.
NASA Astrophysics Data System (ADS)
Engwirda, Darren
2017-06-01
An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.
Comparative dynamics of avian communities across edges and interiors of North American ecoregions
Karanth, K.K.; Nichols, J.D.; Sauer, J.R.; Hines, J.E.
2006-01-01
Aim: Based on a priori hypotheses, we developed predictions about how avian communities might differ at the edges vs. interiors of ecoregions. Specifically, we predicted lower species richness and greater local turnover and extinction probabilities for regional edges. We tested these predictions using North American Breeding Bird Survey (BBS) data across nine ecoregions over a 20-year time period. Location: Data from 2238 BBS routes within nine ecoregions of the United States were used. Methods: The estimation methods used accounted for species detection probabilities < 1. Parameter estimates for species richness, local turnover and extinction probabilities were obtained using the program COMDYN. We examined the difference in community-level parameters estimated from within exterior edges (the habitat interface between ecoregions), interior edges (the habitat interface between two bird conservation regions within the same ecoregion) and interior (habitat excluding interfaces). General linear models were constructed to examine sources of variation in community parameters for five ecoregions (containing all three habitat types) and all nine ecoregions (containing two habitat types). Results: Analyses provided evidence that interior habitats and interior edges had on average higher bird species richness than exterior edges, providing some evidence of reduced species richness near habitat edges. Lower average extinction probabilities and turnover rates in interior habitats (five-region analysis) provided some support for our predictions about these quantities. However, analyses directed at all three response variables, i.e. species richness, local turnover, and local extinction probability, provided evidence of an interaction between habitat and region, indicating that the relationships did not hold in all regions. Main conclusions: The overall predictions of lower species richness, higher local turnover and extinction probabilities in regional edge habitats, as opposed to interior habitats, were generally supported. However, these predicted tendencies did not hold in all regions.
A Dynamical Model Reveals Gene Co-Localizations in Nucleus
Yao, Ye; Lin, Wei; Hennessy, Conor; Fraser, Peter; Feng, Jianfeng
2011-01-01
Co-localization of networks of genes in the nucleus is thought to play an important role in determining gene expression patterns. Based upon experimental data, we built a dynamical model to test whether pure diffusion could account for the observed co-localization of genes within a defined subnuclear region. A simple standard Brownian motion model in two and three dimensions shows that preferential co-localization is possible for co-regulated genes without any direct interaction, and suggests the occurrence may be due to a limitation in the number of available transcription factors. Experimental data of chromatin movements demonstrates that fractional rather than standard Brownian motion is more appropriate to model gene mobilizations, and we tested our dynamical model against recent static experimental data, using a sub-diffusion process by which the genes tend to colocalize more easily. Moreover, in order to compare our model with recently obtained experimental data, we studied the association level between genes and factors, and presented data supporting the validation of this dynamic model. As further applications of our model, we applied it to test against more biological observations. We found that increasing transcription factor number, rather than factory number and nucleus size, might be the reason for decreasing gene co-localization. In the scenario of frequency- or amplitude-modulation of transcription factors, our model predicted that frequency-modulation may increase the co-localization between its targeted genes. PMID:21760760
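A hedged illustration of the modeling choice discussed above: exact-covariance (Cholesky) synthesis of fractional Gaussian noise, integrated to a fractional Brownian motion path, with a Hurst exponent below 0.5 giving the sub-diffusive behavior invoked for chromatin; the trajectory length and H values are arbitrary.

```python
# Simulate standard vs. fractional Brownian motion for a one-dimensional coordinate of a
# diffusing locus. Fractional Gaussian noise is synthesized from its exact covariance.
import numpy as np

def fbm(n, hurst, rng):
    k = np.arange(n)
    # Autocovariance of fractional Gaussian noise at lag k
    gamma = 0.5 * (np.abs(k + 1)**(2 * hurst) - 2 * np.abs(k)**(2 * hurst) + np.abs(k - 1)**(2 * hurst))
    cov = gamma[np.abs(k[:, None] - k[None, :])]        # Toeplitz covariance matrix
    noise = np.linalg.cholesky(cov) @ rng.normal(size=n)
    return np.concatenate([[0.0], np.cumsum(noise)])    # integrate the increments -> fBm path

rng = np.random.default_rng(7)
sub_diffusive = fbm(500, hurst=0.35, rng=rng)           # sub-diffusion: excursions stay more local
standard = fbm(500, hurst=0.50, rng=rng)                # ordinary Brownian motion
print("increment variances:", np.var(np.diff(sub_diffusive)), np.var(np.diff(standard)))
```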
Kashima, Saori; Yorifuji, Takashi; Sawada, Norie; Nakaya, Tomoki; Eboshida, Akira
2018-08-01
Typically, land use regression (LUR) models have been developed using campaign monitoring data rather than routine monitoring data. However, the latter have advantages such as low cost and long-term coverage. Based on the idea that LUR models representing regional differences in air pollution and regional road structures are optimal, the objective of this study was to evaluate the validity of LUR models for nitrogen dioxide (NO2) based on routine and campaign monitoring data obtained from an urban area. We selected the city of Suita in Osaka (Japan). We built a model based on routine monitoring data obtained from all sites (routine-LUR-All), and a model based on campaign monitoring data (campaign-LUR) within the city. Models based on routine monitoring data obtained from background sites (routine-LUR-BS) and based on data obtained from roadside sites (routine-LUR-RS) were also built. The routine LUR models were based on monitoring networks across two prefectures (i.e., Osaka and Hyogo prefectures). We calculated the predictability of each model. We then compared the predicted NO2 concentrations from each model with measured annual average NO2 concentrations from evaluation sites. The routine-LUR-All and routine-LUR-BS models both predicted NO2 concentrations well: adjusted R² = 0.68 and 0.76, respectively, and root mean square error = 3.4 and 2.1 ppb, respectively. The predictions from the routine-LUR-All model were highly correlated with the measured NO2 concentrations at evaluation sites. Although the predicted NO2 concentrations from each model were correlated, the LUR models based on routine networks, and particularly those based on all monitoring sites, provided better visual representations of the local road conditions in the city. The present study demonstrated that LUR models based on routine data could estimate local traffic-related air pollution in an urban area. The importance and usefulness of data from routine monitoring networks should be acknowledged. Copyright © 2018 Elsevier B.V. All rights reserved.
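A minimal land-use-regression sketch: an ordinary linear model relating annual NO2 at monitoring sites to GIS-derived predictors, then applied at an unmonitored location. The predictor names and all numbers are hypothetical placeholders, not the Suita variables or buffer definitions.

```python
# Land use regression (LUR) sketch: linear model of annual NO2 on GIS-derived predictors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_sites = 40
road_length_100m = rng.uniform(0, 2000, n_sites)     # m of major road within a 100 m buffer (hypothetical)
traffic_density = rng.uniform(0, 5e4, n_sites)       # vehicles/day on the nearest road (hypothetical)
distance_to_road = rng.uniform(5, 500, n_sites)      # m to the nearest major road (hypothetical)

X = np.column_stack([road_length_100m, traffic_density, 1.0 / distance_to_road])
no2 = (12 + 0.003 * road_length_100m + 1e-4 * traffic_density
       + 300.0 / distance_to_road + rng.normal(scale=2.0, size=n_sites))   # synthetic annual NO2 (ppb)

lur = LinearRegression().fit(X, no2)
print("R2 on the monitoring sites:", round(lur.score(X, no2), 2))
print("predicted NO2 at a new location:", lur.predict([[800.0, 2.0e4, 1 / 50.0]]))
```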
NASA Astrophysics Data System (ADS)
Li, Yane; Fan, Ming; Cheng, Hu; Zhang, Peng; Zheng, Bin; Li, Lihua
2018-01-01
This study aims to develop and test a new imaging marker-based short-term breast cancer risk prediction model. An age-matched dataset of 566 screening mammography cases was used. All ‘prior’ images acquired in the two screening series were negative, while in the ‘current’ screening images, 283 cases were positive for cancer and 283 cases remained negative. For each case, two bilateral cranio-caudal view mammograms acquired from the ‘prior’ negative screenings were selected and processed by a computer-aided image processing scheme, which segmented the entire breast area into nine strip-based local regions, extracted the element regions using difference of Gaussian filters, and computed both global- and local-based bilateral asymmetrical image features. An initial feature pool included 190 features related to the spatial distribution and structural similarity of grayscale values, as well as of the magnitude and phase responses of multidirectional Gabor filters. Next, a short-term breast cancer risk prediction model based on a generalized linear model was built using an embedded stepwise regression analysis method to select features and a leave-one-case-out cross-validation method to predict the likelihood of each woman having image-detectable cancer in the next sequential mammography screening. The area under the receiver operating characteristic curve (AUC) significantly increased from 0.5863 ± 0.0237, when the model was trained with image features extracted from the global regions only, to 0.6870 ± 0.0220, when it was trained with features extracted from both the global and the matched local regions (p = 0.0001). The odds ratio values monotonically increased from 1.00 to 8.11, with a significantly increasing trend in slope (p = 0.0028), as the model-generated risk score increased. In addition, the AUC values were 0.6555 ± 0.0437, 0.6958 ± 0.0290, and 0.7054 ± 0.0529 for the three age groups of 37-49, 50-65, and 66-87 years old, respectively. AUC values of 0.6529 ± 0.1100, 0.6820 ± 0.0353, 0.6836 ± 0.0302 and 0.8043 ± 0.1067 were yielded for the four mammography density sub-groups (BI-RADS 1-4), respectively. This study demonstrated that bilateral asymmetry features extracted from local regions combined with the global region in bilateral negative mammograms could be used as a new imaging marker to assist in the prediction of short-term breast cancer risk.
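A hedged sketch of the evaluation pattern described above, a generalized linear (logistic) risk model scored with leave-one-case-out cross-validation and the AUC; the stepwise feature selection is omitted and all features and labels are synthetic.

```python
# Leave-one-case-out cross-validated logistic risk model evaluated with the AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n_cases, n_features = 120, 12                       # stand-ins for 566 cases and 190 features
X = rng.normal(size=(n_cases, n_features))          # synthetic bilateral-asymmetry image features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n_cases) > 0).astype(int)

risk_score = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                               cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("leave-one-case-out AUC:", round(roc_auc_score(y, risk_score), 3))
```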
Lee, Hasup; Baek, Minkyung; Lee, Gyu Rie; Park, Sangwoo; Seok, Chaok
2017-03-01
Many proteins function as homo- or hetero-oligomers; therefore, attempts to understand and regulate protein functions require knowledge of protein oligomer structures. The number of available experimental protein structures is increasing, and oligomer structures can be predicted using the experimental structures of related proteins as templates. However, template-based models may have errors due to sequence differences between the target and template proteins, which can lead to functional differences. Such structural differences may be predicted by loop modeling of local regions or refinement of the overall structure. In CAPRI (Critical Assessment of PRotein Interactions) round 30, we used recently developed features of the GALAXY protein modeling package, including template-based structure prediction, loop modeling, model refinement, and protein-protein docking to predict protein complex structures from amino acid sequences. Out of the 25 CAPRI targets, medium and acceptable quality models were obtained for 14 and 1 target(s), respectively, for which proper oligomer or monomer templates could be detected. Symmetric interface loop modeling on oligomer model structures successfully improved model quality, while loop modeling on monomer model structures failed. Overall refinement of the predicted oligomer structures consistently improved the model quality, in particular in interface contacts. Proteins 2017; 85:399-407. © 2016 Wiley Periodicals, Inc.
Inflationary tensor fossils in large-scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dimastrogiovanni, Emanuela; Fasiello, Matteo; Jeong, Donghui
Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.
Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning.
Zhong, Shan; Liu, Quan; Fu, QiMing
2016-01-01
To improve the convergence rate and the sample efficiency, two efficient learning methods AC-HMLP and RAC-HMLP (AC-HMLP with ℓ2-regularization) are proposed by combining actor-critic algorithm with hierarchical model learning and planning. The hierarchical models consisting of the local and the global models, which are learned at the same time during learning of the value function and the policy, are approximated by local linear regression (LLR) and linear function approximation (LFA), respectively. Both the local model and the global model are applied to generate samples for planning; the former is used only if the state-prediction error does not surpass the threshold at each time step, while the latter is utilized at the end of each episode. The purpose of taking both models is to improve the sample efficiency and accelerate the convergence rate of the whole algorithm through fully utilizing the local and global information. Experimentally, AC-HMLP and RAC-HMLP are compared with three representative algorithms on two Reinforcement Learning (RL) benchmark problems. The results demonstrate that they perform best in terms of convergence rate and sample efficiency.
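A hedged sketch of one ingredient of the scheme described above: a local linear regression (LLR) one-step model of the environment that is trusted for planning only while its state-prediction error stays below a threshold. The toy dynamics, neighborhood size, and threshold are illustrative assumptions; the actor-critic updates and the global LFA model are not shown.

```python
# LLR one-step model with a state-prediction-error gate, as a simplified illustration.
import numpy as np

rng = np.random.default_rng(6)

def true_dynamics(s, a):                        # hypothetical one-dimensional dynamics
    return 0.9 * s + 0.5 * a + 0.01 * rng.normal()

memory = []                                     # stored (state, action, next_state) transitions
threshold, k = 0.05, 10                         # error gate and LLR neighborhood size

def llr_predict(s, a):
    if len(memory) < k:
        return None
    data = np.array(memory)
    d = np.linalg.norm(data[:, :2] - np.array([s, a]), axis=1)
    nb = data[np.argsort(d)[:k]]                # k nearest stored transitions
    A = np.column_stack([nb[:, :2], np.ones(k)])
    coef, *_ = np.linalg.lstsq(A, nb[:, 2], rcond=None)
    return coef @ np.array([s, a, 1.0])         # locally linear one-step prediction

s, trusted = 0.0, 0
for t in range(200):
    a = rng.uniform(-1, 1)
    s_next = true_dynamics(s, a)
    pred = llr_predict(s, a)
    if pred is not None and abs(pred - s_next) < threshold:
        trusted += 1                            # local model would generate planning samples here
    memory.append((s, a, s_next))
    s = s_next
print(f"local model trusted on {trusted}/200 steps")
```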