Using Robust Variance Estimation to Combine Multiple Regression Estimates with Meta-Analysis
ERIC Educational Resources Information Center
Williams, Ryan
2013-01-01
The purpose of this study was to explore the use of robust variance estimation for combining commonly specified multiple regression models and for combining sample-dependent focal slope estimates from diversely specified models. The proposed estimator obviates traditionally required information about the covariance structure of the dependent…
Genomic-based multiple-trait evaluation in Eucalyptus grandis using dominant DArT markers.
Cappa, Eduardo P; El-Kassaby, Yousry A; Muñoz, Facundo; Garcia, Martín N; Villalba, Pamela V; Klápště, Jaroslav; Marcucci Poltri, Susana N
2018-06-01
We investigated the impact of combining the pedigree- and genomic-based relationship matrices in a multiple-trait individual-tree mixed model (a.k.a., multiple-trait combined approach) on the estimates of heritability and on the genomic correlations between growth and stem straightness in an open-pollinated Eucalyptus grandis population. Additionally, the added advantage of incorporating genomic information on the theoretical accuracies of parents and offspring breeding values was evaluated. Our results suggested that the use of the combined approach for estimating heritabilities and additive genetic correlations in multiple-trait evaluations is advantageous and including genomic information increases the expected accuracy of breeding values. Furthermore, the multiple-trait combined approach was proven to be superior to the single-trait combined approach in predicting breeding values, in particular for low-heritability traits. Finally, our results advocate the use of the combined approach in forest tree progeny testing trials, specifically when a multiple-trait individual-tree mixed model is considered. Copyright © 2018 Elsevier B.V. All rights reserved.
Algorithms for output feedback, multiple-model, and decentralized control problems
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.
A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield
NASA Astrophysics Data System (ADS)
Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan
2018-04-01
In this paper, we propose a hybrid model that combines the multiple linear regression model with the fuzzy c-means method. This research involved the relationship between 20 topsoil variables that were analyzed prior to planting of paddy at standard fertilizer rates. Data were drawn from the multi-location rice trials carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using the multiple linear regression model alone and the hybrid of the multiple linear regression model and the fuzzy c-means method. Analyses of normality and multicollinearity indicate that the data are normally distributed and free of multicollinearity among the independent variables. The fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
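No code accompanies the abstract, but the general recipe (soft-cluster the observations with fuzzy c-means, then fit one membership-weighted regression per cluster) is easy to sketch. A minimal numpy illustration with synthetic data; all parameters and data here are invented, not MARDI's:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means; returns cluster centres and an (n x c) membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # random initial memberships
    for _ in range(n_iter):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]      # membership-weighted centres
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)                 # each row sums to one
    return centres, U

def clusterwise_regression(X, y, U):
    """One weighted least-squares fit per cluster, observations weighted by membership."""
    A = np.column_stack([np.ones(len(X)), X])             # design matrix with intercept
    betas = []
    for k in range(U.shape[1]):
        sw = np.sqrt(U[:, k])[:, None]                    # sqrt weights give WLS via lstsq
        beta, *_ = np.linalg.lstsq(A * sw, (y[:, None] * sw).ravel(), rcond=None)
        betas.append(beta)
    return np.array(betas)

# synthetic stand-ins for the topsoil covariates and paddy yield
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))
y = X @ np.array([1.0, -0.5, 0.2, 0.0]) + rng.normal(0, 0.1, size=120)
_, U = fuzzy_cmeans(X, c=2)
betas = clusterwise_regression(X, y, U)                   # one coefficient vector per cluster
```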
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
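The paper's Bayesian machinery (multivariate random effects, no perfect reference standard) is beyond a short example, but the core cROC idea, scoring subjects by the predictive probability of disease given several biomarkers and taking the AUC of that score, can be illustrated with a simplified frequentist stand-in on simulated data (all values hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
disease = rng.integers(0, 2, n)                    # simulated true status (0/1)
# two correlated biomarkers, each shifted upward in diseased subjects
b1 = rng.normal(0, 1, n) + 0.8 * disease
b2 = 0.5 * b1 + rng.normal(0, 1, n) + 0.6 * disease
X = np.column_stack([b1, b2])

auc_single = roc_auc_score(disease, b1)            # one biomarker on its own
prob = LogisticRegression().fit(X, disease).predict_proba(X)[:, 1]
auc_combined = roc_auc_score(disease, prob)        # AUC of the combined criterion (cAUC analogue)
print(f"single AUC {auc_single:.3f}, combined AUC {auc_combined:.3f}")
```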
Bouwhuis, Stef; Geuskens, Goedele A; Boot, Cécile R L; Bongers, Paulien M; van der Beek, Allard J
2017-08-01
To construct prediction models for transitions to combination multiple job holding (MJH; holding multiple jobs as an employee) and hybrid MJH (being an employee and self-employed) among employees aged 45-64. A total of 5187 employees in the Netherlands completed online questionnaires annually between 2010 and 2013. We applied logistic regression analyses with a backward elimination strategy to construct prediction models. Transitions to combination MJH and hybrid MJH were best predicted by a combination of factors including demographics, health and mastery, work characteristics, work history, skills and knowledge, social factors, and financial factors. Not having a permanent contract and a poor household financial situation predicted both transitions. Some predictors only predicted combination MJH (e.g., working part-time) or hybrid MJH (e.g., work-home interference). A wide variety of factors predict combination MJH and/or hybrid MJH. The prediction model approach allowed for the identification of predictors that have not been previously studied. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Joshi, Aditya; Lindsey, Brooks D.; Dayton, Paul A.; Pinton, Gianmarco; Muller, Marie
2017-05-01
Ultrasound contrast agents (UCA), such as microbubbles, enhance the scattering properties of blood, which is otherwise hypoechoic. The multiple scattering interactions of the acoustic field with UCA are poorly understood due to the complexity of the multiple scattering theories and the nonlinear microbubble response. The majority of bubble models describe the behavior of UCA as single, isolated microbubbles suspended in an infinite medium. Multiple scattering models such as the independent scattering approximation can approximate phase velocity and attenuation for low scatterer volume fractions. However, all current models and simulation approaches only describe multiple scattering and nonlinear bubble dynamics separately. Here we present an approach that combines two existing models: (1) a full-wave model that describes nonlinear propagation and scattering interactions in a heterogeneous attenuating medium and (2) a Paul-Sarkar model that describes the nonlinear interactions between an acoustic field and microbubbles. These two models were solved numerically and combined with an iterative approach. The convergence of this combined model was explored in silico for 0.5 × 10⁶ microbubbles ml⁻¹, and for 1% and 2% bubble concentrations by volume. The backscattering predicted by our modeling approach was verified experimentally with water tank measurements performed with a 128-element linear array transducer. An excellent agreement in terms of the fundamental and harmonic acoustic fields is shown. Additionally, our model correctly predicts the phase velocity and attenuation measured using through transmission and predicted by the independent scattering approximation.
Features in visual search combine linearly
Pramod, R. T.; Arun, S. P.
2014-01-01
Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocals of reaction times) for their ability to predict multiple-feature searches. Multiple-feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
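A co-activation model of this kind amounts to a linear regression on reciprocal reaction times. A toy sketch with synthetic data; the weights and noise level are invented, not the authors' estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# reciprocal RTs (1/s) for the three single-feature searches (synthetic stand-ins)
s = rng.uniform(0.5, 2.0, (n, 3))            # intensity, length, orientation
w_true = np.array([0.5, 0.3, 0.2])           # hypothetical combination weights
y = s @ w_true + rng.normal(0, 0.02, n)      # multi-feature search: reciprocal RTs add linearly

w, *_ = np.linalg.lstsq(s, y, rcond=None)    # fit the linear co-activation model
r = np.corrcoef(s @ w, y)[0, 1]              # model-versus-data correlation
print(f"recovered weights {w.round(2)}, r = {r:.2f}")
```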
RooStatsCms: A tool for analysis modelling, combination and statistical studies
NASA Astrophysics Data System (ADS)
Piparo, D.; Schott, G.; Quast, G.
2010-04-01
RooStatsCms is an object oriented statistical framework based on the RooFit technology. Its scope is to allow the modelling, statistical analysis and combination of multiple search channels for new phenomena in High Energy Physics. It provides a variety of methods described in literature implemented as classes, whose design is oriented to the execution of multiple CPU intensive jobs on batch systems or on the Grid.
Xiong, Chengjie; Luo, Jingqin; Morris, John C; Bateman, Randall
2018-01-01
Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at the early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model on repeated measures to model the disease progression over time on multiple efficacy outcomes, and derive the optimum weights to combine multiple outcome measures by minimizing the sample sizes to adequately power the clinical trials. A cross-validation simulation study is conducted to assess the accuracy for the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights to combine multiple outcome measures can be accurately estimated, and that compared to the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the clinical trial. PMID:29546251
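Under simplifying assumptions the optimum-weight idea has a compact closed form: if a composite endpoint w'Y has treatment effect w'δ and variance w'Σw, the required sample size scales with (w'Σw)/(w'δ)², which is minimized at w ∝ Σ⁻¹δ. A sketch with invented values (the DIAN covariances and effect sizes are not reproduced here):

```python
import numpy as np

# Illustrative quantities only -- not the DIAN estimates.
rho = 0.3
Sigma = rho * np.ones((6, 6)) + (1 - rho) * np.eye(6)    # compound-symmetry covariance
delta = np.array([0.30, 0.25, 0.20, 0.20, 0.15, 0.10])   # assumed effects on 6 outcomes

w = np.linalg.solve(Sigma, delta)                        # w proportional to Sigma^{-1} delta
w /= w.sum()                                             # scale does not change the ratio

def sample_size_factor(w):
    """Required n scales with var(w'Y) / (w'delta)^2 for the composite endpoint."""
    return (w @ Sigma @ w) / (w @ delta) ** 2

print(sample_size_factor(w))                  # optimal weights
print(sample_size_factor(np.full(6, 1 / 6)))  # equal weights need a larger trial
```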
An improved Multimodel Approach for Global Sea Surface Temperature Forecasts
NASA Astrophysics Data System (ADS)
Khan, M. Z. K.; Mehrotra, R.; Sharma, A.
2014-12-01
The concept of ensemble combinations for formulating improved climate forecasts has gained popularity in recent years. However, many climate models share similar physics or modeling processes, which may lead to similar (or strongly correlated) forecasts. Recent approaches for combining forecasts that take into consideration differences in model accuracy over space and time have either ignored the similarity of forecasts among the models or followed a pairwise dynamic combination approach. Here we present a basis for combining model predictions, illustrating the improvements that can be achieved if procedures for factoring in inter-model dependence are utilised. The utility of the approach is demonstrated by combining sea surface temperature (SST) forecasts from five climate models over the period 1960-2005. The variable of interest, the monthly global sea surface temperature anomaly (SSTA) on a 5° × 5° latitude-longitude grid, is predicted three months in advance to demonstrate the utility of the proposed algorithm. Results indicate that the proposed approach offers consistent and significant improvements for the majority of grid points compared to the case where the dependence among the models is ignored. Therefore, the proposed approach of combining multiple models by taking into account their interdependence provides an attractive alternative for obtaining improved climate forecasts. In addition, an approach to combine seasonal forecasts from multiple climate models with varying periods of availability is also demonstrated.
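A standard way to factor inter-model dependence into combination weights is to use the full error covariance matrix rather than each model's own error variance alone. A minimal sketch of such minimum-variance weights (a generic scheme, not necessarily the exact algorithm of the abstract):

```python
import numpy as np

def combination_weights(err, ignore_dependence=False):
    """err: (time, model) hindcast errors. Minimum-variance weights w = S^-1 1 / (1' S^-1 1);
    zeroing the off-diagonal covariances recovers the independence-assuming weights."""
    S = np.cov(err, rowvar=False)
    if ignore_dependence:
        S = np.diag(np.diag(S))          # keep only each model's own error variance
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

# two models sharing most of their error (similar physics) plus one independent model
rng = np.random.default_rng(0)
shared = rng.normal(0, 1.0, 200)
err = np.column_stack([shared + rng.normal(0, 0.2, 200),
                       shared + rng.normal(0, 0.2, 200),
                       rng.normal(0, 1.0, 200)])
print(combination_weights(err))                          # down-weights the correlated pair
print(combination_weights(err, ignore_dependence=True))  # treats all three as independent
```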
Due to the complexity of the processes contributing to beach bacteria concentrations, many researchers rely on statistical modeling, among which multiple linear regression (MLR) modeling is most widely used. Despite its ease of use and interpretation, there may be time dependence...
Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Deschepper, Ellen; Martens, Luc
2015-08-01
Third molar development (TMD) has been widely utilized as one of the radiographic methods for dental age estimation. Using the same radiograph of the same individual, third molar eruption (TME) information can be incorporated into the TMD regression model. This study aims to evaluate the performance of dental age estimation in the individual-method models and the combined model (TMD and TME) based on classic regressions of multiple linear and principal component analysis. A sample of 705 digital panoramic radiographs of Malay sub-adults aged between 14.1 and 23.8 years was collected. The techniques described by Gleiser and Hunt (modified by Kohler) and by Olze were employed to stage TMD and TME, respectively. The data were divided to develop three respective models based on the two regressions of multiple linear and principal component analysis. The trained models were then validated on the test sample, and the accuracy of age prediction was compared between models. The coefficient of determination (R²) and root mean square error (RMSE) were calculated. In both genders, adjusted R² increased in the linear regressions of the combined model as compared to the individual models. An overall decrease in RMSE was detected in the combined model as compared to TMD (0.03-0.06) and TME (0.2-0.8). In principal component regression, the combined model exhibited low adjusted R² and high RMSE, except in males. Dental age is thus better predicted using the combined model in multiple linear regression. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Flexible Language Constructs for Large Parallel Programs
Rosing, Matt; Schnabel, Robert
1994-01-01
The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.
Integrated model of multiple kernel learning and differential evolution for EUR/USD trading.
Deng, Shangkun; Sakurai, Akito
2014-01-01
Currency trading is an important area for individual investors, government policy decisions, and organization investments. In this study, we propose a hybrid approach referred to as MKL-DE, which combines multiple kernel learning (MKL) with differential evolution (DE) for trading a currency pair. MKL is used to learn a model that predicts changes in the target currency pair, whereas DE is used to generate the buy and sell signals for the target currency pair based on the relative strength index (RSI), while it is also combined with MKL as a trading signal. The new hybrid implementation is applied to EUR/USD trading, which is the most traded foreign exchange (FX) currency pair. MKL is essential for utilizing information from multiple information sources and DE is essential for formulating a trading rule based on a mixture of discrete structures and continuous parameters. Initially, the prediction model optimized by MKL predicts the returns based on a technical indicator called the moving average convergence and divergence. Next, a combined trading signal is optimized by DE using the inputs from the prediction model and technical indicator RSI obtained from multiple timeframes. The experimental results showed that trading using the prediction learned by MKL yielded consistent profits.
A new multiple trauma model of the mouse.
Fitschen-Oestern, Stefanie; Lippross, Sebastian; Klueter, Tim; Weuster, Matthias; Varoga, Deike; Tohidnezhad, Mersedeh; Pufe, Thomas; Rose-John, Stefan; Andruszkow, Hagen; Hildebrand, Frank; Steubesand, Nadine; Seekamp, Andreas; Neunaber, Claudia
2017-11-21
Blunt trauma is the most frequent mechanism of injury in multiple trauma, commonly resulting from road traffic collisions or falls. Two of the most frequent injuries in patients with multiple trauma are chest trauma and extremity fracture. Several trauma mouse models combine chest trauma and head injury, but no trauma mouse model to date includes the combination of long bone fractures and chest trauma. Outcome is essentially determined by the combination of these injuries. In this study, we attempted to establish a reproducible novel multiple trauma model in mice that combines blunt trauma, major injuries and simple practicability. Ninety-six male C57BL/6N mice (n = 8/group) were subjected to either isolated femur fracture or a combination of femur fracture and chest injury. Serum samples were obtained by heart puncture at defined time points of 0 h (hours), 6 h, 12 h, 24 h, 3 d (days), and 7 d. A tendency toward reduced weight and temperature was observed at 24 h after chest trauma and femur fracture. Blood analyses revealed a decrease in hemoglobin during the first 24 h after trauma. Some animals were killed by heart puncture immediately after chest contusion; these animals showed the most severe lung contusion and hemorrhage. The extent of structural lung injury varied among mice but was evident in all animals. Representative H&E-stained (Haematoxylin and Eosin-stained) paraffin lung sections of mice with multiple trauma revealed hemorrhage and an inflammatory immune response. Plasma samples of mice with chest trauma and femur fracture showed an up-regulation of IL-1β (Interleukin-1β), IL-6, IL-10, IL-12p70 and TNF-α (Tumor necrosis factor-α) compared with the control group. Mice with femur fracture and chest trauma showed a significant up-regulation of IL-6 compared with the group with isolated femur fracture. The multiple trauma mouse model comprising chest trauma and femur fracture enables many analogies to clinical cases of multiple trauma in humans and demonstrates the associated characteristic clinical and pathophysiological changes. This model is easy to perform, is economical and can be used for further research examining specific immunological questions.
ERIC Educational Resources Information Center
von Davier, Matthias
2014-01-01
Diagnostic models combine multiple binary latent variables in an attempt to produce a latent structure that provides more information about test takers' performance than do unidimensional latent variable models. Recent developments in diagnostic modeling emphasize the possibility that multiple skills may interact in a conjunctive way within the…
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu
2014-05-07
Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in the material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.
The Modeling Environment for Total Risks studies (MENTOR) system, combined with an extension of the SHEDS (Stochastic Human Exposure and Dose Simulation) methodology, provide a mechanistically consistent framework for conducting source-to-dose exposure assessments of multiple pol...
Combining multiple imputation and meta-analysis with individual participant data
Burgess, Stephen; White, Ian R; Resche-Rigon, Matthieu; Wood, Angela M
2013-01-01
Multiple imputation is a strategy for the analysis of incomplete data such that the impact of the missingness on the power and bias of estimates is mitigated. When data from multiple studies are collated, we can propose both within-study and multilevel imputation models to impute missing data on covariates. It is not clear how to choose between imputation models or how to combine imputation and inverse-variance weighted meta-analysis methods. This is especially important as often different studies measure data on different variables, meaning that we may need to impute data on a variable which is systematically missing in a particular study. In this paper, we consider a simulation analysis of sporadically missing data in a single covariate with a linear analysis model and discuss how the results would be applicable to the case of systematically missing data. We find in this context that ensuring the congeniality of the imputation and analysis models is important to give correct standard errors and confidence intervals. For example, if the analysis model allows between-study heterogeneity of a parameter, then we should incorporate this heterogeneity into the imputation model to maintain the congeniality of the two models. In an inverse-variance weighted meta-analysis, we should impute missing data and apply Rubin's rules at the study level prior to meta-analysis, rather than meta-analyzing each of the multiple imputations and then combining the meta-analysis estimates using Rubin's rules. We illustrate the results using data from the Emerging Risk Factors Collaboration. PMID:23703895
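The recommended ordering, Rubin's rules within each study first and inverse-variance meta-analysis of the pooled estimates second, is mechanical enough to sketch (all numbers hypothetical):

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Pool m within-study imputation results with Rubin's rules."""
    m = len(estimates)
    qbar = np.mean(estimates)
    within = np.mean(variances)                 # within-imputation variance
    between = np.var(estimates, ddof=1)         # between-imputation variance
    return qbar, within + (1 + 1 / m) * between

def ivw_meta(thetas, variances):
    """Fixed-effect inverse-variance weighted meta-analysis."""
    w = 1.0 / np.asarray(variances)
    return np.sum(w * thetas) / np.sum(w), 1.0 / np.sum(w)

# hypothetical per-study results: m = 5 imputations in each of 3 studies
per_study = [([0.52, 0.48, 0.55, 0.50, 0.47], [0.010] * 5),
             ([0.61, 0.58, 0.64, 0.60, 0.59], [0.008] * 5),
             ([0.40, 0.45, 0.38, 0.43, 0.41], [0.012] * 5)]

pooled = [rubin_pool(e, v) for e, v in per_study]   # Rubin's rules at the study level first...
theta, var = ivw_meta(*zip(*pooled))                # ...then meta-analyse the pooled estimates
print(theta, var)
```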
Multi-objective optimization for generating a weighted multi-model ensemble
NASA Astrophysics Data System (ADS)
Lee, H.
2017-12-01
Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.
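The central operation in multi-objective weighting is extracting the non-dominated (Pareto-optimal) candidates from a set scored on conflicting metrics. A small sketch with synthetic scores; the study's actual metrics come from satellite observations via RCMES and are not reproduced here:

```python
import numpy as np

def pareto_front(scores):
    """scores: (candidate, metric) errors to minimise; returns mask of non-dominated rows."""
    keep = np.ones(len(scores), dtype=bool)
    for i in range(len(scores)):
        dominates_i = (np.all(scores <= scores[i], axis=1)
                       & np.any(scores < scores[i], axis=1))
        keep[i] = not dominates_i.any()          # keep i only if nothing dominates it
    return keep

# candidate weight vectors on the simplex, scored on two conflicting error metrics
rng = np.random.default_rng(0)
W = rng.dirichlet(np.ones(5), size=1000)         # 1000 weightings of 5 models
err_a = np.abs(W @ rng.normal(0, 1, 5))          # synthetic stand-ins for two metric errors
err_b = np.abs(W @ rng.normal(0, 1, 5))
front = pareto_front(np.column_stack([err_a, err_b]))
print(front.sum(), "non-dominated weightings")
```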
Combining information from multiple flood projections in a hierarchical Bayesian framework
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya
2016-04-01
This study demonstrates, in the context of flood frequency analysis, the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach explicitly accommodates shared multimodel discrepancy as well as the probabilistic nature of the flood estimates, and treats the available models as a sample from a hypothetical complete (but unobserved) set of models. The methodology is applied to flood estimates from multiple hydrological projections (the Future Flows Hydrology data set) for 135 catchments in the UK. The advantages of the approach are shown to be: (1) to ensure adequate "baseline" with which to compare future changes; (2) to reduce flood estimate uncertainty; (3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; (4) to diminish the importance of model consistency when model biases are large; and (5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
Unbiased multi-fidelity estimate of failure probability of a free plane jet
NASA Astrophysics Data System (ADS)
Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin
2017-11-01
Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
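A stripped-down, one-dimensional sketch of multi-fidelity importance sampling: a cheap surrogate locates the failure region and centres a biasing density there, and a modest number of high-fidelity evaluations, re-weighted by likelihood ratios, yields an unbiased failure-probability estimate. Both models and all constants below are invented stand-ins, and the optimal-fusion step over several such estimators is omitted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def lofi(x):                       # cheap surrogate of the jet-width response (invented)
    return x - 0.05

def hifi(x):                       # "expensive" high-fidelity response (also invented)
    return x + 0.02 * np.sin(5 * x)

thresh = -2.5                      # failure: response below this threshold (rare event)
nominal = stats.norm(0, 1)         # nominal distribution of the uncertain inlet parameter

# 1) explore cheaply: centre a biasing density on the low-fidelity failure region
xs = nominal.rvs(20000, random_state=rng)
fail = xs[lofi(xs) < thresh]
bias = stats.norm(fail.mean(), fail.std() + 0.1)

# 2) few high-fidelity runs under the biasing density, re-weighted by likelihood ratios
xb = bias.rvs(500, random_state=rng)
w = nominal.pdf(xb) / bias.pdf(xb)               # keeps the estimator unbiased
p_fail = np.mean(w * (hifi(xb) < thresh))
print(f"estimated failure probability: {p_fail:.2e}")
```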
An improved null model for assessing the net effects of multiple stressors on communities.
Thompson, Patrick L; MacLennan, Megan M; Vinebrooke, Rolf D
2018-01-01
Ecological stressors (i.e., environmental factors outside their normal range of variation) can mediate each other through their interactions, leading to unexpected combined effects on communities. Determining whether the net effect of stressors is ecologically surprising requires comparing their cumulative impact to a null model that represents the linear combination of their individual effects (i.e., an additive expectation). However, we show that standard additive and multiplicative null models that base their predictions on the effects of single stressors on community properties (e.g., species richness or biomass) do not provide this linear expectation, leading to incorrect interpretations of antagonistic and synergistic responses by communities. We present an alternative, the compositional null model, which instead bases its predictions on the effects of stressors on individual species, and then aggregates them to the community level. Simulations demonstrate the improved ability of the compositional null model to accurately provide a linear expectation of the net effect of stressors. We simulate the response of communities to paired stressors that affect species in a purely additive fashion and compare the relative abilities of the compositional null model and two standard community property null models (additive and multiplicative) to predict these linear changes in species richness and community biomass across different combinations (both positive, negative, or opposite) and intensities of stressors. The compositional model predicts the linear effects of multiple stressors under almost all scenarios, allowing for proper classification of net effects, whereas the standard null models do not. Our findings suggest that current estimates of the prevalence of ecological surprises on communities based on community property null models are unreliable, and should be improved by integrating the responses of individual species to the community level as does our compositional null model. © 2017 John Wiley & Sons Ltd.
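The distinction between the two constructions can be made concrete: forming the additive expectation per species (with biomass floored at zero) and then aggregating generally differs from applying additivity directly to the community property. A toy sketch with invented biomasses; the published compositional null model has considerably more structure:

```python
import numpy as np

def community_property_null(ctrl, a_only, b_only):
    """Additive null applied directly to the community property (total biomass)."""
    return a_only.sum() + b_only.sum() - ctrl.sum()

def compositional_null(ctrl, a_only, b_only):
    """Additive expectation formed per species, floored at zero (no negative
    biomass), then aggregated up to the community property."""
    per_species = a_only + b_only - ctrl
    return np.maximum(per_species, 0.0).sum()

# per-species biomasses under control, stressor A alone, stressor B alone (invented)
ctrl   = np.array([10.0, 8.0, 5.0, 2.0])
a_only = np.array([ 4.0, 9.0, 5.0, 2.5])      # A nearly extirpates species 1
b_only = np.array([ 3.0, 8.5, 6.0, 1.0])      # B also hits species 1 hard
print(community_property_null(ctrl, a_only, b_only))  # 14.0 -- implies species 1 at -3
print(compositional_null(ctrl, a_only, b_only))       # 17.0 -- species 1 bounded at 0
```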
ERIC Educational Resources Information Center
Bechger, Timo M.; Maris, Gunter
2004-01-01
This paper is about the structural equation modelling of quantitative measures that are obtained from a multiple facet design. A facet is simply a set consisting of a finite number of elements. It is assumed that measures are obtained by combining each element of each facet. Methods and traits are two such facets, and a multitrait-multimethod…
A new adaptive multiple modelling approach for non-linear and non-stationary systems
NASA Astrophysics Data System (ADS)
Chen, Hao; Gong, Yu; Hong, Xia
2016-07-01
This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window and apply a sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
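The closed-form combination step, minimizing the recent-window MSE subject to the weights summing to one, follows from a single Lagrange multiplier. A minimal sketch (a generic implementation, not the authors' code):

```python
import numpy as np

def combination_weights(P, y):
    """Minimise ||y - P w||^2 subject to sum(w) == 1 (closed form via Lagrange multiplier).
    P: (window, M) predictions of the M selected sub-models; y: (window,) targets."""
    A = P.T @ P
    b = P.T @ y
    ones = np.ones(P.shape[1])
    Ainv_b = np.linalg.solve(A, b)
    Ainv_1 = np.linalg.solve(A, ones)
    lam = (1.0 - ones @ Ainv_b) / (ones @ Ainv_1)   # enforces the sum-to-one constraint
    return Ainv_b + lam * Ainv_1

# toy check: recovered weights sum to one by construction
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
y = P @ np.array([0.2, 0.5, 0.3]) + rng.normal(0, 0.05, 50)
w = combination_weights(P, y)
print(w, w.sum())
```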
Ji, Zhiwei; Su, Jing; Wu, Dan; Peng, Huiming; Zhao, Weiling; Nlong Zhao, Brian; Zhou, Xiaobo
2017-01-31
Multiple myeloma is a malignant, still-incurable plasma cell disorder, owing to refractory disease relapse, immune impairment, and the development of multi-drug resistance. The growth of malignant plasma cells depends on the bone marrow (BM) microenvironment and on evasion of the host's anti-tumor immune response. Hence, we hypothesized that targeting the tumor-stromal cell interaction and the endogenous immune system in the BM would potentially improve the response of multiple myeloma (MM). We therefore proposed a computational simulation of myeloma development in its complicated microenvironment, which includes immune cell components and bone marrow stromal cells, and predicted the effects of combined multi-drug treatment on myeloma cell growth. We constructed a hybrid multi-scale agent-based model (HABM) that combines an ODE system with an agent-based model (ABM). The ODE system was used to model the dynamic changes of intracellular signal transduction, and the ABM to model the cell-cell interactions among stromal cells, tumor cells, and immune components in the BM. This model simulated myeloma growth in the bone marrow microenvironment and revealed the important role of the immune system in this process. The predicted outcomes were consistent with experimental observations from previous studies. Moreover, we applied the model to predict the treatment effects of three key therapeutic drugs used for MM, and found that the combination of these three drugs potentially suppresses the growth of myeloma cells and reactivates the immune response. In summary, the proposed model may serve as a novel computational platform for simulating the formation of MM and evaluating its treatment response to multiple drugs.
Health Consequences of Racist and Antigay Discrimination for Multiple Minority Adolescents
Thoma, Brian C.; Huebner, David M.
2014-01-01
Individuals who belong to a marginalized group and who perceive discrimination based on that group membership suffer from a variety of poor health outcomes. Many people belong to more than one marginalized group, and much less is known about the influence of multiple forms of discrimination on health outcomes. Drawing on literature describing the influence of multiple stressors, three models of combined forms of discrimination are discussed: additive, prominence, and exacerbation. The current study examined the influence of multiple forms of discrimination in a sample of African American lesbian, gay, or bisexual (LGB) adolescents ages 14–19. Each of the three models of combined stressors were tested to determine which best describes how racist and antigay discrimination combine to predict depressive symptoms, suicidal ideation, and substance use. Participants were included in this analysis if they identified their ethnicity as either African American (n = 156) or African American mixed (n = 120). Mean age was 17.45 years (SD = 1.36). Results revealed both forms of mistreatment were associated with depressive symptoms and suicidal ideation among African American LGB adolescents. Racism was more strongly associated with substance use. Future intervention efforts should be targeted toward reducing discrimination and improving the social context of multiple minority adolescents, and future research with multiple minority individuals should be attuned to the multiple forms of discrimination experienced by these individuals within their environments. PMID:23731232
Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination
NASA Astrophysics Data System (ADS)
Li, Weihua; Sankarasubramanian, A.
2012-12-01
Model errors are inevitable in any prediction exercise. One approach currently gaining attention for reducing model errors is combining multiple models to develop improved predictions. The rationale behind this approach primarily lies in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions are improved. A new dynamic approach (MM-1) is proposed to combine multiple hydrological models by evaluating their performance/skill contingent on the predictor state. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with the optimal model combination scheme (MM-O) by employing them to predict streamflow generated from a known hydrologic model (the abcd model or the VIC model) with heteroscedastic error variance, as well as from a hydrologic model whose structure differs from that of the candidate models. Results from the study show that streamflow estimated from single models performed better than multimodels under almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single-model prediction. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained from each candidate model reveals that, as measurement errors increase, MM-1 assigns weights equally across all the models, whereas MM-O always assigns higher weights to the best-performing candidate model over the calibration period. Applying the multimodel algorithms to predict streamflows over four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
Oguz, Ozgur S; Zhou, Zhehua; Glasauer, Stefan; Wollherr, Dirk
2018-04-03
Human motor control is highly efficient in generating accurate and appropriate motor behavior for a multitude of tasks. This paper examines how kinematic and dynamic properties of the musculoskeletal system are controlled to achieve such efficiency. Even though recent studies have shown that human motor control relies on multiple models, how the central nervous system (CNS) controls this combination is not fully addressed. In this study, we utilize an Inverse Optimal Control (IOC) framework to find the combination of those internal models and how this combination changes for different reaching tasks. We conducted an experiment in which participants executed a comprehensive set of free-space reaching motions. The results show that there is a trade-off between kinematics- and dynamics-based controllers depending on the reaching task. In addition, this trade-off depends on the initial and final arm configurations, which in turn affect the musculoskeletal load to be controlled. Given this insight, we further provide a discomfort metric to demonstrate its influence on the contribution of the different inverse internal models. This formulation, together with our analysis, not only supports the multiple internal models (MIMs) hypothesis but also suggests a hierarchical framework for the control of human reaching motions by the CNS.
A hadoop-based method to predict potential effective drug combination.
Sun, Yifan; Xiong, Yi; Xu, Qian; Wei, Dongqing
2014-01-01
Combination drugs that impact multiple targets simultaneously are promising candidates for combating complex diseases due to their improved efficacy and reduced side effects. However, exhaustive screening of all possible drug combinations is extremely time-consuming and impractical. Here, we present a novel Hadoop-based approach to predict drug combinations by taking advantage of the MapReduce programming model, which improves the scalability of the prediction algorithm. By integrating the gene expression data of multiple drugs, we implemented data preprocessing and support vector machine and naïve Bayesian classifiers on Hadoop for the prediction of drug combinations. The experimental results suggest that our Hadoop-based model achieves much higher efficiency in the big-data processing steps with satisfactory performance. We believe that our proposed approach can help accelerate the prediction of potentially effective drug combinations as the number of possible combinations grows exponentially. The source code and datasets are available upon request.
Protein structure modeling for CASP10 by multiple layers of global optimization.
Joo, Keehyoung; Lee, Juyong; Sim, Sangjin; Lee, Sun Young; Lee, Kiho; Heo, Seungryong; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2014-02-01
In the template-based modeling (TBM) category of CASP10 experiment, we introduced a new protocol called protein modeling system (PMS) to generate accurate protein structures in terms of side-chains as well as backbone trace. In the new protocol, a global optimization algorithm, called conformational space annealing (CSA), is applied to the three layers of TBM procedure: multiple sequence-structure alignment, 3D chain building, and side-chain re-modeling. For 3D chain building, we developed a new energy function which includes new distance restraint terms of Lorentzian type (derived from multiple templates), and new energy terms that combine (physical) energy terms such as dynamic fragment assembly (DFA) energy, DFIRE statistical potential energy, hydrogen bonding term, etc. These physical energy terms are expected to guide the structure modeling especially for loop regions where no template structures are available. In addition, we developed a new quality assessment method based on random forest machine learning algorithm to screen templates, multiple alignments, and final models. For TBM targets of CASP10, we find that, due to the combination of three stages of CSA global optimizations and quality assessment, the modeling accuracy of PMS improves at each additional stage of the protocol. It is especially noteworthy that the side-chains of the final PMS models are far more accurate than the models in the intermediate steps. Copyright © 2013 Wiley Periodicals, Inc.
A Survey of Insider Attack Detection Research
2008-08-25
modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events combined through… forms of attack that have been reported. For example: unauthorized extraction, duplication, or exfiltration… network level. Schultz pointed out that no single approach will work; solutions need to be based on multiple sensors to be able to find any combination…
Tremblay-LeMay, Rosemarie; Rastgoo, Nasrin; Chang, Hong
2018-03-27
Even with recent advances in therapy regimens, multiple myeloma patients commonly develop drug resistance and relapse. The relevance of targeting the PD-1/PD-L1 axis has been demonstrated in pre-clinical models. Monotherapy with PD-1 inhibitors produced disappointing results, but combinations with other drugs used in the treatment of multiple myeloma seemed promising, and clinical trials are ongoing. However, there have recently been concerns about the safety of PD-1 and PD-L1 inhibitors combined with immunomodulators in the treatment of multiple myeloma, and several trials have been suspended. There is therefore a need for alternative combinations of drugs or different approaches to target this pathway. Protein expression of PD-L1 on cancer cells, including in multiple myeloma, has been associated with intrinsic aggressive features independent of immune evasion mechanisms, thereby providing a rationale for the adoption of new strategies that directly target PD-L1 protein expression. Drugs modulating the transcriptional and post-transcriptional regulation of PD-L1 could represent new therapeutic strategies for the treatment of multiple myeloma, help potentiate the action of other drugs, or be combined with PD-1/PD-L1 inhibitors in order to avoid the potentially problematic combination with immunomodulators. This review will focus on the pathophysiology of PD-L1 expression in multiple myeloma and on drugs that have been shown to modulate this expression.
Binocular contrast discrimination needs monocular multiplicative noise
Ding, Jian; Levi, Dennis M.
2016-01-01
The effects of signal and noise on contrast discrimination are difficult to separate because of a singularity in the signal-detection-theory model of two-alternative forced-choice contrast discrimination (Katkov, Tsodyks, & Sagi, 2006). In this article, we show that it is possible to eliminate the singularity by combining that model with a binocular combination model to fit monocular, dichoptic, and binocular contrast discrimination. We performed three experiments using identical stimuli to measure the perceived phase, perceived contrast, and contrast discrimination of a cyclopean sine wave. In the absence of a fixation point, we found a binocular advantage in contrast discrimination both at low contrasts (<4%), consistent with previous studies, and at high contrasts (≥34%), which has not been previously reported. However, control experiments showed no binocular advantage at high contrasts in the presence of a fixation point or for observers without accommodation. We evaluated two putative contrast-discrimination mechanisms: a nonlinear contrast transducer and multiplicative noise (MN). A binocular combination model (the DSKL model; Ding, Klein, & Levi, 2013b) was first fitted to both the perceived-phase and the perceived-contrast data sets, then combined with either the nonlinear contrast transducer or the MN mechanism to fit the contrast-discrimination data. We found that the best model combined the DSKL model with early MN. Model simulations showed that, after going through interocular suppression, the uncorrelated noise in the two eyes became anticorrelated, resulting in less binocular noise and therefore a binocular advantage in the discrimination task. Combining a nonlinear contrast transducer or MN with a binocular combination model (DSKL) provides a powerful method for evaluating the two putative contrast-discrimination mechanisms. PMID:26982370
Unified tensor model for space-frequency spreading-multiplexing (SFSM) MIMO communication systems
NASA Astrophysics Data System (ADS)
de Almeida, André LF; Favier, Gérard
2013-12-01
This paper presents a unified tensor model for space-frequency spreading-multiplexing (SFSM) multiple-input multiple-output (MIMO) wireless communication systems that combine space- and frequency-domain spreadings, followed by a space-frequency multiplexing. Spreading across space (transmit antennas) and frequency (subcarriers) adds resilience against deep channel fades and provides space and frequency diversities, while orthogonal space-frequency multiplexing enables multi-stream transmission. We adopt a tensor-based formulation for the proposed SFSM MIMO system that incorporates space, frequency, time, and code dimensions by means of the parallel factor model. The developed SFSM tensor model unifies the tensorial formulation of some existing multiple-access/multicarrier MIMO signaling schemes as special cases, while revealing interesting tradeoffs due to combined space, frequency, and time diversities which are of practical relevance for joint symbol-channel-code estimation. The performance of the proposed SFSM MIMO system using either a zero forcing receiver or a semi-blind tensor-based receiver is illustrated by means of computer simulation results under realistic channel and system parameters.
Ko, Junsu; Park, Hahnbeom; Seok, Chaok
2012-08-10
Protein structures can be reliably predicted by template-based modeling (TBM) when experimental structures of homologous proteins are available. However, it is challenging to obtain structures more accurate than the single best templates by either combining information from multiple templates or by modeling regions that vary among templates or are not covered by any templates. We introduce GalaxyTBM, a new TBM method in which the more reliable core region is modeled first from multiple templates and less reliable, variable local regions, such as loops or termini, are then detected and re-modeled by an ab initio method. This TBM method is based on "Seok-server," which was tested in CASP9 and assessed to be amongst the top TBM servers. The accuracy of the initial core modeling is enhanced by focusing on more conserved regions in the multiple-template selection and multiple sequence alignment stages. Additional improvement is achieved by ab initio modeling of up to 3 unreliable local regions in the fixed framework of the core structure. Overall, GalaxyTBM reproduced the performance of Seok-server, with GalaxyTBM and Seok-server resulting in average GDT-TS of 68.1 and 68.4, respectively, when tested on 68 single-domain CASP9 TBM targets. For application to multi-domain proteins, GalaxyTBM must be combined with domain-splitting methods. Application of GalaxyTBM to CASP9 targets demonstrates that accurate protein structure prediction is possible by use of a multiple-template-based approach, and ab initio modeling of variable regions can further enhance the model quality.
Health consequences of racist and antigay discrimination for multiple minority adolescents.
Thoma, Brian C; Huebner, David M
2013-10-01
Individuals who belong to a marginalized group and who perceive discrimination based on that group membership suffer from a variety of poor health outcomes. Many people belong to more than one marginalized group, and much less is known about the influence of multiple forms of discrimination on health outcomes. Drawing on literature describing the influence of multiple stressors, three models of combined forms of discrimination are discussed: additive, prominence, and exacerbation. The current study examined the influence of multiple forms of discrimination in a sample of African American lesbian, gay, or bisexual (LGB) adolescents ages 14-19. Each of the three models of combined stressors was tested to determine which best describes how racist and antigay discrimination combine to predict depressive symptoms, suicidal ideation, and substance use. Participants were included in this analysis if they identified their ethnicity as either African American (n = 156) or African American mixed (n = 120). Mean age was 17.45 years (SD = 1.36). Results revealed that both forms of mistreatment were associated with depressive symptoms and suicidal ideation among African American LGB adolescents. Racism was more strongly associated with substance use. Future intervention efforts should be targeted toward reducing discrimination and improving the social context of multiple minority adolescents, and future research with multiple minority individuals should be attuned to the multiple forms of discrimination experienced by these individuals within their environments. PsycINFO Database Record (c) 2013 APA, all rights reserved.
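In regression terms, the additive versus exacerbation contrast is a main-effects model versus a model with an interaction term. A hedged sketch with statsmodels follows; the data are simulated and the variable names are hypothetical, not the study's actual measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 276  # matches the two ethnicity groups combined, for flavor only
df = pd.DataFrame({
    "racism": rng.normal(size=n),     # perceived racist discrimination
    "antigay": rng.normal(size=n),    # perceived antigay discrimination
})
# Simulated depressive symptoms with purely additive effects (illustration).
df["depress"] = 0.4 * df["racism"] + 0.3 * df["antigay"] + rng.normal(size=n)

additive = smf.ols("depress ~ racism + antigay", data=df).fit()
exacerbation = smf.ols("depress ~ racism * antigay", data=df).fit()  # adds interaction

# A negligible interaction coefficient favors the additive model.
print(additive.aic, exacerbation.aic)
print(exacerbation.params["racism:antigay"], exacerbation.pvalues["racism:antigay"])
```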
A Framework for Distributed Problem Solving
NASA Astrophysics Data System (ADS)
Leone, Joseph; Shin, Don G.
1989-03-01
This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms, named amplification and aggregation. Amplification is a process of expounding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process that primarily relies on the ability to make associations between vast amounts of related concepts, sort out the combined results, and promote the most plausible ones. The amplification process is discussed in detail. An implementation of the amplification process is presented. The process is illustrated by an example.
Liu, Hua; Wu, Wen
2017-01-01
For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm uses a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF to deal with the state estimation of each model. The 5thSSRCKF is an improved filter algorithm, which utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and quicker model switching when handling maneuvering targets, compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF), and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF). PMID:28608843
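The Markov switching machinery shared by all the IMM variants compared above can be written compactly. The sketch below shows only the mixing step and the model-probability update; the per-model 5thSSRCKF filters are replaced by stand-in likelihoods, and the transition matrix values are hypothetical.

```python
import numpy as np

# Markov transition matrix between two motion models:
# entry [i, j] = probability of switching from model i to model j.
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])
mu = np.array([0.5, 0.5])          # current model probabilities

# Mixing step: normalizing constants and mixing weights.
c = P.T @ mu                        # predicted probability of each target model
mix = (P * mu[:, None]) / c         # mix[i, j] = P(was model i | now model j);
                                    # used to blend per-model state estimates
print(mix)

# After each per-model filter (here the 5thSSRCKF would run) returns a
# measurement likelihood, the model probabilities are updated and renormalized.
likelihood = np.array([0.8, 0.1])   # stand-in filter likelihoods
mu_new = likelihood * c
mu_new /= mu_new.sum()
print(mu_new)                       # mass shifts toward the better-fitting model
```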
Combined influence of multiple climatic factors on the incidence of bacterial foodborne diseases.
Park, Myoung Su; Park, Ki Hwan; Bahk, Gyung Jin
2018-01-01
Information regarding the relationship between the incidence of foodborne diseases (FBD) and climatic factors is useful in designing preventive strategies for FBD based on anticipated future climate change. To better predict the effect of climate change on foodborne pathogens, the present study investigated the combined influence of multiple climatic factors on bacterial FBD incidence in South Korea. During 2011-2015, the relationships between 8 climatic factors and the incidences of 13 bacterial FBD were determined, based on inpatient stays, on a monthly basis using Pearson correlation analyses, multicollinearity tests, principal component analysis (PCA), and seasonal autoregressive integrated moving average (SARIMA) modeling. Of the 8 climatic variables, the combination of temperature, relative humidity, precipitation, insolation, and cloudiness was significantly associated with salmonellosis (P<0.01), vibriosis (P<0.05), and enterohemorrhagic Escherichia coli O157:H7 infection (P<0.01). The combined effects of snowfall, wind speed, duration of sunshine, and cloudiness were not significant for these 3 FBD. Other FBD, including campylobacteriosis, were not significantly associated with any combination of climatic factors. These findings indicate that the relationships between multiple climatic factors and bacterial FBD incidence can be valuable for the development of prediction models for future patterns of diseases in response to changes in climate. Copyright © 2017 Elsevier B.V. All rights reserved.
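A SARIMA model with exogenous climate covariates of this kind can be fitted with statsmodels' SARIMAX. The sketch below uses simulated monthly counts and made-up principal-component scores; the (1,0,0)(1,0,0,12) orders are illustrative, not the study's fitted specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
months = pd.date_range("2011-01", periods=60, freq="MS")  # 2011-2015, monthly
climate = pd.DataFrame(
    rng.normal(size=(60, 2)), index=months,
    columns=["temperature_pc", "humidity_pc"],  # e.g. PCA scores of climate vars
)
cases = pd.Series(
    20 + 5 * np.sin(2 * np.pi * np.arange(60) / 12) + rng.normal(size=60),
    index=months, name="salmonellosis",          # toy monthly incidence series
)

# Seasonal ARIMA with exogenous climate regressors; 12-month seasonality.
model = SARIMAX(cases, exog=climate, order=(1, 0, 0),
                seasonal_order=(1, 0, 0, 12))
fit = model.fit(disp=False)
print(fit.params[["temperature_pc", "humidity_pc"]])  # climate effect estimates
```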
Mondy, Cédric P; Muñoz, Isabel; Dolédec, Sylvain
2016-12-01
Multiple stressors constitute a serious threat to aquatic ecosystems, particularly in the Mediterranean region where water scarcity is likely to interact with other anthropogenic stressors. Biological traits potentially allow the unravelling of the effects of multiple stressors. However, thus far, trait-based approaches have failed to fully deliver on their promise and still lack strong predictive power when multiple stressors are present. We aimed to quantify specific community tolerances against six anthropogenic stressors and investigate the responses of the underlying macroinvertebrate biological traits and their combinations. We built and calibrated boosted regression tree models to predict community tolerances using multiple biological traits with a priori hypotheses regarding their individual responses to specific stressors. We analysed the combinations of traits underlying community tolerance and the effect of trait association on this tolerance. Our results validated the following three hypotheses: (i) the community tolerance models efficiently and robustly related trait combinations to stressor intensities and, to a lesser extent, to stressors related to the presence of dams and insecticides; (ii) the effects of traits on community tolerance not only depended on trait identity but also on the trait associations emerging at the community level from the co-occurrence of different traits in species; and (iii) the community tolerances and the underlying trait combinations were specific to the different stressors. This study takes a further step towards predictive tools in community ecology that consider combinations and associations of traits as the basis of stressor tolerance. Additionally, the community tolerance concept has potential application to help stream managers in the decision process regarding management options. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
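Community-tolerance models of this type can be approximated with scikit-learn's gradient-boosted regression trees. The sketch below uses a simulated trait matrix, so the hyperparameters and the nonlinear trait-tolerance relationship are placeholders rather than the paper's calibrated models.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_sites = 200
# Columns stand for community-weighted trait metrics (hypothetical names).
traits = rng.normal(size=(n_sites, 6))
# Community tolerance to one stressor, nonlinearly related to two traits.
tolerance = np.tanh(traits[:, 0]) + 0.5 * traits[:, 1] ** 2 \
            + rng.normal(0, 0.3, n_sites)

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.01,
                                max_depth=3, subsample=0.75)
print(cross_val_score(brt, traits, tolerance, cv=5, scoring="r2").mean())
brt.fit(traits, tolerance)
print(brt.feature_importances_)   # which traits drive predicted tolerance
```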
Assessing Risk Prediction Models Using Individual Participant Data From Multiple Studies
Pennells, Lisa; Kaptoge, Stephen; White, Ian R.; Thompson, Simon G.; Wood, Angela M.; Tipping, Robert W.; Folsom, Aaron R.; Couper, David J.; Ballantyne, Christie M.; Coresh, Josef; Goya Wannamethee, S.; Morris, Richard W.; Kiechl, Stefan; Willeit, Johann; Willeit, Peter; Schett, Georg; Ebrahim, Shah; Lawlor, Debbie A.; Yarnell, John W.; Gallacher, John; Cushman, Mary; Psaty, Bruce M.; Tracy, Russ; Tybjærg-Hansen, Anne; Price, Jackie F.; Lee, Amanda J.; McLachlan, Stela; Khaw, Kay-Tee; Wareham, Nicholas J.; Brenner, Hermann; Schöttker, Ben; Müller, Heiko; Jansson, Jan-Håkan; Wennberg, Patrik; Salomaa, Veikko; Harald, Kennet; Jousilahti, Pekka; Vartiainen, Erkki; Woodward, Mark; D'Agostino, Ralph B.; Bladbjerg, Else-Marie; Jørgensen, Torben; Kiyohara, Yutaka; Arima, Hisatomi; Doi, Yasufumi; Ninomiya, Toshiharu; Dekker, Jacqueline M.; Nijpels, Giel; Stehouwer, Coen D. A.; Kauhanen, Jussi; Salonen, Jukka T.; Meade, Tom W.; Cooper, Jackie A.; Cushman, Mary; Folsom, Aaron R.; Psaty, Bruce M.; Shea, Steven; Döring, Angela; Kuller, Lewis H.; Grandits, Greg; Gillum, Richard F.; Mussolino, Michael; Rimm, Eric B.; Hankinson, Sue E.; Manson, JoAnn E.; Pai, Jennifer K.; Kirkland, Susan; Shaffer, Jonathan A.; Shimbo, Daichi; Bakker, Stephan J. L.; Gansevoort, Ron T.; Hillege, Hans L.; Amouyel, Philippe; Arveiler, Dominique; Evans, Alun; Ferrières, Jean; Sattar, Naveed; Westendorp, Rudi G.; Buckley, Brendan M.; Cantin, Bernard; Lamarche, Benoît; Barrett-Connor, Elizabeth; Wingard, Deborah L.; Bettencourt, Richele; Gudnason, Vilmundur; Aspelund, Thor; Sigurdsson, Gunnar; Thorsson, Bolli; Kavousi, Maryam; Witteman, Jacqueline C.; Hofman, Albert; Franco, Oscar H.; Howard, Barbara V.; Zhang, Ying; Best, Lyle; Umans, Jason G.; Onat, Altan; Sundström, Johan; Michael Gaziano, J.; Stampfer, Meir; Ridker, Paul M.; Michael Gaziano, J.; Ridker, Paul M.; Marmot, Michael; Clarke, Robert; Collins, Rory; Fletcher, Astrid; Brunner, Eric; Shipley, Martin; Kivimäki, Mika; Ridker, Paul M.; Buring, Julie; Cook, Nancy; Ford, Ian; Shepherd, James; Cobbe, Stuart M.; Robertson, Michele; Walker, Matthew; Watson, Sarah; Alexander, Myriam; Butterworth, Adam S.; Angelantonio, Emanuele Di; Gao, Pei; Haycock, Philip; Kaptoge, Stephen; Pennells, Lisa; Thompson, Simon G.; Walker, Matthew; Watson, Sarah; White, Ian R.; Wood, Angela M.; Wormser, David; Danesh, John
2014-01-01
Individual participant time-to-event data from multiple prospective epidemiologic studies enable detailed investigation into the predictive ability of risk models. Here we address the challenges in appropriately combining such information across studies. Methods are exemplified by analyses of log C-reactive protein and conventional risk factors for coronary heart disease in the Emerging Risk Factors Collaboration, a collation of individual data from multiple prospective studies with an average follow-up duration of 9.8 years (dates varied). We derive risk prediction models using Cox proportional hazards regression analysis stratified by study and obtain estimates of risk discrimination, Harrell's concordance index, and Royston's discrimination measure within each study; we then combine the estimates across studies using a weighted meta-analysis. Various weighting approaches are compared and lead us to recommend using the number of events in each study. We also discuss the calculation of measures of reclassification for multiple studies. We further show that comparison of differences in predictive ability across subgroups should be based only on within-study information and that combining measures of risk discrimination from case-control studies and prospective studies is problematic. The concordance index and discrimination measure gave qualitatively similar results throughout. While the concordance index was very heterogeneous between studies, principally because of differing age ranges, the increments in the concordance index from adding log C-reactive protein to conventional risk factors were more homogeneous. PMID:24366051
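The recommended event-weighted pooling of per-study discrimination increments amounts to a weighted average. A minimal sketch with invented study values follows; the delta-C values and event counts are purely illustrative, not data from the collaboration.

```python
import numpy as np

# Per-study increments in Harrell's C from adding log CRP (hypothetical),
# weighted by the number of events in each study, as the paper recommends.
delta_c = np.array([0.004, 0.007, 0.002, 0.005])
events = np.array([320, 150, 510, 95])

weights = events / events.sum()
pooled = np.sum(weights * delta_c)

# Weighted between-study spread as a crude heterogeneity check.
het = np.sum(weights * (delta_c - pooled) ** 2)
print(f"pooled increment in C: {pooled:.4f}, weighted variance: {het:.2e}")
```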
The Role of Multimodel Combination in Improving Streamflow Prediction
NASA Astrophysics Data System (ADS)
Arumugam, S.; Li, W.
2008-12-01
Model errors are an inevitable part of any prediction exercise. One approach that is currently gaining attention for reducing model errors is optimally combining multiple models to develop improved predictions. The rationale behind this approach primarily lies in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions offer improved predictability. In this study, we present a new approach to combining multiple hydrological models by evaluating their predictability contingent on the predictor state. We combine two hydrological models, the 'abcd' model and the Variable Infiltration Capacity (VIC) model, with each model's parameters estimated under two different objective functions, to develop multimodel streamflow predictions. The performance of the multimodel predictions is compared with individual model predictions using correlation, root mean square error, and the Nash-Sutcliffe coefficient. To quantify precisely under what conditions the multimodel predictions result in improved predictions, we evaluate the proposed algorithm by testing it against streamflow generated from a known model (the 'abcd' model or the VIC model) with errors being homoscedastic or heteroscedastic. Results from the study show that streamflow simulated from individual models performed better than the multimodel when model error was nearly absent. Under increased model error, the multimodel consistently performed better than the single-model prediction in terms of all performance measures. The study also evaluates the proposed algorithm for streamflow predictions in two humid river basins in North Carolina as well as in two arid basins in Arizona. Through detailed validation at these four sites, the study shows that the multimodel approach better predicts the observed streamflow in comparison to the single-model predictions.
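A simplified, skill-weighted version of multimodel combination looks as follows. Unlike the paper's approach, the weights here are global rather than contingent on the predictor state, and the 'abcd'/VIC outputs are simulated stand-ins.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of a simulated series against observations."""
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 50.0, 120)                  # monthly streamflow (toy data)
sim_abcd = obs + rng.normal(0, 25, 120)          # 'abcd' model output (stand-in)
sim_vic = obs + rng.normal(0, 40, 120)           # VIC model output (stand-in)

# Weight each model by its skill (clipped at zero to avoid negative weights).
skills = np.array([max(nse(sim_abcd, obs), 0), max(nse(sim_vic, obs), 0)])
w = skills / skills.sum()
multimodel = w[0] * sim_abcd + w[1] * sim_vic
print([round(nse(s, obs), 3) for s in (sim_abcd, sim_vic, multimodel)])
```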
Bennetts, Victor Hernandez; Schaffernicht, Erik; Pomareda, Victor; Lilienthal, Achim J; Marco, Santiago; Trincavelli, Marco
2014-09-17
In this paper, we address the task of gas distribution modeling in scenarios where multiple heterogeneous compounds are present. Gas distribution modeling is particularly useful in emission monitoring applications, where spatial representations of the gaseous patches can be used to identify emission hot spots. In realistic environments, the presence of multiple chemicals is expected and therefore gas discrimination has to be incorporated in the modeling process. The approach presented in this work addresses the task of gas distribution modeling by combining different non-selective gas sensors. Gas discrimination is addressed with an open sampling system, composed of an array of metal oxide sensors and a probabilistic algorithm tailored to uncontrolled environments. For each of the identified compounds, the mapping algorithm generates a calibrated gas distribution model using the classification uncertainty and the concentration readings acquired with a photoionization detector. The meta-parameters of the proposed modeling algorithm are automatically learned from the data. The approach was validated with a gas-sensitive robot patrolling outdoor and indoor scenarios where two different chemicals were released simultaneously. The experimental results show that the generated multi-compound maps can be used to accurately predict the location of emitting gas sources.
Parikh, Nidhi; Hayatnagarkar, Harshal G; Beckman, Richard J; Marathe, Madhav V; Swarup, Samarth
2016-11-01
We describe a large-scale simulation of the aftermath of a hypothetical 10kT improvised nuclear detonation at ground level, near the White House in Washington DC. We take a synthetic information approach, where multiple data sets are combined to construct a synthesized representation of the population of the region with accurate demographics, as well as four infrastructures: transportation, healthcare, communication, and power. In this article, we focus on the model of agents and their behavior, which is represented using the options framework. Six different behavioral options are modeled: household reconstitution, evacuation, healthcare-seeking, worry, shelter-seeking, and aiding & assisting others. Agent decision-making takes into account their health status, information about family members, information about the event, and their local environment. We combine these behavioral options into five different behavior models of increasing complexity and do a number of simulations to compare the models. PMID:27909393
ERIC Educational Resources Information Center
Sideridis, Georgios D.; Tsaousis, Ioannis; Al-harbi, Khaleel A.
2015-01-01
The purpose of the present study was to extend the model of measurement invariance by simultaneously estimating invariance across multiple populations in the dichotomous instrument case using multi-group confirmatory factor analytic and multiple indicator multiple causes (MIMIC) methodologies. Using the Arabic version of the General Aptitude Test…
Association analysis of multiple traits by an approach of combining P values.
Chen, Lili; Wang, Yong; Zhou, Yajing
2018-03-01
Increasing evidence shows that one variant can affect multiple traits, which is a widespread phenomenon in complex diseases. Joint analysis of multiple traits can increase the statistical power of association analysis and uncover the underlying genetic mechanism. Although there are many statistical methods to analyse multiple traits, most of these methods are usually suitable for detecting common variants associated with multiple traits. However, because of the low minor allele frequencies of rare variants, these methods are not optimal for rare-variant association analysis. In this paper, we extend an adaptive combination of P values method (termed ADA) for a single trait to test association between multiple traits and rare variants in a given region. For a given region, we use a reverse regression model to test each rare variant for association with multiple traits and obtain the P value of the single-variant test. Further, we take the weighted combination of these P values as the test statistic. Extensive simulation studies show that our approach is more powerful than several other comparison methods in most cases and is robust to the inclusion of a high proportion of neutral variants and to different directions of effects of causal variants.
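A rough sketch of the reverse-regression-plus-combination recipe follows. It uses an overall F test per variant and a truncated, weighted Fisher-style combination as a stand-in for the ADA statistic; the threshold, the MAF-based weights, and the permutation step are assumptions, not the authors' exact specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n, m = 1000, 20
geno = rng.binomial(2, 0.01, size=(n, m))           # rare-variant genotypes
trait1 = rng.normal(size=n) + 0.8 * geno[:, 0]      # variant 0 is causal
trait2 = rng.normal(size=n) + 0.6 * geno[:, 0]
X = sm.add_constant(np.column_stack([trait1, trait2]))

# Reverse regression per variant: genotype ~ traits; keep the overall F-test P.
pvals = np.array([sm.OLS(geno[:, j], X).fit().f_pvalue for j in range(m)])

# Adaptive combination (sketch of the ADA idea): weight variants by rarity
# and combine only the P values below a truncation threshold.
maf = geno.mean(axis=0) / 2
w = 1.0 / np.sqrt(maf * (1 - maf) + 1e-12)
keep = pvals < 0.10
stat = -np.sum(w[keep] * np.log(pvals[keep]))   # weighted Fisher-style statistic
print(stat)   # significance would come from permuting the trait labels
```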
Schmidt, Aaron M; Dolis, Chad M
2009-05-01
The current study developed and tested a model of the interplay among goal difficulty, goal progress, and expectancy over time in influencing resource allocation toward competing demands. The results provided broad support for the theoretical model. As predicted, dual-goal expectancy (the perceived likelihood of meeting both goals in competition) played a central role, moderating the relationship between relative goal progress and resource allocation. Dual-goal difficulty was also found to exert an important influence on multiple-goal self-regulation. Although it did not influence total productivity across both tasks combined, it did combine with other model components to influence the relative emphasis of one task over another. These results suggest that the cumulative demands placed by multiple difficult goals may exceed individuals' perceived capabilities and may lead to partial or total abandonment of one goal to ensure attainment of the other. The model helps shed light on some of the conflicting theoretical propositions and empirical results obtained in prior work. Implications for theory and research regarding multiple-goal self-regulation are discussed. (c) 2009 APA, all rights reserved.
Medendorp, W. P.
2015-01-01
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
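Reliability-weighted integration of two spatial representations is the standard minimum-variance cue-combination rule, with weights w_i proportional to 1/sigma_i^2. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Two remembered target locations after body translation: one updated in an
# eye-centered frame, one in a body-centered frame (values are hypothetical).
estimates = np.array([2.1, 3.0])    # degrees of visual angle
variances = np.array([0.8, 1.5])    # noise of each updated representation

# Optimal (minimum-variance) integration weights each frame by its precision.
w = (1 / variances) / np.sum(1 / variances)
combined = np.sum(w * estimates)
combined_var = 1 / np.sum(1 / variances)
print(w, combined, combined_var)    # combined variance < either single frame
```

The last line makes the abstract's point concrete: the combined variance (about 0.52 here) is below both single-frame variances, which is why parallel updating plus optimal combination beats any single-frame mechanism.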
Owen, Rhiannon K; Cooper, Nicola J; Quinn, Terence J; Lees, Rosalind; Sutton, Alex J
2018-07-01
Network meta-analyses (NMA) have extensively been used to compare the effectiveness of multiple interventions for health care policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the efficiency of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds: MMSE <25/30 and <27/30, and MoCA <22/30 and <26/30. Using Markov chain Monte Carlo (MCMC) methods, we fitted a bivariate network meta-analysis model incorporating constraints on increasing test threshold, and accounting for the correlations between multiple test accuracy measures from the same study. We developed and successfully fitted a model comparing multiple tests/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold <26/30 appeared to have the best true positive rate, whereas MMSE at threshold <25/30 appeared to have the best true negative rate. The combined analysis of multiple tests at multiple thresholds allowed for more rigorous comparisons between competing diagnostics tests for decision making. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.
2013-10-01
Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected among a number of lagged climate indices as candidates to give the "best" model in terms of model performance in cross validation. This study investigates two strategies for further improvement in seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models, using rainfall and sea surface temperature predictions from a global climate model as predictors. This is to take advantage of the direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provide little additional benefit.
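In its simplest Gaussian form, Bayesian model averaging of candidate forecast models reduces to likelihood-based weights. The sketch below is a toy version: three simulated candidate models standing in for models with different lagged climate indices, with per-model error variances estimated crudely rather than via the full BMA machinery.

```python
import numpy as np

rng = np.random.default_rng(6)
obs = rng.normal(100, 20, 30)                       # seasonal streamflows
preds = np.stack([obs + rng.normal(0, s, 30)        # candidate models with
                  for s in (10, 15, 30)])           # different climate indices

# BMA-style weights from each model's Gaussian predictive density of the
# observations, using the per-model error SD as a crude variance estimate.
sigmas = (preds - obs).std(axis=1, keepdims=True)
loglik = -0.5 * np.sum(((preds - obs) / sigmas) ** 2
                       + np.log(2 * np.pi * sigmas ** 2), axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()
forecast = np.tensordot(w, preds, axes=1)           # weighted model average
print(w.round(3), np.sqrt(np.mean((forecast - obs) ** 2)))
```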
Park, Byung-Jung; Lord, Dominique; Wu, Lingtao
2016-10-28
This study aimed to investigate the relative performance of two models (the negative binomial (NB) model and the two-component finite mixture of negative binomial models (FMNB-2)) in terms of developing crash modification factors (CMFs). Crash data on rural multilane divided highways in California and Texas were modeled with the two models, and crash modification functions (CMFunctions) were derived. The resultant CMFunction estimated from the FMNB-2 model showed several good properties over that from the NB model. First, the safety effect of a covariate was better reflected by the CMFunction developed using the FMNB-2 model, since the model takes into account the differential responsiveness of crash frequency to the covariate. Second, the CMFunction derived from the FMNB-2 model is able to capture nonlinear relationships between covariate and safety. Finally, following the same concept as that for NB models, the combined CMFs of multiple treatments were estimated using the FMNB-2 model. The results indicated that they are not simply the product of the individual CMFs (i.e., their safety effects are not independent under FMNB-2 models). Adjustment Factors (AFs) were then developed. It is revealed that the current Highway Safety Manual method could over- or under-estimate the combined CMFs under particular combinations of covariates. Safety analysts are encouraged to consider using the FMNB-2 models for developing CMFs and AFs. Copyright © 2016 Elsevier Ltd. All rights reserved.
Categorical Variables in Multiple Regression: Some Cautions.
ERIC Educational Resources Information Center
O'Grady, Kevin E.; Medoff, Deborah R.
1988-01-01
Limitations of dummy coding and nonsense coding as methods of coding categorical variables for use as predictors in multiple regression analysis are discussed. The combination of these approaches often yields estimates and tests of significance that are not intended by researchers for inclusion in their models. (SLD)
ERIC Educational Resources Information Center
Young, Tamara V.; Shepley, Thomas V.; Song, Mengli
2010-01-01
Drawing on interview data from reading policy actors in California, Michigan, and Texas, this study applied Kingdon's (1984, 1995) multiple streams model to explain how the issue of reading became prominent on the agenda of state governments during the latter half of the 1990s. A combination of factors influenced the status of a state's reading…
Flexible language constructs for large parallel programs
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Schnabel, Robert
1993-01-01
The goal of the research described is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and discussion of some of the critical implementation details is given.
Wang, Shuangquan; Sun, Huiyong; Liu, Hui; Li, Dan; Li, Youyong; Hou, Tingjun
2016-08-01
Blockade of human ether-à-go-go related gene (hERG) channel by compounds may lead to drug-induced QT prolongation, arrhythmia, and Torsades de Pointes (TdP), and therefore reliable prediction of hERG liability in the early stages of drug design is quite important to reduce the risk of cardiotoxicity-related attritions in the later development stages. In this study, pharmacophore modeling and machine learning approaches were combined to construct classification models to distinguish hERG active from inactive compounds based on a diverse data set. First, an optimal ensemble of pharmacophore hypotheses that had good capability to differentiate hERG active from inactive compounds was identified by the recursive partitioning (RP) approach. Then, the naive Bayesian classification (NBC) and support vector machine (SVM) approaches were employed to construct classification models by integrating multiple important pharmacophore hypotheses. The integrated classification models showed improved predictive capability over any single pharmacophore hypothesis, suggesting that the broad binding polyspecificity of hERG can only be well characterized by multiple pharmacophores. The best SVM model achieved the prediction accuracies of 84.7% for the training set and 82.1% for the external test set. Notably, the accuracies for the hERG blockers and nonblockers in the test set reached 83.6% and 78.2%, respectively. Analysis of significant pharmacophores helps to understand the multimechanisms of action of hERG blockers. We believe that the combination of pharmacophore modeling and SVM is a powerful strategy to develop reliable theoretical models for the prediction of potential hERG liability.
Cumulative Risk and Impact Modeling on Environmental Chemical and Social Stressors.
Huang, Hongtai; Wang, Aolin; Morello-Frosch, Rachel; Lam, Juleen; Sirota, Marina; Padula, Amy; Woodruff, Tracey J
2018-03-01
The goal of this review is to identify cumulative modeling methods used to evaluate combined effects of exposures to environmental chemicals and social stressors. The specific review question is: What are the existing quantitative methods used to examine the cumulative impacts of exposures to environmental chemical and social stressors on health? There has been an increase in literature that evaluates combined effects of exposures to environmental chemicals and social stressors on health using regression models; very few studies applied other data mining and machine learning techniques to this problem. The majority of studies we identified used regression models to evaluate combined effects of multiple environmental and social stressors. With proper study design and appropriate modeling assumptions, additional data mining methods may be useful to examine combined effects of environmental and social stressors.
Group-oriented coordination models for distributed client-server computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Hughes, Craig S.
1994-01-01
This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.
Yu, Guozhi; Hozé, Nathanaël; Rolff, Jens
2016-01-01
Antimicrobial peptides (AMPs) and antibiotics reduce the net growth rate of bacterial populations they target. It is relevant to understand if effects of multiple antimicrobials are synergistic or antagonistic, in particular for AMP responses, because naturally occurring responses involve multiple AMPs. There are several competing proposals describing how multiple types of antimicrobials add up when applied in combination, such as Loewe additivity or Bliss independence. These additivity terms are defined ad hoc from abstract principles explaining the supposed interaction between the antimicrobials. Here, we link these ad hoc combination terms to a mathematical model that represents the dynamics of antimicrobial molecules hitting targets on bacterial cells. In this multi-hit model, bacteria are killed when a certain number of targets are hit by antimicrobials. Using this bottom-up approach reveals that Bliss independence should be the model of choice if no interaction between antimicrobial molecules is expected. Loewe additivity, on the other hand, describes scenarios in which antimicrobials affect the same components of the cell, i.e. are not acting independently. While our approach idealizes the dynamics of antimicrobials, it provides a conceptual underpinning of the additivity terms. The choice of the additivity term is essential to determine synergy or antagonism of antimicrobials. This article is part of the themed issue ‘Evolutionary ecology of arthropod antimicrobial peptides’. PMID:27160596
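The two additivity terms can be stated in a few lines of code. For single-drug Hill curves, Bliss independence predicts a combined effect E = E_A + E_B - E_A*E_B, while Loewe additivity solves the dose-equivalence equation d_A/A(E) + d_B/B(E) = 1, where A(E) and B(E) are the single-drug doses producing effect E. The curve parameters below are arbitrary stand-ins.

```python
import numpy as np
from scipy.optimize import brentq

def effect(dose, ec50, n):
    """Fractional kill from a Hill-type dose-response curve."""
    return dose ** n / (ec50 ** n + dose ** n)

def inv(e, ec50, n):
    """Dose producing fractional effect e on the same Hill curve."""
    return ec50 * (e / (1 - e)) ** (1 / n)

dose_a, dose_b = 1.0, 2.0
ea, eb = effect(dose_a, ec50=2.0, n=1.5), effect(dose_b, ec50=4.0, n=1.5)

# Bliss independence: the antimicrobials hit independent targets.
bliss = ea + eb - ea * eb

# Loewe additivity: the combination behaves like a dilution of one drug.
loewe = brentq(lambda e: dose_a / inv(e, 2.0, 1.5)
                         + dose_b / inv(e, 4.0, 1.5) - 1,
               1e-9, 1 - 1e-9)
print(f"Bliss expectation: {bliss:.3f}, Loewe expectation: {loewe:.3f}")
```

Observed kill above the chosen reference indicates synergy, below it antagonism, which is why the choice between the two terms matters.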
Leyrat, Clémence; Seaman, Shaun R; White, Ian R; Douglas, Ian; Smeeth, Liam; Kim, Joseph; Resche-Rigon, Matthieu; Carpenter, James R; Williamson, Elizabeth J
2017-01-01
Inverse probability of treatment weighting is a popular propensity score-based approach to estimate marginal treatment effects in observational studies at risk of confounding bias. A major issue when estimating the propensity score is the presence of partially observed covariates. Multiple imputation is a natural approach to handle missing data on covariates: covariates are imputed and a propensity score analysis is performed in each imputed dataset to estimate the treatment effect. The treatment effect estimates from each imputed dataset are then combined to obtain an overall estimate. We call this method MIte. However, an alternative approach has been proposed, in which the propensity scores are combined across the imputed datasets (MIps). Therefore, there are remaining uncertainties about how to implement multiple imputation for propensity score analysis: (a) should we apply Rubin's rules to the inverse probability of treatment weighting treatment effect estimates or to the propensity score estimates themselves? (b) does the outcome have to be included in the imputation model? (c) how should we estimate the variance of the inverse probability of treatment weighting estimator after multiple imputation? We studied the consistency and balancing properties of the MIte and MIps estimators and performed a simulation study to empirically assess their performance for the analysis of a binary outcome. We also compared the performance of these methods to complete case analysis and the missingness pattern approach, which uses a different propensity score model for each pattern of missingness, and a third multiple imputation approach in which the propensity score parameters are combined rather than the propensity scores themselves (MIpar). Under a missing at random mechanism, complete case and missingness pattern analyses were biased in most cases for estimating the marginal treatment effect, whereas multiple imputation approaches were approximately unbiased as long as the outcome was included in the imputation model. Only MIte was unbiased in all the studied scenarios and Rubin's rules provided good variance estimates for MIte. The propensity score estimated in the MIte approach showed good balancing properties. In conclusion, when using multiple imputation in the inverse probability of treatment weighting context, MIte with the outcome included in the imputation model is the preferred approach.
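The MIte recipe (impute, estimate a propensity score and IPTW effect in each imputed dataset, then pool) can be sketched as follows. The imputation step here is a crude hot-deck stand-in (a real analysis would use chained equations including the outcome, as the paper advises), and only the point-estimate part of Rubin's rules is shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, m = 2000, 20                      # subjects, number of imputations
x = rng.normal(size=n)               # confounder, partially observed
t = rng.binomial(1, 1 / (1 + np.exp(-x)))               # treatment depends on x
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * t + x))))   # binary outcome
miss = rng.random(n) < 0.3           # 30% of x missing

ests = []
for _ in range(m):
    xi = x.copy()
    # Crude stochastic hot-deck imputation (placeholder for proper MI).
    donor = rng.choice(np.where(~miss)[0], miss.sum())
    xi[miss] = x[donor] + rng.normal(0, 0.2, miss.sum())
    ps = LogisticRegression().fit(xi[:, None], t).predict_proba(xi[:, None])[:, 1]
    w = np.where(t == 1, 1 / ps, 1 / (1 - ps))           # IPT weights
    ests.append(np.average(y[t == 1], weights=w[t == 1])
                - np.average(y[t == 0], weights=w[t == 0]))

ests = np.array(ests)
pooled = ests.mean()                 # Rubin's rules: pooled point estimate
B = ests.var(ddof=1)                 # between-imputation variance; the total
                                     # variance is W + (1 + 1/m) * B, with W the
                                     # mean within-imputation variance (omitted)
print(f"MIte estimate of the marginal risk difference: {pooled:.3f}")
```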
Advanced Multiple Processor Configuration Study. Final Report.
ERIC Educational Resources Information Center
Clymer, S. J.
This summary of a study on multiple processor configurations includes the objectives, background, approach, and results of research undertaken to provide the Air Force with a generalized model of computer processor combinations for use in the evaluation of proposed flight training simulator computational designs. An analysis of a real-time flight…
Application of fuzzy set and Dempster-Shafer theory to organic geochemistry interpretation
NASA Technical Reports Server (NTRS)
Kim, C. S.; Isaksen, G. H.
1993-01-01
An application of fuzzy sets and Dempster-Shafer Theory (DST) in modeling the interpretational process of organic geochemistry data for predicting the maturity levels of oil and source rock samples is presented. This was accomplished by (1) representing linguistic imprecision and imprecision associated with experience by fuzzy set theory, (2) capturing the probabilistic nature of imperfect evidence by DST, and (3) combining multiple pieces of evidence by utilizing John Yen's generalized Dempster-Shafer Theory (GDST), which allows DST to deal with fuzzy information. The current prototype provides collective beliefs on the predicted levels of maturity by combining multiple pieces of evidence through GDST's rule of combination.
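Dempster's rule of combination, the crisp evidence-pooling step underlying GDST, is short enough to state directly. The maturity hypotheses and mass assignments below are hypothetical, and the fuzzy-set generalization is not included.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over frozensets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb             # mass assigned to the empty set
    # Renormalize by the non-conflicting mass.
    return {k: v / (1 - conflict) for k, v in combined.items()}

# Maturity levels as the frame of discernment (hypothetical evidence masses).
IMM, MAT = frozenset({"immature"}), frozenset({"mature"})
BOTH = IMM | MAT                            # ignorance: either level possible
evidence_biomarker = {MAT: 0.6, BOTH: 0.4}
evidence_pyrolysis = {MAT: 0.5, IMM: 0.2, BOTH: 0.3}
print(dempster_combine(evidence_biomarker, evidence_pyrolysis))
```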
Generation of animation sequences of three dimensional models
NASA Technical Reports Server (NTRS)
Poi, Sharon (Inventor); Bell, Brad N. (Inventor)
1990-01-01
The invention is directed toward a method and apparatus for generating an animated sequence through the movement of three-dimensional graphical models. A plurality of pre-defined graphical models are stored and manipulated in response to interactive commands or by means of a pre-defined command file. The models may be combined as part of a hierarchical structure to represent physical systems without need to create a separate model which represents the combined system. System motion is simulated through the introduction of translation, rotation and scaling parameters upon a model within the system. The motion is then transmitted down through the system hierarchy of models in accordance with hierarchical definitions and joint movement limitations. The present invention also calls for a method of editing hierarchical structure in response to interactive commands or a command file such that a model may be included, deleted, copied or moved within multiple system model hierarchies. The present invention also calls for the definition of multiple viewpoints or cameras which may exist as part of a system hierarchy or as an independent camera. The simulated movement of the models and systems is graphically displayed on a monitor and a frame is recorded by means of a video controller. Multiple movement and hierarchy manipulations are then recorded as a sequence of frames which may be played back as an animation sequence on a video cassette recorder.
Spotlight on elotuzumab in the treatment of multiple myeloma: the evidence to date
Weisel, Katja
2016-01-01
Despite advances in the treatment of multiple myeloma, it remains an incurable disease, with relapses and resistances frequently observed. Recently, immunotherapies, in particular, monoclonal antibodies, have become important treatment options in anticancer therapies. Elotuzumab is a humanized monoclonal antibody to signaling lymphocytic activation molecule F7, which is highly expressed on myeloma cells and, to a lower extent, on selected leukocyte subsets such as natural killer cells. By directly activating natural killer cells and by antibody-dependent cell-mediated cytotoxicity, elotuzumab exhibits a dual mechanism of action leading to myeloma cell death with minimal effects on normal tissue. In several nonclinical models of multiple myeloma, elotuzumab was effective as a single agent and in combination with standard myeloma treatments, supporting the use of elotuzumab in patients. In combination with lenalidomide and dexamethasone, elotuzumab showed a significant increase in tumor response rates and progression-free survival in patients with relapsed and/or refractory multiple myeloma. This review summarizes the nonclinical and clinical development of elotuzumab as a single agent and in combination with established therapies for the treatment of multiple myeloma. PMID:27785050
Wang, Lv; Lu, Fang-Lin; Wang, Chong; Tan, Meng-Wei; Xu, Zhi-yun
2014-12-01
The Society of Thoracic Surgeons 2008 cardiac surgery risk models have been developed for heart valve surgery with and without coronary artery bypass grafting. The aim of our study was to evaluate the performance of the Society of Thoracic Surgeons 2008 cardiac risk models in Chinese patients undergoing single valve surgery and the predicted mortality rates of those undergoing multiple valve surgery derived from the Society of Thoracic Surgeons 2008 risk models. A total of 12,170 patients underwent heart valve surgery from January 2008 to December 2011. Combined congenital heart surgery and aortic surgery cases were excluded, as were a relatively small number of valve surgery combinations. The final research population included the following isolated heart valve surgery types: aortic valve replacement, mitral valve replacement, and mitral valve repair. The following combined valve surgery types were included: mitral valve replacement plus tricuspid valve repair, mitral valve replacement plus aortic valve replacement, and mitral valve replacement plus aortic valve replacement and tricuspid valve repair. Evaluation was performed using the Hosmer-Lemeshow test and C-statistics. Data from 9846 patients were analyzed. The Society of Thoracic Surgeons 2008 cardiac risk models showed reasonable discrimination and poor calibration (C-statistic, 0.712; P = .00006 in the Hosmer-Lemeshow test). The Society of Thoracic Surgeons 2008 models had better discrimination (C-statistic, 0.734) and calibration (P = .5805) in patients undergoing isolated valve surgery than in patients undergoing multiple valve surgery (C-statistic, 0.694; P = .00002 in the Hosmer-Lemeshow test). Estimates derived from the Society of Thoracic Surgeons 2008 models underestimated the observed mortality of multiple valve surgery (observed/expected ratios of 1.44 for multiple valve surgery and 1.17 for single valve surgery). The Society of Thoracic Surgeons 2008 cardiac surgery risk models performed well when predicting mortality for Chinese patients undergoing valve surgery. The Society of Thoracic Surgeons 2008 models were suitable for single valve surgery in a Chinese population; estimates of mortality for multiple valve surgery derived from the Society of Thoracic Surgeons 2008 models were less accurate. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
On temporal stochastic modeling of precipitation, nesting models across scales
NASA Astrophysics Data System (ADS)
Paschalis, Athanasios; Molnar, Peter; Fatichi, Simone; Burlando, Paolo
2014-01-01
We analyze the performance of composite stochastic models of temporal precipitation which can satisfactorily reproduce precipitation properties across a wide range of temporal scales. The rationale is that a combination of stochastic precipitation models, each most appropriate for a specific limited temporal scale, leads to better overall performance across a wider range of scales than single models alone. We investigate different model combinations. For the coarse (daily) scale, these are models based on alternating renewal processes, Markov chains, and Poisson cluster models, which are then combined with a microcanonical multiplicative random cascade model to disaggregate precipitation to finer (minute) scales. The composite models were tested on data at four sites in different climates. The results show that model combinations improve the performance in key statistics, such as probability distributions of precipitation depth, autocorrelation structure, intermittency, and reproduction of extremes, compared with single models, while remaining reasonably parsimonious. No model combination was found to outperform the others at all sites and for all statistics; however, we provide insight into the capabilities of specific model combinations. The results for the four different climates are similar, which suggests a degree of generality and wider applicability of the approach.
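The disaggregation stage, a microcanonical multiplicative random cascade, conserves mass exactly at every level by construction. A minimal sketch follows; the Beta splitting weights are a toy choice, and calibrated cascades would also model zero-rain intermittency.

```python
import numpy as np

def microcanonical_cascade(total, levels, rng):
    """Disaggregate a precipitation total by repeated binary splitting.

    At each level, every interval's mass is split between its two halves by a
    random weight, so mass is conserved exactly (the microcanonical property).
    """
    amounts = np.array([total])
    for _ in range(levels):
        w = rng.beta(0.8, 0.8, size=amounts.size)   # splitting weights (toy choice)
        amounts = np.column_stack([w * amounts, (1 - w) * amounts]).ravel()
    return amounts

rng = np.random.default_rng(8)
daily_total = 24.0  # mm
fine = microcanonical_cascade(daily_total, levels=7, rng=rng)  # 128 intervals
print(fine.size, fine.sum())   # the sum equals the daily total by construction
```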
Miller, Martin L; Molinelli, Evan J; Nair, Jayasree S; Sheikh, Tahir; Samy, Rita; Jing, Xiaohong; He, Qin; Korkut, Anil; Crago, Aimee M; Singer, Samuel; Schwartz, Gary K; Sander, Chris
2013-09-24
Dedifferentiated liposarcoma (DDLS) is a rare but aggressive cancer with high recurrence and low response rates to targeted therapies. Increasing treatment efficacy may require combinations of targeted agents that counteract the effects of multiple abnormalities. To identify a possible multicomponent therapy, we performed a combinatorial drug screen in a DDLS-derived cell line and identified cyclin-dependent kinase 4 (CDK4) and insulin-like growth factor 1 receptor (IGF1R) as synergistic drug targets. We measured the phosphorylation of multiple proteins and cell viability in response to systematic drug combinations and derived computational models of the signaling network. These models predict that the observed synergy in reducing cell viability with CDK4 and IGF1R inhibitors depends on the activity of the AKT pathway. Experiments confirmed that combined inhibition of CDK4 and IGF1R cooperatively suppresses the activation of proteins within the AKT pathway. Consistent with these findings, synergistic reductions in cell viability were also found when combining CDK4 inhibition with inhibition of either AKT or epidermal growth factor receptor (EGFR), another receptor similar to IGF1R that activates AKT. Thus, network models derived from context-specific proteomic measurements of systematically perturbed cancer cells may reveal cancer-specific signaling mechanisms and aid in the design of effective combination therapies. PMID:24065146
NASA Astrophysics Data System (ADS)
Ren, Y.
2017-12-01
Context: The spatio-temporal distribution patterns of land surface temperature (LST) in urban forests are influenced by many ecological factors; identifying the interactions between these factors can improve simulations and predictions of the spatial patterns of urban cold islands. Such quantitative research requires an integrated method that combines multi-source data with spatial statistical analysis. Objectives: The purpose of this study was to clarify how interactions between anthropogenic activity and multiple ecological factors influence urban forest LST, using hot- and cold-spot cluster analysis and the Geodetector model. We introduced the hypothesis that anthropogenic activity interacts with certain ecological factors and that their combination influences urban forest LST. We also assumed that the spatio-temporal distribution of urban forest LST should be similar to that of the ecological factors and can be represented quantitatively. Methods: We used Jinjiang, a representative city in China, as a case study. Population density was employed to represent anthropogenic activity. We built a multi-source dataset (forest inventory, digital elevation models (DEM), population, and remote sensing imagery) on a unified urban scale to support the analysis of the factors influencing urban forest LST. Through a combination of spatial statistical analysis, multi-source spatial data, and the Geodetector model, the interaction mechanisms of urban forest LST were revealed. Results: Although different ecological factors influence forest LST differently, in two periods with different hot and cold spots, patch area and dominant tree species were the main factors contributing to LST clustering in urban forests. The interaction between anthropogenic activity and multiple ecological factors increased LST in urban forest stands, both linearly and nonlinearly. Strong interactions between elevation and dominant species were generally observed and were prevalent in both hot- and cold-spot areas in different years. Conclusions: A combination of spatial statistics and Geodetector models is effective for quantitatively evaluating the interactive relationships among ecological factors, anthropogenic activity, and LST.
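The Geodetector model quantifies a factor's explanatory power with the q statistic, q = 1 - SSW/SST, the share of LST variance explained by a stratification of the study area; interactions are assessed by stratifying on two factors jointly and comparing q values. A minimal sketch with simulated stand-in data:

```python
import numpy as np
import pandas as pd

def q_statistic(lst, strata):
    """Geodetector-style q: share of LST variance explained by a stratification."""
    df = pd.DataFrame({"lst": lst, "h": strata})
    n, var = len(df), df["lst"].var(ddof=0)
    ssw = sum(len(g) * g["lst"].var(ddof=0) for _, g in df.groupby("h"))
    return 1 - ssw / (n * var)

rng = np.random.default_rng(9)
species = rng.integers(0, 3, 500)                    # dominant-species classes
elevation = rng.integers(0, 2, 500)                  # elevation classes
lst = 30 - 2.0 * species - 0.5 * elevation + rng.normal(0, 1, 500)

print(round(q_statistic(lst, species), 3))           # single-factor power
# Interaction: stratify on both factors jointly (combined class labels).
print(round(q_statistic(lst, species * 10 + elevation), 3))
```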
Kim, Ki Hwan; Park, Sung-Hong
2017-04-01
The balanced steady-state free precession (bSSFP) MR sequence is frequently used in clinics, but is sensitive to off-resonance effects, which can cause banding artifacts. Often multiple bSSFP datasets are acquired at different phase cycling (PC) angles and then combined in a special way for banding artifact suppression. Many strategies for combining the datasets have been suggested, but there are still limitations in their performance, especially when the number of phase-cycled bSSFP datasets is small. The purpose of this study is to develop a learning-based model to combine multiple phase-cycled bSSFP datasets for better banding artifact suppression. The multilayer perceptron (MLP) is a feedforward artificial neural network consisting of three layers: an input, a hidden, and an output layer. MLP models were trained on input bSSFP datasets acquired from human brain and knee at 3T, separately for two and four PC angles. Banding-free bSSFP images were generated by maximum-intensity projection (MIP) of 8 or 12 phase-cycled datasets and were used as targets for training the output layer. The trained MLP models were applied to other brain and knee datasets acquired with different scan parameters and also to multiple phase-cycled bSSFP functional MRI datasets acquired on rat brain at 9.4T, in comparison with the conventional MIP method. Simulations were also performed to validate the MLP approach. Both the simulations and human experiments demonstrated that MLP suppressed banding artifacts significantly, outperforming MIP in both banding artifact suppression and SNR efficiency. MLP also demonstrated superior performance over MIP for the 9.4T fMRI data, which were not used for training the models, while visually preserving the fMRI maps very well. Artificial neural networks are a promising technique for combining multiple phase-cycled bSSFP datasets for banding artifact suppression. Copyright © 2016 Elsevier Inc. All rights reserved.
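A minimal version of the learning idea, mapping a few phase-cycled voxel intensities to a banding-free target, can be sketched with scikit-learn's MLPRegressor. The banding model and the MIP-derived target below are toy stand-ins, not the paper's acquisition or training setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
n_vox = 5000
true = rng.uniform(0.5, 1.0, n_vox)                  # banding-free intensity
phase = rng.uniform(0, 2 * np.pi, n_vox)             # voxelwise off-resonance

# Two phase-cycled bSSFP acquisitions with a crude banding modulation.
pc = np.stack([true * np.abs(np.cos((phase + d) / 2))
               for d in (0.0, np.pi)], axis=1)
target = pc.max(axis=1) * 1.1                        # stand-in for the MIP-of-8 target

# Small MLP mapping the two phase-cycled intensities to the target intensity.
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(pc[:4000], target[:4000])                    # train on part of the data
print(np.corrcoef(mlp.predict(pc[4000:]), target[4000:])[0, 1])
```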
Ning, Shaoyang; Xu, Hongquan; Al-Shyoukh, Ibrahim; Feng, Jiaying; Sun, Ren
2014-10-30
Combination chemotherapy with multiple drugs has been widely applied to cancer treatment owing to enhanced efficacy and reduced drug resistance. For drug combination experiment analysis, response surface modeling has been commonly adopted. In this paper, we introduce a Hill-based global response surface model and provide an application of the model to a 512-run drug combination experiment with three chemicals, namely AG490, U0126, and indirubin-3′-monoxime (I-3-M), on lung cancer cells. The results demonstrate generally improved goodness of fit of our model over the traditional polynomial model, as well as over the original Hill model based on fixed-ratio drug combinations. We identify different dose-effect patterns between normal and cancer cells on the basis of our model, which indicates the potential effectiveness of the drug combination in cancer treatment. Meanwhile, drug interactions are analyzed both qualitatively and quantitatively. The distinct interaction patterns between U0126 and I-3-M on the two types of cells uncovered by the model could be a further indicator of the efficacy of the drug combination. Copyright © 2014 John Wiley & Sons, Ltd.
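For context, a minimal sketch of the ingredients such response-surface models build on: a classical Hill dose-response curve and a Loewe-additive two-drug reference surface derived from it. The paper's global Hill-based model is more elaborate; the parameter values below (EC50s, Hill slope) are arbitrary assumptions.

```python
def hill(d, emax=1.0, ec50=1.0, n=2.0):
    """Fractional effect of a single drug at dose d (classical Hill curve)."""
    return emax * d**n / (ec50**n + d**n)

def loewe_effect(d1, d2, ec50_1=1.0, ec50_2=2.0, n=2.0, tol=1e-9):
    """Effect E solving d1/D1(E) + d2/D2(E) = 1 (Loewe additivity), by bisection."""
    lo, hi = 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        e = 0.5 * (lo + hi)
        D1 = ec50_1 * (e / (1.0 - e)) ** (1.0 / n)   # inverse Hill for drug 1
        D2 = ec50_2 * (e / (1.0 - e)) ** (1.0 / n)   # inverse Hill for drug 2
        if d1 / D1 + d2 / D2 > 1.0:
            lo = e     # interaction index too large -> additive effect lies higher
        else:
            hi = e
    return 0.5 * (lo + hi)

print(hill(1.0))                 # 0.5 at the EC50
print(loewe_effect(0.5, 1.0))    # additive reference effect for this dose pair
```

Observed effects above or below this additive reference are what interaction analyses classify as synergy or antagonism.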
Human Activity Recognition by Combining a Small Number of Classifiers.
Nazabal, Alfredo; Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Ghahramani, Zoubin
2016-09-01
We consider the problem of daily human activity recognition (HAR) using multiple wireless inertial sensors, and specifically, HAR systems with a very low number of sensors, each one providing an estimation of the performed activities. We propose new Bayesian models to combine the output of the sensors. The models are based on a soft-output combination of individual classifiers to deal with the small number of sensors. We also incorporate the dynamic nature of human activities as a first-order homogeneous Markov chain. We develop both inductive and transductive inference methods for each model, to be employed in supervised and semisupervised situations, respectively. Using different real HAR databases, we compare our classifier-combination models against a single classifier that employs all the signals from the sensors. Our models consistently exhibit a reduction of the error rate and an increase in robustness against sensor failures. Our models also outperform other classifier-combination models that do not consider soft outputs and a Markovian structure of the human activities.
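A minimal sketch of soft-output fusion in this spirit, assuming each sensor's classifier emits class probabilities and that sensors are conditionally independent (so the fused posterior is their normalized product). The paper's Bayesian models and Markov-chain dynamics go well beyond this; the probability values are made up.

```python
import numpy as np

def fuse_soft_outputs(probs):
    """probs: (n_sensors, n_classes) per-sensor class probabilities."""
    log_post = np.sum(np.log(probs + 1e-12), axis=0)   # product rule in log space
    post = np.exp(log_post - log_post.max())           # stabilize before normalizing
    return post / post.sum()

sensor_probs = np.array([[0.6, 0.3, 0.1],
                         [0.5, 0.4, 0.1],
                         [0.2, 0.7, 0.1]])
post = fuse_soft_outputs(sensor_probs)
print(post, "-> activity", post.argmax())
```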
High power diode laser Master Oscillator-Power Amplifier (MOPA)
NASA Technical Reports Server (NTRS)
Andrews, John R.; Mouroulis, P.; Wicks, G.
1994-01-01
High power multiple quantum well AlGaAs diode laser master oscillator-power amplifier (MOPA) systems were examined both experimentally and theoretically. For two-pass operation, it was found that powers in excess of 0.3 W per 100 micrometers of facet length were achievable while maintaining diffraction-limited beam quality. Internal electrical-to-optical conversion efficiencies as high as 25 percent were observed at an internal amplifier gain of 9 dB. Theoretical modeling of multiple quantum well amplifiers was done using appropriate rate equations and a heuristic model of the carrier-density-dependent gain. The model gave qualitative agreement with the experimental results. In addition, the model allowed exploration of a wider design space for the amplifiers. The model predicted that internal electrical-to-optical conversion efficiencies in excess of 50 percent should be achievable with careful system design. The model predicted that no global optimum design exists, but gain, efficiency, and optical confinement (coupling efficiency) can be mutually adjusted to meet a specific system requirement. A three quantum well, low optical confinement amplifier was fabricated using molecular beam epitaxial growth. Coherent beam combining of two high power amplifiers injected from a common master oscillator was also examined. Coherent beam combining with an efficiency of 93 percent resulted in a single beam having diffraction-limited characteristics. This beam combining efficiency is a world record result for such a system. Interferometric observations of the output of the amplifier indicated that spatial mode matching was a significant factor in the less than perfect beam combining. Finally, the system issues of arrays of amplifiers in a coherent beam combining system were investigated. Based upon experimentally observed parameters, coherent beam combining could result in a megawatt-scale coherent beam with a 10 percent electrical-to-optical conversion efficiency.
Throughput and latency programmable optical transceiver by using DSP and FEC control.
Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki
2017-05-15
We propose and experimentally demonstrate a proof-of-concept of a programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) to satisfy throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The many degrees of freedom among the parameters often make it difficult to find the optimum combination, owing to an explosion in the number of combinations. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using constraints on the parameters combined with a precise analytical model. For precise BER prediction with a specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation format, symbol rate, and the power difference between sub-channels. Next, we formulate simple constraints on the parameters and combine them with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for individual-tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. Ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by the Akaike information criterion, the Bayesian information criterion, and the -2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function, and the constant-plus-power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). The LME model was then compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual-tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
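A hedged sketch (not the authors' code) of fitting a plot-level linear mixed-effects crown-width model next to its OLS baseline, using statsmodels. The predictors dbh and height stand in for whatever covariates the paper's multiple linear model used, and the data below are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_plots, n_trees = 10, 30
plot = np.repeat(np.arange(n_plots), n_trees)
dbh = rng.uniform(8, 40, n_plots * n_trees)
height = 1.3 + 0.6 * dbh + rng.normal(0, 2, dbh.size)
plot_effect = rng.normal(0, 0.4, n_plots)[plot]        # random plot intercepts
crown = 0.5 + 0.09 * dbh + 0.01 * height + plot_effect + rng.normal(0, 0.3, dbh.size)

df = pd.DataFrame(dict(crown=crown, dbh=dbh, height=height, plot=plot))
ols = smf.ols("crown ~ dbh + height", df).fit()        # baseline OLS model
lme = smf.mixedlm("crown ~ dbh + height", df, groups=df["plot"]).fit()
print(ols.params, lme.params, sep="\n")
```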
Combining Multiple Knowledge Sources for Speech Recognition
1988-09-15
[Abstract OCR-damaged and largely unrecoverable; legible fragments mention adaptation sentences (10 rapid adaptation sentences and 15 spell-mode phrases), a speaker-dependent database of resource management sentences, and the BYBLOS system, which combines smoothed phoneme models with detailed context models and was tested on a standard database.]
Advances in modeling soil erosion after disturbance on rangelands
USDA-ARS's Scientific Manuscript database
Research has been undertaken to develop process based models that predict soil erosion rate after disturbance on rangelands. In these models soil detachment is predicted as a combination of multiple erosion processes, rain splash and thin sheet flow (splash and sheet) detachment and concentrated flo...
Kleber, Christian; Becker, Christopher A; Malysch, Tom; Reinhold, Jens M; Tsitsilonis, Serafeim; Duda, Georg N; Schmidt-Bleek, Katharina; Schaser, Klaus D
2015-07-01
Hemorrhagic shock (HS) interacts with the posttraumatic immune response and fracture healing in multiple trauma. Owing to the lack of a long-term survival multiple trauma animal model, no standardized analysis of fracture healing addressing the impact of multiple trauma on fracture healing had been performed. We propose a new long-term survival (21 days) murine multiple trauma model combining HS (microsurgical cannulation of the carotid artery, withdrawal of blood, and continuous blood pressure measurement), femoral fracture (osteotomy/external fixation), and tibial fracture (3-point bending technique/antegrade nail). The posttraumatic immune response was measured via IL-6 and sIL-6R ELISA. The HS was investigated via macrohemodynamics, blood gas analysis, wet-dry lung ratio, and histologic analysis of the shock organs. The new murine long-term survival (21 days) multiple trauma model mimics clinically relevant injury patterns and the previously published human posttraumatic immune response. Based on blood gas analysis and histologic analysis of shock organs, we characterized and standardized our murine multiple trauma model. Furthermore, we revealed hemorrhagic shock as a causative factor that triggers sIL-6R formation, underscoring the fundamental pathophysiologic role of the trans-signaling mechanism in multiple trauma. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
Optimal frame-by-frame result combination strategy for OCR in video stream
NASA Astrophysics Data System (ADS)
Bulatov, Konstantin; Lynchenko, Aleksander; Krivtsov, Valeriy
2018-04-01
This paper addresses the problem of combining classification results from multiple observations of one object. This task can be regarded as a particular case of decision-making using a combination of experts' votes with calculated weights. The accuracy of various methods of combining the classification results, depending on different models of input data, is investigated using the example of frame-by-frame character recognition in a video stream. It is shown experimentally that the strategy of choosing a single most competent expert has an advantage when the input data contain no irrelevant observations (here, irrelevant means affected by character localization and segmentation errors). At the same time, this work demonstrates the advantage of combining several of the most competent experts according to the multiplication rule or voting when irrelevant samples are present in the input data.
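An illustrative sketch, under assumed scores and competence weights, of the two strategies contrasted above: per-frame OCR class scores are either multiplied across frames (weighted geometric combination) or replaced by the output of the single most competent frame.

```python
import numpy as np

frame_scores = np.array([[0.7, 0.2, 0.1],    # frame 1: P(char = 'A','B','C')
                         [0.6, 0.3, 0.1],    # frame 2
                         [0.1, 0.1, 0.8]])   # an irrelevant/missegmented frame
weights = np.array([0.9, 0.8, 0.2])          # estimated per-frame competence

# multiplication rule: competence-weighted product of scores over frames
log_comb = weights @ np.log(frame_scores + 1e-12)
print("multiplication rule ->", log_comb.argmax())

# single most competent expert: trust only the best frame
print("best expert        ->", frame_scores[weights.argmax()].argmax())
```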
Drag reduction of a car model by linear genetic programming control
NASA Astrophysics Data System (ADS)
Li, Ruiying; Noack, Bernd R.; Cordier, Laurent; Borée, Jacques; Harambat, Fabien
2017-08-01
We investigate open- and closed-loop active control for aerodynamic drag reduction of a car model. Turbulent flow around a blunt-edged Ahmed body is examined at Re_H ≈ 3×10^5 based on body height. The actuation is performed with pulsed jets at all trailing edges (multiple inputs) combined with a Coanda deflection surface. The flow is monitored with 16 pressure sensors distributed at the rear side (multiple outputs). We apply a recently developed model-free control strategy building on genetic programming in Dracopoulos and Kent (Neural Comput Appl 6:214-228, 1997) and Gautier et al. (J Fluid Mech 770:424-441, 2015). The optimized control laws comprise periodic forcing, multi-frequency forcing, and sensor-based feedback, including time-history information feedback and combinations thereof. A key enabler is linear genetic programming (LGP) as a powerful regression technique for optimizing the multiple-input multiple-output control laws. The proposed LGP control can select the best open- or closed-loop control in an unsupervised manner. Approximately 33% base pressure recovery, associated with 22% drag reduction, is achieved in all considered classes of control laws. Intriguingly, the feedback actuation emulates periodic high-frequency forcing. In addition, the control automatically identified the only sensor that listens to high-frequency flow components with a good signal-to-noise ratio. Our control strategy is, in principle, applicable to all experiments with multiple actuators and sensors.
Multiplicative Measurements of a Trait Anxiety Scale as Predictors of Burnout
ERIC Educational Resources Information Center
Cremades, J. Gualberto; Wated, Guillermo; Wiggins, Matthew S.
2011-01-01
The purpose of the present study was to investigate whether combining the two dimensions of anxiety (i.e., intensity and direction) by using a multiplicative model would strengthen the prediction of burnout. Collegiate athletes (N = 157) completed the Athlete Burnout Questionnaire as well as a trait version of the Competitive State Anxiety…
Considerations for Creating Multi-Language Personality Norms: A Three-Component Model of Error
ERIC Educational Resources Information Center
Meyer, Kevin D.; Foster, Jeff L.
2008-01-01
With the increasing globalization of human resources practices, a commensurate increase in demand has occurred for multi-language ("global") personality norms for use in selection and development efforts. The combination of data from multiple translations of a personality assessment into a single norm engenders error from multiple sources. This…
ERIC Educational Resources Information Center
Tynan, Joshua J.; Somers, Cheryl L.; Gleason, Jamie H.; Markman, Barry S.; Yoon, Jina
2015-01-01
With Bronfenbrenner's (1977) ecological theory and other multifactor models (e.g. Pianta, 1999; Prinstein, Boergers, & Spirito, 2001) underlying this study design, the purpose was to examine, simultaneously, key variables in multiple life contexts (microsystem, mesosystem, exosystem levels) for their individual and combined roles in predicting…
Integrated presentation of ecological risk from multiple stressors
NASA Astrophysics Data System (ADS)
Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman
2016-10-01
Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic.
Interactive Visualization of DGA Data Based on Multiple Views
NASA Astrophysics Data System (ADS)
Geng, Yujie; Lin, Ying; Ma, Yan; Guo, Zhihong; Gu, Chao; Wang, Mingtao
2017-01-01
The commissioning and operation of dissolved gas analysis (DGA) online monitoring make up for the weaknesses of the traditional DGA method. However, the volume and high dimensionality of DGA data pose a huge challenge for monitoring and analysis. In this paper, we present a novel interactive visualization model for DGA data based on multiple views. The model imitates multi-angle analysis by combining parallel coordinates, a scatter plot matrix, and a data table. By offering brushing, collaborative filtering, and focus+context techniques, the model provides a convenient and flexible interactive way to analyze and understand DGA data.
Zhong-xiang, Feng; Shi-sheng, Lu; Wei-hua, Zhang; Nan-nan, Zhang
2014-01-01
To build a combined model that captures the variation in death toll data for road traffic accidents, reflects the influence of multiple factors on traffic accidents, and improves prediction accuracy, the Verhulst model was built based on the number of road traffic accident deaths in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a weighted combination prediction model, with the Shapley value method applied to calculate the weight coefficients by assessing each model's contribution. Finally, the combined model was used to recalculate the death tolls from 2002 to 2011 and was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data but also quantify the degree of influence of each factor on the death toll, and it achieved high accuracy as well as strong practicability. PMID:25610454
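A small sketch (not the paper's exact derivation) of Shapley-value weighting for a two-model combined forecast: each model's weight reflects its average marginal contribution to reducing absolute forecast error over all orderings. The data, the equal-average coalition forecast, and the no-model error baseline are all toy assumptions.

```python
import numpy as np
from itertools import permutations
from math import factorial

actual = np.array([104.0, 109.0, 107.0, 112.0, 116.0])
pred = {
    "verhulst": np.array([101.0, 108.0, 109.0, 110.0, 118.0]),
    "regression": np.array([106.0, 107.0, 106.0, 114.0, 115.0]),
}

def coalition_error(models):
    """Total absolute error of the equally-averaged forecast of a coalition."""
    if not models:
        return float(np.abs(actual).sum())   # arbitrary no-model baseline
    avg = np.mean([pred[m] for m in models], axis=0)
    return float(np.abs(actual - avg).sum())

names = list(pred)
shapley = dict.fromkeys(names, 0.0)
for order in permutations(names):
    so_far = []
    for m in order:                          # marginal error reduction of m
        shapley[m] += coalition_error(so_far) - coalition_error(so_far + [m])
        so_far.append(m)
shapley = {m: v / factorial(len(names)) for m, v in shapley.items()}

weights = {m: v / sum(shapley.values()) for m, v in shapley.items()}
combined = sum(weights[m] * pred[m] for m in names)
print(weights)
print(combined)
```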
The Performance of IRT Model Selection Methods with Mixed-Format Tests
ERIC Educational Resources Information Center
Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.
2012-01-01
When tests consist of multiple-choice and constructed-response items, researchers are confronted with the question of which item response theory (IRT) model combination will appropriately represent the data collected from these mixed-format tests. This simulation study examined the performance of six model selection criteria, including the…
Spacecraft Multiple Array Communication System Performance Analysis
NASA Technical Reports Server (NTRS)
Hwu, Shian U.; Desilva, Kanishka; Sham, Catherine C.
2010-01-01
The Communication Systems Simulation Laboratory (CSSL) at the NASA Johnson Space Center is tasked to perform spacecraft and ground network communication system simulations, design validation, and performance verification. The CSSL has developed simulation tools that model spacecraft communication systems and the space and ground environment in which the tools operate. In this paper, a spacecraft communication system with multiple arrays is simulated. A multiple-array combining technique is used to increase the radio frequency coverage and data rate performance. The technique achieves phase coherence among the phased arrays so that the signals combine constructively at the target receiver. There are many technical challenges in integrating a high-transmit-power communication system on a spacecraft. The array combining technique can improve the communication system's data rate and coverage performance without increasing the system's transmit power requirements. Example simulation results indicate that significant performance improvement can be achieved with phase-coherence implementation.
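A back-of-the-envelope sketch of why phase-coherent combining helps: N phase-aligned array signals add in amplitude (power gain N²) while independent noise adds in power (gain N), so the SNR improves by a factor of roughly N. The parameters are generic, not the paper's spacecraft link budget.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 4, 100_000
noise = rng.normal(0.0, 1.0, (N, n))

single = 1.0 + noise[0]                 # one element: unit signal plus unit noise
combined = 1.0 + noise.mean(axis=0)     # phase-aligned sum of N elements, normalized

print("single-element SNR:", single.mean() ** 2 / single.var())     # about 1
print("combined SNR      :", combined.mean() ** 2 / combined.var()) # about N
```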
NASA Astrophysics Data System (ADS)
Huang, Lu; Jiang, Yuyang; Chen, Yuzong
2017-01-01
Synergistic drug combinations enable enhanced therapeutics. Their discovery typically involves the measurement and assessment of the drug combination index (CI), which can be facilitated by the development and application of in-silico CI prediction tools. In this work, we developed and tested the ability of a mathematical model of the drug-targeted EGFR-ERK pathway to predict CIs and to analyze multiple synergistic drug combinations against observations. Our mathematical model was validated against the literature-reported signaling and drug response dynamics and the EGFR-MEK drug combination effect. The predicted CIs and combination therapeutic effects of the EGFR-BRaf, BRaf-MEK, FTI-MEK, and FTI-BRaf inhibitor combinations showed consistent synergism. Our results suggest that existing pathway models may potentially be extended into drug-targeted pathway models for predicting drug combination CI values, isobolograms, and drug-response surfaces, as well as for analyzing the dynamics of individual drugs and their combinations. With our model, the efficacy of potential drug combinations can be predicted. Our method complements existing in-silico methods (e.g., chemogenomic profiles and statistically inferred network models) by predicting drug combination effects from the perspective of pathway dynamics, using experimental or validated molecular kinetic constants, thereby facilitating the collective prediction of drug combination effects in diverse disease systems.
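A worked example of a combination index in the Chou-Talalay sense, which the paper predicts from pathway dynamics instead: CI = d1/Dx1 + d2/Dx2, where Dxi is the dose of drug i alone producing the observed combined effect. The Hill parameters below are invented for illustration only.

```python
def inverse_hill(effect, ec50, n):
    """Dose of a single drug producing the given fractional effect."""
    return ec50 * (effect / (1.0 - effect)) ** (1.0 / n)

d1, d2 = 0.3, 0.4                  # doses used in combination (assumed)
observed_effect = 0.6              # measured combined effect (assumed)
Dx1 = inverse_hill(observed_effect, ec50=1.0, n=2.0)
Dx2 = inverse_hill(observed_effect, ec50=1.5, n=1.5)
ci = d1 / Dx1 + d2 / Dx2
print(f"CI = {ci:.2f} ({'synergy' if ci < 1 else 'additivity/antagonism'})")
```

Here CI < 1 flags synergy: the combination achieves the effect with less total dose than Loewe additivity would require.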
NASA Astrophysics Data System (ADS)
Thornton, P. E.; Nacp Site Synthesis Participants
2010-12-01
The North American Carbon Program (NACP) synthesis effort includes an extensive intercomparison of modeled and observed ecosystem states and fluxes performed with multiple models across multiple sites. The participating models span a range of complexity and intended application, while the participating sites cover a broad range of natural and managed ecosystems in North America, from the subtropics to arctic tundra, and coastal to interior climates. A unique characteristic of this collaborative effort is that multiple independent observations are available at all sites: fluxes are measured with the eddy covariance technique, and standard biometric and field sampling methods provide estimates of standing stock and annual production in multiple categories. In addition, multiple modeling approaches are employed to make predictions at each site, varying, for example, in the use of diagnostic vs. prognostic leaf area index. Given multiple independent observational constraints and multiple classes of model, we evaluate the internal consistency of observations at each site, and use this information to extend previously derived estimates of uncertainty in the flux observations. Model results are then compared with all available observations, and models are ranked according to their consistency with each type of observation (high-frequency flux measurements, carbon stock, annual production). We demonstrate a range of internal consistency across the sites, and show that some models which perform well against one observational metric perform poorly against others. We use this analysis to construct a hypothesis for combining eddy covariance, biometrics, and other standard physiological and ecological measurements which, as data collection proceeds over several years, would present an increasingly challenging target for next-generation models.
Flood extent and water level estimation from SAR using data-model integration
NASA Astrophysics Data System (ADS)
Ajadi, O. A.; Meyer, F. J.
2017-12-01
Synthetic Aperture Radar (SAR) images have long been recognized as a valuable data source for flood mapping. Compared to other sources, SAR's weather and illumination independence and large area coverage at high spatial resolution support reliable, frequent, and detailed observations of developing flood events. Accordingly, SAR has the potential to greatly aid in the near-real-time monitoring of natural hazards, such as flood detection, if combined with automated image processing. This research works toward increasing the reliability and temporal sampling of SAR-derived flood hazard information by integrating information from multiple SAR sensors and SAR modalities (images and interferometric SAR (InSAR) coherence) and by combining SAR-derived change detection information with hydrologic and hydraulic flood forecast models. First, the combination of multi-temporal SAR intensity images and coherence information for generating flood extent maps is introduced. Least-squares estimation integrates flood information from multiple SAR sensors, thus increasing the temporal sampling. The SAR-based flood extent information is combined with a digital elevation model (DEM) to reduce false alarms and to estimate water depth and flood volume. The SAR-based flood extent map is assimilated into the Hydrologic Engineering Center River Analysis System (HEC-RAS) model to aid hydraulic model calibration. The developed technology improves the accuracy of flood information by exploiting information from both data and models. It also provides enhanced flood information to decision-makers, supporting the response to flood events and improving emergency relief efforts.
Automatic Prediction of Protein 3D Structures by Probabilistic Multi-template Homology Modeling.
Meier, Armin; Söding, Johannes
2015-10-01
Homology modeling predicts the 3D structure of a query protein based on the sequence alignment with one or more template proteins of known structure. Its great importance for biological research is owed to its speed, simplicity, reliability and wide applicability, covering more than half of the residues in protein sequence space. Although multiple templates have been shown to generally increase model quality over single templates, the information from multiple templates has so far been combined using empirically motivated, heuristic approaches. We present here a rigorous statistical framework for multi-template homology modeling. First, we find that the query proteins' atomic distance restraints can be accurately described by two-component Gaussian mixtures. This insight allowed us to apply the standard laws of probability theory to combine restraints from multiple templates. Second, we derive theoretically optimal weights to correct for the redundancy among related templates. Third, a heuristic template selection strategy is proposed. We improve the average GDT-ha model quality score by 11% over single template modeling and by 6.5% over a conventional multi-template approach on a set of 1000 query proteins. Robustness with respect to wrong constraints is likewise improved. We have integrated our multi-template modeling approach with the popular MODELLER homology modeling software in our free HHpred server http://toolkit.tuebingen.mpg.de/hhpred and also offer open source software for running MODELLER with the new restraints at https://bitbucket.org/soedinglab/hh-suite.
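A toy sketch of the statistical idea named above: if each template implies a Gaussian distance restraint on the same atom pair, the laws of probability combine them by multiplying densities, and for single Gaussians the product is again Gaussian with a precision-weighted mean. The paper uses two-component mixtures and theoretically derived redundancy weights; the values and the simple weighting below are illustrative assumptions.

```python
import numpy as np

def combine_gaussian_restraints(means, sigmas, weights=None):
    """Product of (optionally down-weighted) Gaussian restraints on one distance."""
    means, sigmas = np.asarray(means, float), np.asarray(sigmas, float)
    w = np.ones_like(means) if weights is None else np.asarray(weights, float)
    prec = w / sigmas**2                       # weighted precisions
    mean = (prec * means).sum() / prec.sum()   # precision-weighted mean
    sigma = prec.sum() ** -0.5                 # combined uncertainty
    return mean, sigma

# three templates suggesting distances of 8.0, 8.6, 7.9 Å; the second is
# down-weighted as redundant with the first (weights are made up)
print(combine_gaussian_restraints([8.0, 8.6, 7.9], [0.5, 0.8, 0.6],
                                  weights=[1.0, 0.5, 1.0]))
```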
NASA Astrophysics Data System (ADS)
Yuan, Cadmus C. A.
2015-12-01
Optical ray-tracing models have applied the Beer-Lambert law in single-luminescence-material systems to model the white light pattern from a blue LED light source. This paper extends that algorithm to a mixed multiple-luminescence-material system by introducing the equivalent excitation and emission spectra of the individual luminescence materials. The quantum efficiencies of the individual materials and the self-absorption of the multiple-luminescence-material system are considered as well. With this combination, researchers are able to model the luminescence characteristics of LED chip-scale packaging (CSP), which offers simple process steps and freedom in the geometrical dimensions of the luminescence material. The method is first validated against experimental results; a further parametric investigation is then conducted.
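A minimal Beer-Lambert sketch for blue light passing through stacked luminescence (phosphor) materials: transmitted intensity decays with each material's absorption coefficient times its path length, I = I0·exp(-Σ μi·Li). The coefficients and thicknesses are assumed values, not those of the paper's CSP devices.

```python
import numpy as np

def transmitted(I0, mu_list, L_list):
    """I = I0 * exp(-sum_i mu_i * L_i) through stacked absorbing materials."""
    return I0 * np.exp(-np.dot(mu_list, L_list))

I0 = 1.0                 # incident blue intensity (arbitrary units)
mu = [3.0, 1.2]          # absorption coefficients of the two materials (1/mm)
L = [0.10, 0.25]         # path lengths through each material (mm)
I = transmitted(I0, mu, L)
print(f"transmitted fraction: {I/I0:.3f}, absorbed: {1 - I/I0:.3f}")
```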
Choi, Ted; Eskin, Eleazar
2013-01-01
Gene expression data, in conjunction with information on genetic variants, have enabled studies to identify expression quantitative trait loci (eQTLs) or polymorphic locations in the genome that are associated with expression levels. Moreover, recent technological developments and cost decreases have further enabled studies to collect expression data in multiple tissues. One advantage of multiple tissue datasets is that studies can combine results from different tissues to identify eQTLs more accurately than examining each tissue separately. The idea of aggregating results of multiple tissues is closely related to the idea of meta-analysis which aggregates results of multiple genome-wide association studies to improve the power to detect associations. In principle, meta-analysis methods can be used to combine results from multiple tissues. However, eQTLs may have effects in only a single tissue, in all tissues, or in a subset of tissues with possibly different effect sizes. This heterogeneity in terms of effects across multiple tissues presents a key challenge to detect eQTLs. In this paper, we develop a framework that leverages two popular meta-analysis methods that address effect size heterogeneity to detect eQTLs across multiple tissues. We show by using simulations and multiple tissue data from mouse that our approach detects many eQTLs undetected by traditional eQTL methods. Additionally, our method provides an interpretation framework that accurately predicts whether an eQTL has an effect in a particular tissue. PMID:23785294
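A hedged sketch of the meta-analysis building block such methods leverage: a fixed-effects inverse-variance combination of per-tissue eQTL effect sizes into one z-score. The paper's framework additionally models effect heterogeneity across tissues; the numbers here are toy inputs.

```python
import numpy as np
from math import erfc, sqrt

beta = np.array([0.30, 0.25, 0.02])   # per-tissue eQTL effect sizes (toy)
se = np.array([0.10, 0.12, 0.11])     # their standard errors (toy)

w = 1.0 / se**2                       # inverse-variance weights
beta_meta = (w * beta).sum() / w.sum()
se_meta = w.sum() ** -0.5
z = beta_meta / se_meta
p = erfc(abs(z) / sqrt(2.0))          # two-sided normal p-value
print(f"meta beta = {beta_meta:.3f}, z = {z:.2f}, p = {p:.2e}")
```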
Large Interstellar Polarisation Survey:The Dust Elongation When Combining Optical-Submm Polarisation
NASA Astrophysics Data System (ADS)
Siebenmorgen, Ralf; Voschinnikov, N.; Bagnulo, S.; Cox, N.; Cami, J.
2017-10-01
The Planck mission has shown that the dust properties of the diffuse ISM vary on large scales, and we present variability on small scales. We present FORS spectro-polarimetry obtained by the Large Interstellar Polarisation Survey along 60 sight-lines. We fit these data, combined with extinction data, using a silicate and carbon dust model with grain sizes ranging from the molecular to the sub-micrometre domain. Large silicates of prolate shape account for the observed polarisation. For 37 sight-lines we complement our data set with UVES high-resolution spectra that establish the presence of single or multiple clouds along individual sight-lines. We find correlations between the extinction and Serkowski parameters and the dust model, and that the presence of multiple clouds depolarises the incoming radiation. However, there is a degeneracy in the dust model between alignment efficiency and the elongation of the grains. This degeneracy can be broken by combining polarisation data from the optical to the submm. This is of wide general interest, as it improves the accuracy of derived dust masses. We show that a flat IR/submm polarisation spectrum with substantial polarisation is predicted by the dust models.
Multiple-Tumor Analysis with MS_Combo Model (Use with BMDS Wizard)
Exercises and procedures on setting up and using the MS_Combo Wizard. The MS_Combo model provides BMD and BMDL estimates for the risk of getting one or more tumors for any combination of tumors observed in a single bioassay.
Design of a Xen Hybrid Multiple-Policy Model
NASA Astrophysics Data System (ADS)
Sun, Lei; Lin, Renhao; Zhu, Xianwei
2017-10-01
Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, Xen is used increasingly frequently, and XSM, the Xen security model, has received widespread attention as well. XSM does not establish a security-status classification, and it treats the virtual machine as the managed object, making Dom0 a single administrative domain that does not satisfy the principle of least privilege. To address these issues, we design a hybrid multiple-policy model named SV_HMPMD that organically integrates several single-policy security models, including DTE, RBAC, and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different granularity to different domains. To improve BLP's practicability, the model introduces multi-level security labels. To divide privileges in detail, we combine DTE with RBAC. To avoid excessive privilege, we limit the privileges of Dom0.
Distribution of model uncertainty across multiple data streams
NASA Astrophysics Data System (ADS)
Wutzler, Thomas
2014-05-01
When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or with multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a multiple of observation uncertainty that is constant across all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified both with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can help, and perhaps resolve, the problem of bias export to sparse data streams.
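An assumption-heavy sketch of the weighting problem: a Gaussian log-likelihood summed over several data streams, with one common variance-inflation factor c that makes model-data mismatch a constant multiple of observation uncertainty across streams (the paper estimates such a factor within the MCMC/annealing scheme). Residuals and uncertainties below are synthetic stand-ins.

```python
import numpy as np

def neg_log_lik(residuals_by_stream, obs_sigma_by_stream, c):
    """Gaussian negative log-likelihood with inflated observation variance (c*sigma)^2."""
    nll = 0.0
    for r, s in zip(residuals_by_stream, obs_sigma_by_stream):
        var = (c * s) ** 2
        nll += 0.5 * np.sum(r**2 / var + np.log(2 * np.pi * var))
    return nll

rich = np.random.default_rng(1).normal(0, 1.0, 1000)   # dense data stream
sparse = np.random.default_rng(2).normal(0, 1.0, 10)   # sparse data stream
for c in (0.5, 1.0, 2.0):
    print(c, neg_log_lik([rich, sparse], [0.8, 0.8], c))
```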
In vivo diagnosis of skin cancer using polarized and multiple scattered light spectroscopy
NASA Astrophysics Data System (ADS)
Bartlett, Matthew Allen
This thesis research presents the development of a non-invasive diagnostic technique for distinguishing between skin cancer, moles, and normal skin using polarized and multiple-scattered light spectroscopy. Polarized light incident on the skin is single-scattered by the epidermal layer and multiple-scattered by the dermal layer. The epidermal light maintains its initial polarization, while the light from the dermal layer becomes randomized and multiple-scattered. Mie theory was used to model the epidermal light as scattering from the intracellular organelles. The dermal signal was modeled as the diffusion of light through a localized semi-homogeneous volume. These models were confirmed using skin phantom experiments, studied with in vitro cell cultures, and applied to human skin for in vivo testing. A CCD-based spectroscopy system was developed to perform all of these experiments. The probe and the theory were tested on skin phantoms of latex spheres on top of a solid phantom. We next extended our phantom study to include in vitro cells on top of the solid phantom. Optical fluorescence microscope images revealed at least four distinct scatterers, including mitochondria, nucleoli, nuclei, and cell membranes. Single-scattering measurements on the mammalian cells consistently produced PSDs in the size range of the mitochondria. The clinical portion of the study consisted of in vivo measurements on cancer, mole, and normal skin spots. The clinical study combined the single-scattering model from the phantom and in vitro cell studies with the diffusion model for multiple-scattered light. When parameters from both layers were combined, we found that sensitivities of 100% and 77% could be obtained for detecting cancers and moles, respectively, given the number of lesions examined.
Automatically updating predictive modeling workflows support decision-making in drug design.
Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O
2016-09-01
Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
ERIC Educational Resources Information Center
Barrett, Gerald V.; And Others
The relative contribution of motivation to ability measures in predicting performance criteria of sales personnel from successive fiscal periods was investigated. In this context, the merits of a multiplicative and additive combination of motivation and ability measures were examined. The relationship between satisfaction and motivation and…
NASA Astrophysics Data System (ADS)
Hu, Meng-Han; Chen, Xiao-Jing; Ye, Peng-Chao; Chen, Xi; Shi, Yi-Jian; Zhai, Guang-Tao; Yang, Xiao-Kang
2016-11-01
The aim of this study was to use mid-infrared spectroscopy coupled with multiple-model population analysis based on Monte Carlo uninformative variable elimination (MC-UVE) for rapidly estimating the copper content of Tegillarca granosa. Copper-specific wavelengths were first extracted from the whole spectra, and subsequently a least-squares support vector machine was used to develop the prediction models. Compared with the prediction model based on the full wavelengths, models that used 100 MC-UVE-selected wavelengths without and with the binning operation showed comparable performance, with Rp values of 0.97 (root mean square error of prediction, 14.60 mg/kg) and 0.94 (20.85 mg/kg) versus 0.96 (17.27 mg/kg), as well as ratios of percent deviation (numbers of wavelengths) of 2.77 (407) and 1.84 (45) versus 2.32 (1762). The results demonstrated that the mid-infrared technique can be used for estimating the copper content of T. granosa. In addition, the proposed multiple-model population analysis can effectively eliminate uninformative, weakly informative, and interfering wavelengths, which substantially reduces model complexity and computation time.
The development of interior noise and vibration criteria
NASA Technical Reports Server (NTRS)
Leatherwood, J. D.; Clevenson, S. A.; Stephens, D. G.
1990-01-01
A generalized model was developed for estimating passenger discomfort response to combined noise and vibration. This model accounts for broadband noise and vibration spectra and multiple axes of vibration as well as the interactive effects of combined noise and vibration. The model has the unique capability of transforming individual components of noise/vibration environment into subjective comfort units and then combining these comfort units to produce a total index of passenger discomfort and useful sub-indices that typify passenger comfort within the environment. An overview of the model development is presented including the methodology employed, major elements of the model, model applications, and a brief description of a commercially available portable ride comfort meter based directly upon the model algorithms. Also discussed are potential criteria formats that account for the interactive effects of noise and vibration on human discomfort response.
A. Weiskittel; D. Maguire; R. Monserud
2007-01-01
Hybrid models offer the opportunity to improve future growth projections by combining advantages of both empirical and process-based modeling approaches. Hybrid models have been constructed in several regions and their performance relative to a purely empirical approach has varied. A hybrid model was constructed for intensively managed Douglas-fir plantations in the...
Hu, Jinsong; Van Valckenborgh, Els; Xu, Dehui; Menu, Eline; De Raeve, Hendrik; De Bruyne, Elke; Xu, Song; Van Camp, Ben; Handisides, Damian; Hart, Charles P; Vanderkerken, Karin
2013-09-01
Recently, we showed that hypoxia is a critical microenvironmental factor in multiple myeloma, and that the hypoxia-activated prodrug TH-302 selectively targets hypoxic multiple myeloma cells and improves multiple disease parameters in vivo. To explore approaches for sensitizing multiple myeloma cells to TH-302, we evaluated in this study the antitumor effect of TH-302 in combination with the clinically used proteasome inhibitor bortezomib. First, we show that TH-302 and bortezomib synergistically induce apoptosis in multiple myeloma cell lines in vitro. Second, we confirm that this synergism is related to the activation of caspase cascades and is mediated by changes of Bcl-2 family proteins. The combination treatment induces enhanced cleavage of caspase-3/8/9 and PARP, and therefore triggers apoptosis and enhances the cleavage of proapoptotic BH3-only protein BAD and BID as well as the antiapoptotic protein Mcl-1. In particular, TH-302 can abrogate the accumulation of antiapoptotic Mcl-1 induced by bortezomib, and decreases the expression of the prosurvival proteins Bcl-2 and Bcl-xL. Furthermore, we found that the induction of the proapoptotic BH3-only proteins PUMA (p53-upregulated modulator of apoptosis) and NOXA is associated with this synergism. In response to the genotoxic and endoplasmic reticulum stresses by TH-302 and bortezomib, the expression of PUMA and NOXA were upregulated in p53-dependent and -independent manners. Finally, in the murine 5T33MMvv model, we showed that the combination of TH-302 and bortezomib can improve multiple disease parameters and significantly prolong the survival of diseased mice. In conclusion, our studies provide a rationale for clinical evaluation of the combination of TH-302 and bortezomib in patients with multiple myeloma.
Bayesian module identification from multiple noisy networks.
Zamani Dadaneh, Siamak; Qian, Xiaoning
2016-12-01
Module identification has been studied extensively to gain a deeper understanding of complex systems, such as social networks as well as biological networks. Modules are often defined as groups of vertices in these networks that are topologically cohesive, with similar interaction patterns with the rest of the vertices. Most existing module identification algorithms assume that the given networks are faithfully measured without errors. However, in many real-world applications, for example when analyzing protein-protein interaction networks from high-throughput profiling techniques, there is significant noise, with both false positive and missing links between vertices. In this paper, we propose a new model for more robust module identification that takes advantage of multiple observed networks with significant noise, so that signals in multiple networks can be strengthened and the solution quality improved by combining information from various sources. We adopt a hierarchical Bayesian model to integrate multiple noisy snapshots that capture the underlying modular structure of the networks under study. By introducing a latent root assignment matrix, related to the instantaneous module assignments in all the observed networks, to capture the underlying modular structure and combine information across multiple networks, an efficient variational Bayes algorithm can be derived to accurately and robustly identify the underlying modules from multiple noisy networks. Experiments on synthetic and protein-protein interaction data sets show that our proposed model enhances both the accuracy and resolution of detecting cohesive modules, and it is less vulnerable to noise in the observed data. In addition, it shows higher power in predicting missing edges compared to individual-network methods.
Simulation of Plant Physiological Process Using Fuzzy Variables
Daniel L. Schmoldt
1991-01-01
Qualitative modelling can help us understand and project effects of multiple stresses on trees. It is not practical to collect and correlate empirical data for all combinations of plant/environments and human/climate stresses, especially for mature trees in natural settings. Therefore, a mechanistic model was developed to describe ecophysiological processes. This model...
Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh; D. Todd Jones-Farland
2013-01-01
Efforts to conserve regional biodiversity in the face of global climate change, habitat loss and fragmentation will depend on approaches that consider population processes at multiple scales. By combining habitat and demographic modeling, landscape-based population viability models effectively relate small-scale habitat and landscape patterns to regional population...
NASA Astrophysics Data System (ADS)
Ikegami, Seiji
2017-09-01
The switching model (PSM) developed in the previous paper is extended to obtain an "extended switching model" (ESM). In the ESM, the mixed electronic-and-nuclear energy-loss region, in addition to the electronic and nuclear energy-loss regions of the PSM, is taken into account analytically and appropriately. This model is combined with a small-angle multiple scattering range theory, developed by Marwick-Sigmund and Valdes-Arista, that considers both nuclear and electronic stopping effects, to formulate an improved range theory. The ESM is also combined with the multiple scattering theory without the small-angle approximation by Goudsmit-Saunderson. Furthermore, we apply the ESM to the lateral spread model of Marwick-Sigmund. Numerical calculations of the entire distribution functions, including that of the mixed region, are roughly and approximately possible; exact numerical calculation, however, may be impossible. Consequently, several preliminary numerical calculations of the electronic, mixed, and nuclear regions are performed to examine their underlying behavior with respect to the incident energy, the scattering angle, the outgoing projectile intensity, and the target thickness. We show numerical results not only for the PSM but also for the ESM; both are presented here for the first time. Since the theoretical relations are constructed using reduced variables, the calculations are made only for the case of C colliding on C.
NASA Astrophysics Data System (ADS)
Laakso, Thomas A.
2018-01-01
A combination of two anoxygenic pathways of photosynthesis could have helped to warm early Earth, according to geochemical models. These metabolisms, and attendant biogeochemical feedbacks, could have worked to counter the faint young Sun.
Using multipollutant models to understand the combined health effects of exposure to multiple pollutants is becoming more common. However, the complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from ...
Chesi, Marta; Matthews, Geoffrey M.; Garbitt, Victoria M.; Palmer, Stephen E.; Shortt, Jake; Lefebure, Marcus; Stewart, A. Keith; Johnstone, Ricky W.
2012-01-01
The attrition rate for anticancer drugs entering clinical trials is unacceptably high. For multiple myeloma (MM), we postulate that this is because of preclinical models that overemphasize the antiproliferative activity of drugs, and clinical trials performed in refractory end-stage patients. We validate the Vk*MYC transgenic mouse as a faithful model to predict single-agent drug activity in MM with a positive predictive value of 67% (4 of 6) for clinical activity, and a negative predictive value of 86% (6 of 7) for clinical inactivity. We identify 4 novel agents that should be prioritized for evaluation in clinical trials. Transplantation of Vk*MYC tumor cells into congenic mice selected for a more aggressive disease that models end-stage drug-resistant MM and responds only to combinations of drugs with single-agent activity in untreated Vk*MYC MM. We predict that combinations of standard agents, histone deacetylase inhibitors, bromodomain inhibitors, and hypoxia-activated prodrugs will demonstrate efficacy in the treatment of relapsed MM. PMID:22451422
Witkiewicz, Agnieszka K; Balaji, Uthra; Eslinger, Cody; McMillan, Elizabeth; Conway, William; Posner, Bruce; Mills, Gordon B; O'Reilly, Eileen M; Knudsen, Erik S
2016-08-16
Pancreatic ductal adenocarcinoma (PDAC) harbors the worst prognosis of any common solid tumor, and multiple failed clinical trials indicate therapeutic recalcitrance. Here, we use exome sequencing of patient tumors and find multiple conserved genetic alterations. However, the majority of tumors exhibit no clearly defined therapeutic target. High-throughput drug screens using patient-derived cell lines found rare examples of sensitivity to monotherapy, with most models requiring combination therapy. Using PDX models, we confirmed the effectiveness and selectivity of the identified treatment responses. Out of more than 500 single and combination drug regimens tested, no single treatment was effective for the majority of PDAC tumors, and each case had unique sensitivity profiles that could not be predicted using genetic analyses. These data indicate a shortcoming of reliance on genetic analysis to predict efficacy of currently available agents against PDAC and suggest that sensitivity profiling of patient-derived models could inform personalized therapy design for PDAC. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Photoneutron cross sections for 59Co : Systematic uncertainties of data from various experiments
NASA Astrophysics Data System (ADS)
Varlamov, V. V.; Davydov, A. I.; Ishkhanov, B. S.
2017-09-01
Data on the partial photoneutron reaction cross sections (γ,1n), (γ,2n), and (γ,3n) for 59Co obtained in two experiments carried out at Livermore (USA) were analyzed. The radiation sources in both experiments were monoenergetic photon beams from the annihilation in flight of relativistic positrons. The total yield was sorted by neutron multiplicity, taking into account the difference in the neutron energy spectra for different multiplicities. The two quoted studies differ in the method of determining the neutron multiplicity. Significant systematic disagreements exist between the results of the two experiments. These are considered to be caused by large systematic uncertainties in the partial cross sections, since the data do not satisfy physical criteria for reliability. To obtain reliable partial and total photoneutron reaction cross sections, a new method combining experimental data and theoretical evaluation was used. It is based on the experimental neutron yield cross section, which is rather independent of neutron multiplicity, and the transitional neutron multiplicity functions of the combined photonucleon reaction model (CPNRM). The model transitional multiplicity functions were used to decompose the neutron yield cross section into the contributions of the partial reactions. The results of the new evaluation differ noticeably from the partial cross sections obtained in the two experimental studies and are discussed.
Godinez, William J; Rohr, Karl
2015-02-01
Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
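A simplified 1-D sketch of the probabilistic-data-association update described above: several candidate measurements are folded into one Kalman update via a combined, probability-weighted innovation. Here the association weights come from Gaussian likelihoods alone, whereas the paper computes them from an image likelihood; all numbers are illustrative.

```python
import numpy as np

x_pred, P_pred = 10.0, 1.0          # predicted position and variance
R = 0.5                             # measurement noise variance
z = np.array([10.4, 12.5, 9.8])     # candidate detections near the track

S = P_pred + R                                  # innovation covariance
lik = np.exp(-0.5 * (z - x_pred) ** 2 / S)      # association likelihoods
beta = lik / lik.sum()                          # association probabilities
nu = np.sum(beta * (z - x_pred))                # combined innovation

K = P_pred / S                                  # Kalman gain
x_new = x_pred + K * nu
print(f"updated position: {x_new:.2f} (weights {np.round(beta, 2)})")
```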
Price competition and equilibrium analysis in multiple hybrid channel supply chain
NASA Astrophysics Data System (ADS)
Kuang, Guihua; Wang, Aihu; Sha, Jin
2017-06-01
The amazing boom of the Internet and the logistics industry prompts more and more enterprises to sell commodities through multiple channels. Such market conditions make the participants of a multiple hybrid channel supply chain compete with each other in the traditional and direct channels at the same time. This paper builds a two-echelon supply chain model with a single manufacturer and a single retailer, both of whom can choose different channels or channel combinations for their own sales, and then discusses the price competition and calculates the equilibrium price under different sales channel selection combinations. Our analysis shows that whether the manufacturer and retailer compete in the same or different channels, an equilibrium price does not necessarily exist in the multiple hybrid channel supply chain, and a wholesale price change is not always able to coordinate the supply chain completely. We also present the sufficient and necessary conditions for the existence of the equilibrium price and the coordinating wholesale price.
Blinov, Michael L.; Moraru, Ion I.
2011-01-01
Multi-state molecules and multi-component complexes are commonly involved in cellular signaling. Accounting for molecules that have multiple potential states, such as a protein that may be phosphorylated on multiple residues, and molecules that combine to form heterogeneous complexes located among multiple compartments, generates an effect of combinatorial complexity. Models involving relatively few signaling molecules can include thousands of distinct chemical species. Several software tools (StochSim, BioNetGen) are already available to deal with combinatorial complexity. Such tools need information standards if models are to be shared, jointly evaluated and developed. Here we discuss XML conventions that can be adopted for modeling biochemical reaction networks described by user-specified reaction rules. These could form a basis for possible future extensions of the Systems Biology Markup Language (SBML). PMID:21464833
Moghadam, Samira; Erfanmanesh, Maryam; Esmaeilzadeh, Abdolreza
2017-11-01
An autoimmune demyelinating disease of the Central Nervous System, Multiple Sclerosis is a chronic inflammatory condition that mostly affects young adults. Affected people face functional loss and severe pain. Most current MS treatments focus on suppressing the immune response. Approved drugs suppress the inflammatory process, but in fact there is no definite cure for Multiple Sclerosis. Recent work has demonstrated gene and cell therapy to be a promising approach in tissue regeneration. The authors propose a novel combined immune gene therapy for Multiple Sclerosis treatment, exploiting the anti-inflammatory and remyelinating properties of Interleukin-35 and Hepatocyte Growth Factor, respectively. In this hypothesis, Interleukin-35 and Hepatocyte Growth Factor are introduced into Mesenchymal Stem Cells of the EAE mouse model via an adenovirus-based vector. It is expected that Interleukin-35 and Hepatocyte Growth Factor genes expressed from MSCs could perform effectively in the immunotherapy of Multiple Sclerosis. Copyright © 2017. Published by Elsevier Ltd.
The U.S. EPA's SHEDS-Multimedia model was applied to enhance the understanding of children's exposures and doses to multiple pyrethroid pesticides, including major contributing chemicals and pathways. This paper presents combined dietary and residential exposure estimates and cum...
ERIC Educational Resources Information Center
McArdle, John J.; Grimm, Kevin J.; Hamagami, Fumiaki; Bowles, Ryan P.; Meredith, William
2009-01-01
The authors use multiple-sample longitudinal data from different test batteries to examine propositions about changes in constructs over the life span. The data come from 3 classic studies on intellectual abilities in which, in combination, 441 persons were repeatedly measured as many as 16 times over 70 years. They measured cognitive constructs…
Multiple Access Interference Reduction Using Received Response Code Sequence for DS-CDMA UWB System
NASA Astrophysics Data System (ADS)
Toh, Keat Beng; Tachikawa, Shin'ichi
This paper proposes a combination of a novel Received Response (RR) sequence at the transmitter and a Matched Filter-RAKE (MF-RAKE) combining scheme at the receiver for the Direct Sequence-Code Division Multiple Access Ultra Wideband (DS-CDMA UWB) multipath channel model. This paper also demonstrates the effectiveness of the RR sequence in Multiple Access Interference (MAI) reduction for the DS-CDMA UWB system. It suggests that using conventional binary code sequences such as the M sequence or the Gold sequence may generate extra MAI in the UWB system, making it difficult to collect the energy efficiently even when RAKE reception is applied at the receiver. The main purpose of the proposed system is to overcome the performance degradation of UWB transmission due to MAI that occurs during multiple accessing in the DS-CDMA UWB system. The proposed system improves performance by improving RAKE reception with the RR sequence, which reduces the MAI effect significantly. Simulation results verify that significant improvement can be obtained by the proposed system in UWB multipath channel models.
Visual Prediction Error Spreads Across Object Features in Human Visual Cortex
Summerfield, Christopher; Egner, Tobias
2016-01-01
Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features. PMID:27810936
Extended Parrondo's game and Brownian ratchets: strong and weak Parrondo effect.
Wu, Degang; Szeto, Kwok Yip
2014-02-01
Inspired by the flashing ratchet, Parrondo's game presents an apparently paradoxical situation. Parrondo's game consists of two individual games, game A and game B. Game A is a slightly losing coin-tossing game. Game B has two coins, with an integer parameter M. If the current cumulative capital (in discrete units) is a multiple of M, an unfavorable coin p_b is used, otherwise a favorable coin p_g is used. Paradoxically, a combination of game A and game B could lead to a winning game, which is the Parrondo effect. We extend the original Parrondo's game to include the possibility of M being either M_1 or M_2. Also, we distinguish between the strong Parrondo effect, i.e., two losing games combining to form a winning game, and the weak Parrondo effect, i.e., two games combining to form a better-performing game. We find that when M_2 is not a multiple of M_1, the combination of B(M_1) and B(M_2) has strong and weak Parrondo effects for some subsets of the parameter space (p_b, p_g), while there is neither a strong nor a weak effect when M_2 is a multiple of M_1. Furthermore, when M_2 is not a multiple of M_1, a stochastic mixture of game A may cancel the strong and weak Parrondo effects. Following a discretization scheme in the literature of Parrondo's game, we establish a link between our extended Parrondo's game and the analysis of a discrete Brownian ratchet. We find a relation between the Parrondo effect of our extended model and the macroscopic bias in a discrete ratchet. The slope of a ratchet potential can be mapped to the fair game condition in the extended model, so that under some conditions the macroscopic bias in a discrete ratchet can provide a good predictor for the game performance of the extended model. On the other hand, our extended model suggests a design of a ratchet in which the potential is a mixture of two periodic potentials.
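The paradox is easy to reproduce in simulation. The sketch below uses the classic single-M game with textbook parameters (p_b = 0.1, p_g = 0.75, eps = 0.005, M = 3), which are assumptions for illustration rather than the paper's extended B(M_1)/B(M_2) setup; the extension amounts to passing a second modulus.

```python
import numpy as np

rng = np.random.default_rng(0)

def play(choose_game, p_b=0.10, p_g=0.75, eps=0.005, M=3, rounds=200_000):
    """Return final capital after repeatedly playing games chosen by choose_game().

    Game A: win with probability 0.5 - eps (slightly losing on its own).
    Game B: if capital is a multiple of M, use the bad coin p_b - eps,
            otherwise the good coin p_g - eps (also losing on its own).
    """
    capital = 0
    for _ in range(rounds):
        if choose_game() == "A":
            p = 0.5 - eps
        else:
            p = (p_b if capital % M == 0 else p_g) - eps
        capital += 1 if rng.random() < p else -1
    return capital

print(play(lambda: "A"))                                  # drifts negative
print(play(lambda: "B"))                                  # drifts negative
print(play(lambda: "A" if rng.random() < 0.5 else "B"))   # tends to drift positive
```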
Integrated presentation of ecological risk from multiple stressors
Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman
2016-01-01
Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic. PMID:27782171
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
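As a rough illustration of how a weighted composite kernel enters the KELM solution, here is a hedged Python sketch. The kernel weights, kernel parameters, and regularization constant C are fixed by hand (in the paper all of them are optimized by QPSO), and the toy data are synthetic.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(X, Y, degree=2, c0=1.0):
    return (X @ Y.T + c0) ** degree

def composite_kernel(X, Y, weights=(0.6, 0.4)):
    # Weighted sum of base kernels; weights would come from QPSO in the paper
    return weights[0] * gaussian_kernel(X, Y) + weights[1] * poly_kernel(X, Y)

def kelm_fit(X, T, C=10.0):
    # KELM closed form: beta = (I/C + K)^(-1) T
    K = composite_kernel(X, X)
    n = len(X)
    return np.linalg.solve(np.eye(n) / C + K, T)

def kelm_predict(X_train, beta, X_test):
    return composite_kernel(X_test, X_train) @ beta

# Toy two-class problem with one-hot targets
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(2, 1, (30, 4))])
T = np.vstack([np.tile([1, 0], (30, 1)), np.tile([0, 1], (30, 1))])
beta = kelm_fit(X, T)
pred = kelm_predict(X, beta, X).argmax(1)   # class labels 0/1
```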
Combining fungal biopesticides and insecticide-treated bednets to enhance malaria control.
Hancock, Penelope A
2009-10-01
In developing strategies to control malaria vectors, there is increased interest in biological methods that do not cause instant vector mortality, but have sublethal and lethal effects at different ages and stages in the mosquito life cycle. These techniques, particularly if integrated with other vector control interventions, may produce substantial reductions in malaria transmission due to the total effect of alterations to multiple life history parameters at relevant points in the life-cycle and transmission-cycle of the vector. To quantify this effect, an analytically tractable gonotrophic cycle model of mosquito-malaria interactions is developed that unites existing continuous and discrete feeding cycle approaches. As a case study, the combined use of fungal biopesticides and insecticide treated bednets (ITNs) is considered. Low values of the equilibrium EIR and human prevalence were obtained when fungal biopesticides and ITNs were combined, even for scenarios where each intervention acting alone had relatively little impact. The effect of the combined interventions on the equilibrium EIR was at least as strong as the multiplicative effect of both interventions. For scenarios representing difficult conditions for malaria control, due to high transmission intensity and widespread insecticide resistance, the effect of the combined interventions on the equilibrium EIR was greater than the multiplicative effect, as a result of synergistic interactions between the interventions. Fungal biopesticide application was found to be most effective when ITN coverage was high, producing significant reductions in equilibrium prevalence for low levels of biopesticide coverage. By incorporating biological mechanisms relevant to vectorial capacity, continuous-time vector population models can increase their applicability to integrated vector management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Junenette L., E-mail: petersj@bu.edu; Patricia Fabian, M., E-mail: pfabian@bu.edu; Levy, Jonathan I., E-mail: jonlevy@bu.edu
High blood pressure is associated with exposure to multiple chemical and non-chemical risk factors, but epidemiological analyses to date have not assessed the combined effects of both chemical and non-chemical stressors on human populations in the context of cumulative risk assessment. We developed a novel modeling approach to evaluate the combined impact of lead, cadmium, polychlorinated biphenyls (PCBs), and multiple non-chemical risk factors on four blood pressure measures using data for adults aged ≥20 years from the National Health and Nutrition Examination Survey (1999–2008). We developed predictive models for chemical and other stressors. Structural equation models were applied to account for complex associations among predictors of stressors as well as blood pressure. Models showed that blood lead, serum PCBs, and established non-chemical stressors were significantly associated with blood pressure. Lead was the chemical stressor most predictive of diastolic blood pressure and mean arterial pressure, while PCBs had a greater influence on systolic blood pressure and pulse pressure, and blood cadmium was not a significant predictor of blood pressure. The simultaneously fit exposure models explained 34%, 43% and 52% of the variance for lead, cadmium and PCBs, respectively. The structural equation models were developed using predictors available from public data streams (e.g., U.S. Census), which would allow the models to be applied to any U.S. population exposed to these multiple stressors in order to identify high risk subpopulations, direct intervention strategies, and inform public policy. - Highlights: • We evaluated joint impact of chemical and non-chemical stressors on blood pressure. • We built predictive models for lead, cadmium and polychlorinated biphenyls (PCBs). • Our approach allows joint evaluation of predictors from population-specific data. • Lead, PCBs and established non-chemical stressors were related to blood pressure. • Framework allows cumulative risk assessment in specific geographic settings.
Lattice Entertain You: Paper Modeling of the 14 Bravais Lattices on Youtube
ERIC Educational Resources Information Center
Sein, Lawrence T., Jr.; Sein, Sarajane E.
2015-01-01
A system for the construction of double-sided paper models of the 14 Bravais lattices, and important crystal structures derived from them, is described. The system allows the combination of multiple unit cells, so as to better represent the overall three-dimensional structure. Students and instructors can view the models in use on the popular…
Díaz, Tania; Rodríguez, Vanina; Lozano, Ester; Mena, Mari-Pau; Calderón, Marcos; Rosiñol, Laura; Martínez, Antonio; Tovar, Natalia; Pérez-Galán, Patricia; Bladé, Joan; Roué, Gaël; de Larrea, Carlos Fernández
2017-01-01
Most patients with multiple myeloma treated with current therapies, including immunomodulatory drugs, eventually develop relapsed/refractory disease. Clinical activity of lenalidomide relies on degradation of Ikaros and the consequent reduction in IRF4 expression, both required for myeloma cell survival and involved in the regulation of MYC transcription. Thus, we sought to determine the combined effect of an MYC-interfering therapy with lenalidomide/dexamethasone. We analyzed the potential therapeutic effect of the combination of the BET bromodomain inhibitor CPI203 with the lenalidomide/dexamethasone regimen in myeloma cell lines. CPI203 exerted a dose-dependent cell growth inhibition in cell lines, even in lenalidomide/dexamethasone-resistant cells (median response at 0.5 μM: 65.4%), characterized by G1 cell cycle blockade and a concomitant inhibition of MYC and Ikaros signaling. These effects were potentiated by the addition of lenalidomide/dexamethasone. Results were validated in primary plasma cells from patients with multiple myeloma co-cultured with the mesenchymal stromal cell line stromaNKtert. Consistently, the drug combination evoked a 50% reduction in cell proliferation and correlated with basal Ikaros mRNA expression levels (P=0.04). Finally, in a SCID mouse xenotransplant model of myeloma, addition of CPI203 to lenalidomide/dexamethasone decreased tumor burden, evidenced by a lower glucose uptake and increase in the growth arrest marker GADD45B, with simultaneous downregulation of key transcription factors such as MYC, Ikaros and IRF4. Taken together, our data show that the combination of a BET bromodomain inhibitor with a lenalidomide-based regimen may represent a therapeutic approach to improve the response in relapsed/refractory patients with multiple myeloma, even in cases with suboptimal prior response to immunomodulatory drugs. PMID:28751557
Detection of person borne IEDs using multiple cooperative sensors
NASA Astrophysics Data System (ADS)
MacIntosh, Scott; Deming, Ross; Hansen, Thorkild; Kishan, Neel; Tang, Ling; Shea, Jing; Lang, Stephen
2011-06-01
The use of multiple cooperative sensors for the detection of person borne IEDs is investigated. The purpose of the effort is to evaluate the performance benefits of adding multiple sensor data streams into an aided threat detection algorithm, and to provide a quantitative analysis of which sensor data combinations improve overall detection performance. Testing includes both mannequins and human subjects with simulated suicide bomb devices of various configurations, materials, sizes and metal content. Aided threat recognition algorithms are being developed to test the detection performance of individual sensors against combined, fused sensor inputs. Sensors investigated include active and passive millimeter wave imaging systems, passive infrared, 3-D profiling sensors and acoustic imaging. The paper describes the experimental set-up and outlines the methodology behind a decision fusion algorithm based on the concept of a "body model".
A physics based method for combining multiple anatomy models with application to medical simulation.
Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David
2009-01-01
We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.
Identifying the sources of dissolved inorganic nitrogen (DIN) in estuaries is complicated by the multiple sources, temporal variability in inputs, and variations in transport. We used a hydrodynamic model to simulate the transport and uptake of three sources of DIN (oceanic, riv...
Meta-Analysis of Scale Reliability Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2013-01-01
A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
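A minimal sketch of one geometry-inspired fusion, assuming the log-Euclidean matrix mean expm(mean(logm(K_i))) as the combination rule; the paper studies several matrix means, and the ridge term and toy kernels here are illustrative (the authors' own code is in MATLAB).

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(kernels, ridge=1e-8):
    """Fuse PSD kernel matrices via the log-Euclidean mean, not a linear sum."""
    logs = []
    for K in kernels:
        K = 0.5 * (K + K.T) + ridge * np.eye(len(K))  # symmetrize, keep positive definite
        logs.append(logm(K))
    return expm(np.mean(logs, axis=0)).real

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
K_linear = X @ X.T                                            # linear kernel
K_gauss = np.exp(-0.1 * ((X[:, None] - X[None]) ** 2).sum(-1))  # Gaussian kernel
K_fused = log_euclidean_mean([K_linear, K_gauss])
```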
Nystrom, Elizabeth A.; Burns, Douglas A.
2011-01-01
TOPMODEL uses a topographic wetness index computed from surface-elevation data to simulate streamflow and subsurface-saturation state, represented by the saturation deficit. Depth to water table was computed from simulated saturation-deficit values using computed soil properties. In the Fishing Brook Watershed, TOPMODEL was calibrated to the natural logarithm of streamflow at the study area outlet and depth to water table at Sixmile Wetland using a combined multiple-objective function. Runoff and depth to water table responded differently to some of the model parameters, and the combined multiple-objective function balanced the goodness-of-fit of the model realizations with respect to these parameters. Results show that TOPMODEL reasonably simulated runoff and depth to water table during the study period. The simulated runoff had a Nash-Sutcliffe efficiency of 0.738, but the model underpredicted total runoff by 14 percent. Depth to water table computed from simulated saturation-deficit values matched observed water-table depth moderately well; the root mean squared error of absolute depth to water table was 91 millimeters (mm), compared to the mean observed depth to water table of 205 mm. The correlation coefficient for temporal depth-to-water-table fluctuations was 0.624. The variability of the TOPMODEL simulations was assessed using prediction intervals grouped using the combined multiple-objective function. The calibrated TOPMODEL results for the entire study area were applied to several subwatersheds within the study area using computed hydrogeomorphic properties of the subwatersheds.
Results of the Simulation of the HTR-Proteus Core 4.2 Using PEBBED-COMBINE: FY10 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hans Gougar
2010-07-01
The Idaho National Laboratory's deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. This report is a follow-on to INL/EXT-09-16620, in which the same calculation was performed using earlier versions of the codes and less developed methods. In that report, results indicated that the cross sections generated using COMBINE-7.0 did not yield satisfactory estimates of keff. It was concluded in the report that the modeling of control rods was not satisfactory. In the past year, improvements to the homogenization capability in COMBINE have enabled the explicit modeling of TRISO particles, pebbles, and heterogeneous core zones including control rod regions using a new multi-scale version of COMBINE in which the 1-dimensional discrete ordinate transport code ANISN has been integrated. The new COMBINE is shown to yield benchmark-quality results for pebble unit cell models, the first step in preparing few-group diffusion parameters for core simulations. In this report, the full critical core is modeled once again, but with cross sections generated using the capabilities and physics of the improved COMBINE code. The new PEBBED-COMBINE model enables the exact modeling of the pebbles and control rod region along with a better approximation of structures in the reflector. Initial results for the core multiplication factor indicate significant improvement in the INL's tools for modeling the neutronic properties of a pebble bed reactor. Errors on the order of 1.6-2.5% in keff are obtained, a significant improvement over the 5-6% error observed in the earlier report. This is acceptable for a code system and model in the early stages of development but still too high for a production code. Analysis of a simpler core model indicates an over-prediction of the flux in the low end of the thermal spectrum. Causes of this discrepancy are under investigation. New homogenization techniques and assumptions were used in this analysis and as such, they require further confirmation and validation. Further refinement and review of the complex Proteus core model are likely to reduce the errors even further.
Combining Search Engines for Comparative Proteomics
Tabb, David
2012-01-01
Many proteomics laboratories have found spectral counting to be an ideal way to recognize biomarkers that differentiate cohorts of samples. This approach assumes that proteins that differ in quantity between samples will generate different numbers of identifiable tandem mass spectra. Increasingly, researchers are employing multiple search engines to maximize the identifications generated from data collections. This talk evaluates four strategies to combine information from multiple search engines in comparative proteomics. The “Count Sum” model pools the spectra across search engines. The “Vote Counting” model combines the judgments from each search engine by protein. Two other models employ parametric and non-parametric analyses of protein-specific p-values from different search engines. We evaluated the four strategies in two different data sets. The ABRF iPRG 2009 study generated five LC-MS/MS analyses of “red” E. coli and five analyses of “yellow” E. coli. NCI CPTAC Study 6 generated five concentrations of Sigma UPS1 spiked into a yeast background. All data were identified with X!Tandem, Sequest, MyriMatch, and TagRecon. For both sample types, “Vote Counting” appeared to manage the diverse identification sets most effectively, yielding heightened discrimination as more search engines were added.
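The two simplest strategies can be sketched in a few lines of Python on made-up spectral counts; the engine count, the counts themselves, and the 3-of-4 voting threshold are assumptions for illustration, not values from the talk.

```python
import numpy as np

# Rows: proteins; columns: search engines. counts_a / counts_b hold spectral
# counts for the same proteins in cohorts A and B (synthetic numbers).
counts_a = np.array([[12, 10, 14, 11],    # protein 1, engines 1-4
                     [ 3,  4,  2,  5]])   # protein 2
counts_b = np.array([[ 5,  6,  4,  6],
                     [ 3,  3,  4,  4]])

# "Count Sum": pool spectra across engines, then compare cohorts per protein.
pooled_a, pooled_b = counts_a.sum(1), counts_b.sum(1)
count_sum_ratio = pooled_a / pooled_b

# "Vote Counting": each engine votes on the direction of change per protein;
# a protein is called differential if enough engines agree.
votes = np.sign(counts_a - counts_b).sum(1)
called = np.abs(votes) >= 3               # e.g. at least 3 of 4 engines agree
```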
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the collection of studies that will be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur on the observation-level (time-varying) or the subject-level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogenous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Alfano, Robert R. (Inventor); Cai, Wei (Inventor)
2007-01-01
A reconstruction technique for reducing the computational burden in 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to thereby provide high-speed inverse computations. The inverse algorithm uses a hybrid transfer to provide fast Fourier inversion for data of multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.
NASA Technical Reports Server (NTRS)
Mei, Chuh; Shen, Mo-How
1987-01-01
Multiple-mode nonlinear forced vibration of a beam was analyzed by the finite element method. Inplane (longitudinal) displacement and inertia (IDI) are considered in the formulation. By combining the finite element method and nonlinear theory, more realistic models of structural response are obtained more easily and faster.
Plessis, Anne; Hafemeister, Christoph; Wilkins, Olivia; Gonzaga, Zennia Jean; Meyer, Rachel Sarah; Pires, Inês; Müller, Christian; Septiningsih, Endang M; Bonneau, Richard; Purugganan, Michael
2015-11-26
Plants rely on transcriptional dynamics to respond to multiple climatic fluctuations and contexts in nature. We analyzed the genome-wide gene expression patterns of rice (Oryza sativa) growing in rainfed and irrigated fields during two distinct tropical seasons and determined simple linear models that relate transcriptomic variation to climatic fluctuations. These models combine multiple environmental parameters to account for patterns of expression in the field of co-expressed gene clusters. We examined the similarities of our environmental models between tropical and temperate field conditions, using previously published data. We found that field type and macroclimate had broad impacts on transcriptional responses to environmental fluctuations, especially for genes involved in photosynthesis and development. Nevertheless, variation in solar radiation and temperature at the timescale of hours had reproducible effects across environmental contexts. These results provide a basis for broad-based predictive modeling of plant gene expression in the field.
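A hedged sketch of the kind of simple linear model described, with synthetic stand-ins for the field measurements: the mean expression of a co-expressed gene cluster is regressed on a design matrix combining environmental parameters via ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 48                                           # hourly field samples (synthetic)
temperature = 25 + 5 * np.sin(np.linspace(0, 4 * np.pi, n)) + rng.normal(0, 0.5, n)
solar = np.clip(np.sin(np.linspace(0, 4 * np.pi, n)), 0, None) + rng.normal(0, 0.05, n)
# Placeholder for the mean expression of one co-expressed cluster
expression = 1.2 * solar - 0.3 * temperature + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), temperature, solar])   # combined design matrix
coef, *_ = np.linalg.lstsq(X, expression, rcond=None)
print(coef)   # intercept, temperature effect, solar-radiation effect
```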
Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William
2014-01-01
Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853
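To illustrate the joint additive-plus-dominance coding with group-level sparsity, here is a plain ISTA implementation of the group LASSO, in which each SNP's additive column (allele count 0/1/2) and dominance column (heterozygote indicator) form one group, so a SNP enters or leaves the model as a unit. This is a generic sketch, not LLARRMA-dawg itself: the resampling, fractional observation weights, and calibrated randomized penalization are omitted, and all data are synthetic.

```python
import numpy as np

def group_lasso(X, y, groups, lam=0.1, iters=500):
    """ISTA for 0.5/n * ||y - Xb||^2 + lam * sum_g ||b_g||_2."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2        # 1/L for the smooth part
    b = np.zeros(p)
    for _ in range(iters):
        g = X.T @ (X @ b - y) / n               # gradient of the squared loss
        z = b - step * g
        for idx in groups:                       # block soft-thresholding (prox)
            nrm = np.linalg.norm(z[idx])
            z[idx] = 0.0 if nrm == 0 else max(0.0, 1 - step * lam / nrm) * z[idx]
        b = z
    return b

rng = np.random.default_rng(3)
genotypes = rng.integers(0, 3, size=(200, 10))        # 10 SNPs coded 0/1/2
A = genotypes.astype(float)                           # additive coding
D = (genotypes == 1).astype(float)                    # dominance coding
X = np.column_stack([A, D])
groups = [np.array([j, j + 10]) for j in range(10)]   # (additive, dominance) per SNP
y = 1.0 * A[:, 0] + 0.8 * D[:, 0] + rng.normal(0, 1, 200)
beta = group_lasso(X, y, groups, lam=0.05)            # SNP 1's group should survive
```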
Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G; Tybjaerg-Hansen, Anne; Sing, Charles F
2009-05-01
This article extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (Genet Epidemiol 31:515-527) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method to construct combinations of predictors, permutation testing to assess the significance of each combination, theoretical arguments for incorporating a multiple testing adjustment and bootstrap resampling to produce the confidence intervals. An illustration of this revised PRIM utilizing a sample of 2,258 European male participants from the Copenhagen City Heart Study is presented that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors.
Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G.; Tybjærg-Hansen, Anne; Sing, Charles F.
2009-01-01
This paper extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (2007) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method (CPM) to construct combinations of predictors, permutation testing to assess the significance of each combination, theoretical arguments for incorporating a multiple testing adjustment and bootstrap resampling to produce the confidence intervals. An illustration of this revised PRIM utilizing a sample of 2258 European male participants from the Copenhagen City Heart Study is presented that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors. PMID:19025787
Focks, Andreas; Belgers, Dick; Boerwinkel, Marie-Claire; Buijse, Laura; Roessink, Ivo; Van den Brink, Paul J
2018-05-01
Exposure patterns in ecotoxicological experiments often do not match the exposure profiles for which a risk assessment needs to be performed. This limitation can be overcome by using toxicokinetic-toxicodynamic (TKTD) models for the prediction of effects under time-variable exposure. For the use of TKTD models in the environmental risk assessment of chemicals, it is required to calibrate and validate the model for specific compound-species combinations. In this study, the survival of macroinvertebrates after exposure to neonicotinoid insecticides was modelled using TKTD models from the General Unified Threshold models of Survival (GUTS) framework. The models were calibrated on existing survival data from acute or chronic tests under a static exposure regime. Validation experiments were performed for two sets of species-compound combinations: one set focussed on the sensitivity of multiple species to a single compound, imidacloprid, and the other set on the effects of multiple compounds for a single species, i.e., the three neonicotinoid compounds imidacloprid, thiacloprid and thiamethoxam, on the survival of the mayfly Cloeon dipterum. The calibrated models were used to predict survival over time, including uncertainty ranges, for the different time-variable exposure profiles used in the validation experiments. From the comparison between observed and predicted survival, it appeared that the accuracy of the model predictions was acceptable for four of five tested species in the multiple species data set. For compounds such as neonicotinoids, which are known to have the potential to show increased toxicity under prolonged exposure, the calibration and validation of TKTD models for survival ideally needs to be performed by considering calibration data from both acute and chronic tests.
A Crack Growth Evaluation Method for Interacting Multiple Cracks
NASA Astrophysics Data System (ADS)
Kamaya, Masayuki
When stress corrosion cracking or corrosion fatigue occurs, multiple cracks are frequently initiated in the same area. According to Section XI of the ASME Boiler and Pressure Vessel Code, multiple cracks are considered as a single combined crack in crack growth analysis if the specified conditions are satisfied. In crack growth processes, however, no prescription for the interference between multiple cracks is given in this code. The JSME Post-Construction Code, issued in May 2000, prescribes the conditions of crack coalescence in the crack growth process. This study aimed to extend this prescription to more general cases. A simulation model was applied to simulate the crack growth process, taking into account the interference between two cracks. This model made it possible to analyze multiple crack growth behaviors for many cases (e.g., different relative positions and lengths) that could not be studied by experiment alone. Based on these analyses, a new crack growth analysis method was suggested for taking into account the interference between multiple cracks.
Multiple graph regularized protein domain ranking.
Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin
2012-11-19
Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
Multiple graph regularized protein domain ranking
2012-01-01
Background Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. PMID:23157331
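A minimal numpy sketch of the scoring step with a fixed convex combination of graph Laplacians; MultiG-Rank itself learns the weights jointly with the ranking scores by alternating minimization, which is omitted here, and the graphs and query are synthetic.

```python
import numpy as np

def laplacian(W):
    return np.diag(W.sum(1)) - W

def graph_rank(laplacians, weights, y, alpha=1.0):
    """Minimize alpha * f'Lf + ||f - y||^2 with L a weighted Laplacian mix."""
    L = sum(w * Lg for w, Lg in zip(weights, laplacians))
    n = len(y)
    return np.linalg.solve(np.eye(n) + alpha * L, y)   # closed-form scores

def random_graph(n, rng):
    W = rng.random((n, n))
    W = 0.5 * (W + W.T)            # symmetric affinity matrix
    np.fill_diagonal(W, 0.0)
    return W

rng = np.random.default_rng(4)
n = 8
L1, L2 = laplacian(random_graph(n, rng)), laplacian(random_graph(n, rng))
y = np.zeros(n); y[0] = 1.0        # query item
scores = graph_rank([L1, L2], [0.7, 0.3], y)
ranking = np.argsort(-scores)      # items ordered by relevance to the query
```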
NASA Astrophysics Data System (ADS)
Everaert, Gert; Deschutter, Yana; De Troch, Marleen; Janssen, Colin R.; De Schamphelaere, Karel
2018-05-01
The effect of multiple stressors on marine ecosystems remains poorly understood and most of the knowledge available is related to phytoplankton. To partly address this knowledge gap, we tested whether combining multimodel inference with generalized additive modelling could quantify the relative contribution of environmental variables to the population dynamics of a zooplankton species in the Belgian part of the North Sea. Hence, we have quantified the relative contribution of oceanographic variables (e.g. water temperature, salinity, nutrient concentrations, and chlorophyll a concentrations) and anthropogenic chemicals (i.e. polychlorinated biphenyls) to the density of Acartia clausi. We found that models with water temperature and chlorophyll a concentration explained ca. 73% of the variability in the population density of the marine copepod. Multimodel inference in combination with regression-based models is a generic way to disentangle and quantify multiple stressor-induced changes in marine ecosystems. Future-oriented simulations of copepod densities suggested increased copepod densities under predicted environmental changes.
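The multimodel-inference step can be sketched with Akaike weights. The code below fits every candidate predictor subset by ordinary least squares (the paper fits generalized additive models), converts AIC differences to weights, and sums the weights of models containing each variable as its relative importance; the environmental variables and data are synthetic placeholders.

```python
import numpy as np
from itertools import combinations

def aic_ols(X, y):
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    return n * np.log(rss / n) + 2 * (k + 1)   # Gaussian AIC (variance counted)

rng = np.random.default_rng(5)
n = 120
env = {"temp": rng.normal(size=n), "chla": rng.normal(size=n),
       "salinity": rng.normal(size=n)}
density = 2.0 * env["temp"] + 1.5 * env["chla"] + rng.normal(0, 1, n)

names = list(env)
models, aics = [], []
for r in range(1, len(names) + 1):
    for subset in combinations(names, r):       # all candidate models
        X = np.column_stack([np.ones(n)] + [env[v] for v in subset])
        models.append(subset)
        aics.append(aic_ols(X, density))

d = np.array(aics) - min(aics)
w = np.exp(-0.5 * d); w /= w.sum()              # Akaike weights
importance = {v: w[[v in m for m in models]].sum() for v in names}
print(importance)                               # temp and chla should dominate
```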
Wang, Jinjia; Zhang, Yanna
2015-02-01
Brain-computer interface (BCI) systems identify brain signals by extracting features from them. In view of the limitations of the autoregressive-model feature extraction method and of traditional principal component analysis in dealing with multichannel signals, this paper presents a multichannel feature extraction method that combines a multivariate autoregressive (MVAR) model with multilinear principal component analysis (MPCA), applied to the recognition of magnetoencephalography (MEG) and electroencephalography (EEG) signals. Firstly, we calculated the MVAR model coefficient matrix of the MEG/EEG signals using this method, and then reduced the dimensionality using MPCA. Finally, we recognized brain signals with a Bayes classifier. The key innovation of our investigation is the extension of the traditional single-channel feature extraction method to the multichannel case. We carried out experiments using the data groups IV-III and IV-I. The experimental results proved that the method proposed in this paper is feasible.
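A hedged sketch of the MVAR stage: the order-p coefficient matrices are estimated by least squares on lagged copies of the multichannel signal and then flattened into a feature vector for the subsequent MPCA/classification steps (not shown). The channel count and model order are illustrative assumptions.

```python
import numpy as np

def mvar_fit(X, p=3):
    """X: (T, C) multichannel signal. Returns (p, C, C) coefficient matrices A_1..A_p."""
    T, C = X.shape
    Y = X[p:]                                                   # targets, rows p..T-1
    Z = np.hstack([X[p - k - 1:T - k - 1] for k in range(p)])   # lags 1..p, stacked
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)                   # shape (p*C, C)
    # A_k[i, j]: influence of channel j at lag k+1 on channel i
    return A.T.reshape(C, p, C).transpose(1, 0, 2)

rng = np.random.default_rng(6)
trial = rng.normal(size=(500, 8))          # one synthetic 8-channel EEG trial
features = mvar_fit(trial).ravel()         # feature vector fed to MPCA / classifier
```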
Towal, R Blythe; Mormann, Milica; Koch, Christof
2013-10-01
Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions.
Towal, R. Blythe; Mormann, Milica; Koch, Christof
2013-01-01
Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift–diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions. PMID:24019496
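The winning model's core mechanism can be sketched as a race of drift-diffusion accumulators whose drifts mix saliency and value in the reported one-third/two-thirds ratio; the bound, noise level, and time step below are illustrative assumptions, not fitted parameters from the study.

```python
import numpy as np

def ddm_race(saliency, value, w_s=1/3, w_v=2/3, bound=1.0, dt=0.01,
             noise=0.3, rng=None):
    """Race of one DDM accumulator per alternative; first to the bound wins."""
    rng = rng or np.random.default_rng()
    drift = w_s * np.asarray(saliency, float) + w_v * np.asarray(value, float)
    x = np.zeros(len(drift))
    t = 0.0
    while x.max() < bound:                 # integrate drift plus diffusion noise
        x += drift * dt + noise * np.sqrt(dt) * rng.normal(size=len(x))
        t += dt
    return int(np.argmax(x)), t            # chosen alternative and decision time

# Three food items: item 0 is salient, item 1 is valuable
choice, rt = ddm_race(saliency=[0.9, 0.2, 0.4], value=[0.3, 0.8, 0.5])
```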
Almalik, Osama; Nijhuis, Michiel B; van den Heuvel, Edwin R
2014-01-01
Shelf-life estimation usually requires that at least three registration batches are tested for stability at multiple storage conditions. The shelf-life estimates are often obtained by linear regression analysis per storage condition, an approach implicitly suggested by ICH guideline Q1E. A linear regression analysis combining all data from multiple storage conditions was recently proposed in the literature when variances are homogeneous across storage conditions. The combined analysis is expected to perform better than the separate analysis per storage condition, since pooling data would lead to an improved estimate of the variation and higher numbers of degrees of freedom, but this is not evident for shelf-life estimation. Indeed, the two approaches treat the observed initial batch results, the intercepts in the model, and poolability of batches differently, which may eliminate or reduce the expected advantage of the combined approach with respect to the separate approach. Therefore, a simulation study was performed to compare the distribution of simulated shelf-life estimates on several characteristics between the two approaches and to quantify the difference in shelf-life estimates. In general, the combined statistical analysis does estimate the true shelf life more consistently and precisely than the analysis per storage condition, but it did not outperform the separate analysis in all circumstances.
Li, Miao; Li, Jun; Zhou, Yiyu
2015-12-08
The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially for the case with uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts: the MM-LMB filter and the MM-LMB smoother. For the MM-LMB filter, the original LMB filter is applied to track-before-detect based on target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A Sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with the forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing.
Li, Miao; Li, Jun; Zhou, Yiyu
2015-01-01
The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially for the case with uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts—MM-LMB filter and MM-LMB smoother. For the MM-LMB filter, original LMB filter is applied to track-before-detect based on target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A Sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with the forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing. PMID:26670234
Visualization of the Eastern Renewable Generation Integration Study: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron
The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.
To understand the combined health effects of exposure to ambient air pollutant mixtures, it is becoming more common to include multiple pollutants in epidemiologic models. However, the complex spatial and temporal pattern of ambient pollutant concentrations and related exposures ...
A mixed integer program to model spatial wildfire behavior and suppression placement decisions
Erin J. Belval; Yu Wei; Michael Bevers
2015-01-01
Wildfire suppression combines multiple objectives and dynamic fire behavior to form a complex problem for decision makers. This paper presents a mixed integer program designed to explore integrating spatial fire behavior and suppression placement decisions into a mathematical programming framework. Fire behavior and suppression placement decisions are modeled using...
Approximation of reliabilities for multiple-trait model with maternal effects.
Strabel, T; Misztal, I; Bertrand, J K
2001-04-01
Reliabilities for a multiple-trait maternal model were obtained by combining reliabilities obtained from single-trait models. Single-trait reliabilities were obtained using an approximation that supported models with additive and permanent environmental effects. For the direct effect, the maternal and permanent environmental variances were assigned to the residual. For the maternal effect, variance of the direct effect was assigned to the residual. Data included 10,550 birth weight, 11,819 weaning weight, and 3,617 postweaning gain records of Senepol cattle. Reliabilities were obtained by generalized inversion and by using single-trait and multiple-trait approximation methods. Some reliabilities obtained by inversion were negative because inbreeding was ignored in calculating the inverse of the relationship matrix. The multiple-trait approximation method reduced the bias of approximation when compared with the single-trait method. The correlations between reliabilities obtained by inversion and by multiple-trait procedures for the direct effect were 0.85 for birth weight, 0.94 for weaning weight, and 0.96 for postweaning gain. Correlations for maternal effects for birth weight and weaning weight were 0.96 to 0.98 for both approximations. Further improvements can be achieved by refining the single-trait procedures.
Lombardi, Federica; Golla, Kalyan; Fitzpatrick, Darren J.; Casey, Fergal P.; Moran, Niamh; Shields, Denis C.
2015-01-01
Identifying effective therapeutic drug combinations that modulate complex signaling pathways in platelets is central to the advancement of effective anti-thrombotic therapies. However, there is no systems model of the platelet that predicts responses to different inhibitor combinations. We developed an approach which goes beyond current inhibitor-inhibitor combination screening to efficiently consider other signaling aspects that may give insights into the behaviour of the platelet as a system. We investigated combinations of platelet inhibitors and activators. We evaluated three distinct strands of information, namely: activator-inhibitor combination screens (testing a panel of inhibitors against a panel of activators); inhibitor-inhibitor synergy screens; and activator-activator synergy screens. We demonstrated how these analyses may be efficiently performed, both experimentally and computationally, to identify particular combinations of most interest. Robust tests of activator-activator synergy and of inhibitor-inhibitor synergy required combinations to show significant excesses over the double doses of each component. Modeling identified multiple effects of an inhibitor of the P2Y12 ADP receptor, and complementarity between inhibitor-inhibitor synergy effects and activator-inhibitor combination effects. This approach accelerates the mapping of combination effects of compounds to develop combinations that may be therapeutically beneficial. We integrated the three information sources into a unified model that predicted the benefits of a triple drug combination targeting ADP, thromboxane and thrombin signaling. PMID:25875950
The Research of Multiple Attenuation Based on Feedback Iteration and Independent Component Analysis
NASA Astrophysics Data System (ADS)
Xu, X.; Tong, S.; Wang, L.
2017-12-01
Multiple suppression is a difficult problem in seismic data processing. Traditional multiple-attenuation technology is based on minimizing the output energy of the seismic signal; this criterion relies on second-order statistics and cannot attenuate multiples when the primaries and multiples are non-orthogonal. To solve this problem, we combine a feedback iteration method based on the wave equation with an improved independent component analysis (ICA) based on higher-order statistics to suppress the multiples. We first use the iterative feedback method to predict the free-surface multiples of each order. Then, to match the predicted multiples to the true multiples in amplitude and phase, we design an expanded pseudo-multichannel matching filter that yields a more accurate matching result. Finally, we apply an improved fast ICA algorithm, based on the maximum non-Gaussianity criterion for the output signal, to the matched multiples and obtain a better separation of the primaries and the multiples. The advantage of our method is that it requires no prior information for multiple prediction and achieves better separation. The method has been applied to several synthetic datasets generated by finite-difference modeling and to the Sigsbee2B model multiple data; the primaries and multiples are non-orthogonal in these models. The experiments show that three to four iterations suffice to obtain accurate multiple predictions. Using our matching method and fast ICA adaptive multiple subtraction, we can effectively preserve the primary energy in the seismic records while suppressing the free-surface multiples, especially those related to the middle and deep sections.
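A minimal sketch of the final separation step is given below, using scikit-learn's stock FastICA rather than the authors' improved algorithm; the toy traces, the predicted-multiple signal, and all parameter values are illustrative assumptions, and the wave-equation prediction and pseudo-multichannel matching stages are not reproduced.

```python
# Sketch: ICA-based separation of primaries and multiples from a recorded
# trace plus a (matched) multiple prediction. Illustrative only.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
primary = np.sin(2 * np.pi * 30 * t) * np.exp(-3 * t)              # toy primary event
multiple = 0.6 * np.sin(2 * np.pi * 30 * (t - 0.2)) * (t > 0.2)    # toy surface multiple
recorded = primary + multiple + 0.01 * rng.standard_normal(t.size)
predicted = 0.9 * np.sin(2 * np.pi * 30 * (t - 0.2)) * (t > 0.2)   # stand-in prediction

# Stack the recorded trace and the predicted multiples as two mixtures;
# ICA extracts maximally non-Gaussian components (primaries vs. multiples,
# recovered only up to scale and sign).
X = np.c_[recorded, predicted]
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
S = ica.fit_transform(X)   # columns: estimated primary and multiple components
```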
Attention Modulates Spatial Precision in Multiple-Object Tracking.
Srivastava, Nisheeth; Vul, Ed
2016-01-01
We present a computational model of multiple-object tracking that makes trial-level predictions about the allocation of visual attention and the effect of this allocation on observers' ability to track multiple objects simultaneously. This model follows the intuition that increased attention to a location increases the spatial resolution of its internal representation. Using a combination of empirical and computational experiments, we demonstrate the existence of a tight coupling between cognitive and perceptual resources in this task: low-level tracking of objects generates bottom-up predictions of error likelihood, and high-level attention allocation selectively reduces error probabilities at attended locations while increasing them at non-attended locations. Whereas earlier models of multiple-object tracking have predicted the big-picture relationship between stimulus complexity and response accuracy, our approach makes accurate predictions of both the macro-scale effect of target number and velocity on tracking difficulty and micro-scale variations in difficulty across individual trials and targets arising from the idiosyncratic within-trial interactions of targets and distractors. Copyright © 2016 Cognitive Science Society, Inc.
NASA Technical Reports Server (NTRS)
Netterfield, C. B.; Ade, P. A. R.; Bock, J. J.; Bond, J. R.; Borrill, J.; Boscaleri, A.; Coble, K.; Contaldi, C. R.; Crill, B. P.; Bernardis, P. de;
2001-01-01
This paper presents a measurement of the angular power spectrum of the Cosmic Microwave Background from l = 75 to l = 1025 (10' to 5 degrees) from a combined analysis of four 150 GHz channels in the BOOMERANG experiment. The spectrum contains multiple peaks and minima, as predicted by standard adiabatic-inflationary models in which the primordial plasma undergoes acoustic oscillations.
A rat model of concurrent combined injuries (polytrauma)
Akscyn, Robert M; Franklin, J Lee; Gavrikova, Tatyana A; Schwacha, Martin G; Messina, Joseph L
2015-01-01
Polytrauma, a combination of injuries to more than one body part or organ system, is common in modern warfare and in automobile and industrial accidents. The combination of injuries can include burn injury, fracture, hemorrhage, trauma to the extremities, and trauma to specific organ systems. To investigate the effects of combined injuries, we have developed a new and highly reproducible model of polytrauma. This model combines burn injury with soft tissue and gastrointestinal (GI) tract trauma. Male Sprague Dawley rats were subjected to a 15-20% total body surface area scald burn, or a single puncture of the cecum with a G30 needle, or the combination of both injuries (polytrauma). Unlike many "double hit" models, the injuries in our model were performed simultaneously. We asked whether multiple minor injuries, when combined, would result in a distinct phenotype, different from single minor injuries or a more severe single injury. There were differences between the single injuries and polytrauma in the maintenance of blood glucose, body temperature, body weight, hepatic mRNA and circulating levels of TNF-α, IL-1β and IL-6, and hepatic ER stress. It has been suggested that models utilizing combinatorial injuries may be needed to more accurately model the human condition. We believe our model is ideal for studying the complex sequelae of polytrauma, which differ from those of single injuries. Insights gained from this model may suggest better treatment options to improve patient outcomes. PMID:26884923
Does mechanism of drug action matter to inform rational polytherapy in epilepsy?
Giussani, Giorgia; Beghi, Ettore
2013-05-01
When monotherapy for epilepsy fails, add-on therapy is an alternative option. There are several possible antiepileptic drug (AED) combinations based on their different and multiple mechanisms of action and pharmacokinetic interactions. However, polytherapy can be defined as "rational" only when the benefits of a drug combination outweigh its harms. In the past 20 years, second-generation AEDs have been marketed, some of which have better-defined mechanisms of action and better pharmacokinetic profiles. The mechanisms of action of AEDs involve, among others, blockade of voltage-gated sodium channels, blockade of voltage-gated calcium channels, activation of the ionotropic GABAA receptor and increase of GABA levels at the synaptic cleft, blockade of glutamate receptors, binding to synaptic vesicle protein 2A, and opening of KCNQ (Kv7) potassium channels. The aim of this review was to examine published reports on AED combinations in animal models and humans, focusing on mechanisms of action and pharmacokinetic interactions. Studies in animals have shown that AED combinations are more effective when drugs with different mechanisms of action are used. The most effective combination paired a drug with a single mechanism of action with another having multiple mechanisms of action. In humans, some combinations of a voltage-gated sodium channel blocker and a drug with multiple mechanisms of action may be synergistic. Future studies are necessary to better define rational combinations and complementary mechanisms of action, considering also pharmacokinetic interactions and measures of toxicity, not only drug efficacy.
Morris, Martina; Leslie-Cook, Ayn; Akom, Eniko; Stephen, Aloo; Sherard, Donna
2014-01-01
We compare estimates of multiple and concurrent sexual partnerships from Demographic and Health Surveys (DHS) with comparable Population Services International (PSI) surveys in four African countries (Kenya, Lesotho, Uganda, Zambia). DHS data produce significantly lower estimates of all indicators for both sexes in all countries. PSI estimates of multiple partnerships are 1.7 times higher [1.4 for men (M), 3.0 for women (W)], cumulative prevalence of concurrency is 2.4 times higher (2.2 M, 2.7 W), the point prevalence of concurrency is 3.5 times higher (3.5 M, 3.3 W), and the fraction of multi-partnered persons who report concurrency last year is 1.4 times higher (1.6 M, 0.9 W). These findings provide strong empirical evidence that DHS surveys systematically underestimate levels of multiple and concurrent partnerships. The underestimates will contaminate both empirical analyses of the link between sexual behavior and HIV infection, and theoretical models for combination prevention that use these data for inputs. PMID:24077973
Collaborative Research: Atmospheric Pressure Microplasma Chemistry-Photon Synergies Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graves, David
Combining low temperature, atmospheric pressure microplasmas with microplasma photon sources greatly expands the range of applications of each. The plasma sources create active chemical species, and these can be activated further by the addition of photons and the associated photochemistry. There are many ways to combine the effects of plasma chemistry and photochemistry, especially if multiple phases are present. The project combines the construction of appropriate experimental test systems, various spectroscopic diagnostics, and mathematical modeling.
GeneSilico protein structure prediction meta-server.
Kurowski, Michal A; Bujnicki, Janusz M
2003-07-01
Rigorous assessments of protein structure prediction have demonstrated that fold recognition methods can identify remote similarities between proteins when standard sequence search methods fail. It has been shown that the accuracy of predictions is improved when refined multiple sequence alignments are used instead of single sequences and if different methods are combined to generate a consensus model. There are several meta-servers available that integrate protein structure predictions performed by various methods, but they do not allow for submission of user-defined multiple sequence alignments and they seldom offer confidentiality of the results. We developed a novel WWW gateway for protein structure prediction, which combines the useful features of other meta-servers available, but with much greater flexibility of the input. The user may submit an amino acid sequence or a multiple sequence alignment to a set of methods for primary, secondary and tertiary structure prediction. Fold-recognition results (target-template alignments) are converted into full-atom 3D models and the quality of these models is uniformly assessed. A consensus between different FR methods is also inferred. The results are conveniently presented on-line on a single web page over a secure, password-protected connection. The GeneSilico protein structure prediction meta-server is freely available for academic users at http://genesilico.pl/meta.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radical basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
Integrating neuroinformatics tools in TheVirtualBrain.
Woodman, M Marmaduke; Pezard, Laurent; Domide, Lia; Knock, Stuart A; Sanz-Leon, Paula; Mersmann, Jochen; McIntosh, Anthony R; Jirsa, Viktor
2014-01-01
TheVirtualBrain (TVB) is a neuroinformatics Python package representing the convergence of clinical, systems, and theoretical neuroscience in the analysis, visualization and modeling of neural and neuroimaging dynamics. TVB is composed of a flexible simulator for neural dynamics measured across scales from local populations to large-scale dynamics measured by electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), and core analytic and visualization functions, all accessible through a web browser user interface. A datatype system modeling neuroscientific data ties together these pieces with persistent data storage, based on a combination of SQL and HDF5. These datatypes combine with adapters allowing TVB to integrate other algorithms or computational systems. TVB provides infrastructure for multiple projects and multiple users, possibly participating under multiple roles. For example, a clinician might import patient data to identify several potential lesion points in the patient's connectome. A modeler, working on the same project, tests these points for viability through whole brain simulation, based on the patient's connectome, and subsequent analysis of dynamical features. TVB also drives research forward: the simulator itself represents the culmination of several simulation frameworks in the modeling literature. The availability of the numerical methods, set of neural mass models and forward solutions allows for the construction of a wide range of brain-scale simulation scenarios. This paper briefly outlines the history and motivation for TVB, describing the framework and simulator, giving usage examples in the web UI and Python scripting.
New generation of elastic network models.
López-Blanco, José Ramón; Chacón, Pablo
2016-04-01
The intrinsic flexibility of proteins and nucleic acids can be grasped from remarkably simple mechanical models of particles connected by springs. In recent decades, Elastic Network Models (ENMs) combined with Normal Mode Analysis have widely confirmed their ability to predict biologically relevant motions of biomolecules and soon became a popular methodology to reveal large-scale dynamics in multiple structural biology scenarios. The simplicity, robustness, low computational cost, and relatively high accuracy are the reasons behind the success of ENMs. This review focuses on recent advances in the development and application of ENMs, paying particular attention to combinations with experimental data. Successful application scenarios include large macromolecular machines, structural refinement, docking, and evolutionary conservation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Psychosocial Intervention for Young Children With Chronic Tics
2018-06-18
Tourette's Syndrome; Tourette's Disorder; Tourette's Disease; Tourette Disorder; Tourette Disease; Tic Disorder, Combined Vocal and Multiple Motor; Multiple Motor and Vocal Tic Disorder, Combined; Gilles de La Tourette's Disease; Gilles de la Tourette Syndrome; Gilles De La Tourette's Syndrome; Combined Vocal and Multiple Motor Tic Disorder; Combined Multiple Motor and Vocal Tic Disorder; Chronic Motor and Vocal Tic Disorder
Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation
Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu
2015-01-01
To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method and enhance the precision and generalizability in the RLG bias compensation model. PMID:26633401
Nigam, Ravi; Schlosser, Ralf W; Lloyd, Lyle L
2006-09-01
Matrix strategies employing parts of speech arranged in systematic language matrices and milieu language teaching strategies have been successfully used to teach word combining skills to children who have cognitive disabilities and some functional speech. The present study investigated the acquisition and generalized production of two-term semantic relationships in a new population using new types of symbols. Three children with cognitive disabilities and little or no functional speech were taught to combine graphic symbols. The matrix strategy and the mand-model procedure were used concomitantly as intervention procedures. A multiple probe design across sets of action-object combinations with generalization probes of untrained combinations was used to teach the production of graphic symbol combinations. Results indicated that two of the three children learned the early syntactic-semantic rule of combining action-object symbols and demonstrated generalization to untrained action-object combinations and generalization across trainers. The results and future directions for research are discussed.
ERIC Educational Resources Information Center
Wang, Ning; Stahl, John
2012-01-01
This article discusses the use of the Many-Facets Rasch Model, via the FACETS computer program (Linacre, 2006a), to scale job/practice analysis survey data as well as to combine multiple rating scales into single composite weights representing the tasks' relative importance. Results from the Many-Facets Rasch Model are compared with those…
Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.
Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S
2002-06-01
Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds, the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perceptual strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as "combination-sensitivity," are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
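The facilitation idea can be illustrated with a toy Hebbian unit: weights on co-occurring harmonic channels strengthen together, and an output threshold then makes the response to the combination supra-additive. This is a hedged sketch under simplified assumptions (binary tone inputs, a single unit, arbitrary channel indices and learning rate), not the authors' full network.

```python
# Toy Hebbian model of combination-sensitivity: a unit learns to respond
# preferentially to co-occurring harmonics (channels for f0 and 2*f0).
import numpy as np

n_channels = 32                      # tonotopic input channels
w = np.full(n_channels, 0.1)         # synaptic weights
eta = 0.05                           # Hebbian learning rate

def tone_input(channels):
    x = np.zeros(n_channels)
    x[list(channels)] = 1.0
    return x

# Training: a harmonic pair (channel 4 ~ f0, channel 8 ~ 2*f0) co-occurs.
for _ in range(200):
    x = tone_input({4, 8})
    y = w @ x                        # postsynaptic activity
    w += eta * y * x                 # Hebb rule: co-active inputs strengthen
    w = np.clip(w, 0.0, 2.0)         # simple saturation

# Facilitation test: with an output threshold, the combined response
# exceeds the sum of the single-tone responses (supra-additivity).
theta = 0.5
resp = lambda x: max(w @ x - theta, 0.0)
print(resp(tone_input({4})), resp(tone_input({8})), resp(tone_input({4, 8})))
```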
A novel visual saliency analysis model based on dynamic multiple feature combination strategy
NASA Astrophysics Data System (ADS)
Lv, Jing; Ye, Qi; Lv, Wen; Zhang, Libao
2017-06-01
The human visual system can quickly focus on a small number of salient objects. This process is known as visual saliency analysis, and these salient objects are called the focus of attention (FOA). The visual saliency analysis mechanism can be used to extract salient regions and analyze the saliency of objects in an image, which saves time and avoids unnecessary computing costs. In this paper, a novel visual saliency analysis model based on a dynamic multiple feature combination strategy is introduced. In the proposed model, we first generate multi-scale feature maps of intensity, color and orientation features using Gaussian pyramids and the center-surround difference. Then, we evaluate the contribution of all feature maps to the saliency map according to the area of salient regions and their average intensity, and attach different weights to different features according to their importance. Finally, we choose the largest salient region generated by the region growing method to perform the evaluation. Experimental results show that the proposed model can not only achieve higher accuracy in saliency map computation compared with other traditional saliency analysis models, but also extract salient regions with arbitrary shapes, which is of great value for image analysis and understanding.
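A minimal sketch of the dynamic weighting step follows. It assumes, as an illustrative stand-in for the paper's exact rule, that each map's weight is proportional to the area of its strong responses times their mean intensity; thresholding at the map mean is our assumption.

```python
# Sketch: weight each feature map by salient-region area x mean intensity,
# then combine the maps into a single saliency map.
import numpy as np

def dynamic_combine(feature_maps):
    weights = []
    for fmap in feature_maps:
        mask = fmap > fmap.mean()                  # crude salient-region mask
        area = mask.sum()
        strength = fmap[mask].mean() if area else 0.0
        weights.append(area * strength)            # contribution score
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum() + 1e-12               # normalize to sum to 1
    return sum(w * f for w, f in zip(weights, feature_maps))

rng = np.random.default_rng(1)
intensity, color, orientation = (rng.random((64, 64)) for _ in range(3))
saliency = dynamic_combine([intensity, color, orientation])
```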
Language Model Combination and Adaptation Using Weighted Finite State Transducers
NASA Technical Reports Server (NTRS)
Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.
2010-01-01
In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can be used either to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
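The arithmetic underlying such a combination is linear interpolation of the component LM probabilities; the WFST machinery realizes this with weighted union and composition, which is omitted here. A hedged sketch with made-up probabilities and weights:

```python
# Linear interpolation of component n-gram language models.
# P(word | history) = sum_i lambda_i * P_i(word | history)
def interpolate(lms, lambdas, word, history):
    # 1e-9 is an illustrative floor for n-grams unseen by a component LM
    return sum(l * lm.get((history, word), 1e-9) for lm, l in zip(lms, lambdas))

lm_news = {(("the",), "president"): 0.02}    # toy component LMs
lm_sport = {(("the",), "president"): 0.001}
p = interpolate([lm_news, lm_sport], [0.7, 0.3], "president", ("the",))
print(p)
```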
NASA Technical Reports Server (NTRS)
Berrier, B. L.; Leavitt, L. D.; Bangert, L. S.
1985-01-01
An investigation has been conducted in the Langley 16 Foot Transonic Tunnel to determine the weight flow measurement characteristics of a multiple critical Venturi system and the nozzle discharge coefficient characteristics of a series of convergent calibration nozzles. The effects on model discharge coefficient of nozzle throat area, model choke plate open area, nozzle pressure ratio, jet total temperature, and number and combination of operating Venturis were investigated. Tests were conducted at static conditions (tunnel wind off) at nozzle pressure ratios from 1.3 to 7.0.
Fish, Brian L; Gao, Feng; Narayanan, Jayashree; Bergom, Carmen; Jacobs, Elizabeth R; Cohen, Eric P; Moulder, John E; Orschell, Christie M; Medhora, Meetha
2016-11-01
The NIAID Radiation and Nuclear Countermeasures Program is developing medical agents to mitigate the acute and delayed effects of radiation that may occur from a radionuclear attack or accident. To date, most such medical countermeasures have been developed for single organ injuries. Angiotensin converting enzyme (ACE) inhibitors have been used to mitigate radiation-induced lung, skin, brain, and renal injuries in rats. ACE inhibitors have also been reported to decrease normal tissue complications in radiation oncology patients. In the current study, the authors have developed a rat partial-body irradiation (leg-out PBI) model with minimal bone marrow sparing (one leg shielded) that results in acute and late injuries to multiple organs. In this model, the ACE inhibitor lisinopril (at ~24 mg m⁻² d⁻¹ started orally in the drinking water at 7 d after irradiation and continued to ≥150 d) mitigated late effects in the lungs and kidneys after 12.5-Gy leg-out PBI. Also in this model, a short course of saline hydration and antibiotics mitigated acute radiation syndrome following doses as high as 13 Gy. Combining this supportive care with the lisinopril regimen mitigated overall morbidity for up to 150 d after 13-Gy leg-out PBI. Furthermore, lisinopril was an effective mitigator in the presence of the growth factor G-CSF (100 μg kg⁻¹ d⁻¹ from days 1-14), which is FDA-approved for use in a radionuclear event. In summary, by combining lisinopril (FDA-approved for other indications) with hydration and antibiotics, acute and delayed radiation injuries in multiple organs were mitigated.
NASA Astrophysics Data System (ADS)
Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.
2013-06-01
This study encompasses columnar ozone modelling in peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the entire period 2003-2008, was employed to develop models to predict the value of columnar ozone (O3) in the study area. The combined method, based on multiple regression coupled with principal component analysis (PCA) modelling, was used to predict columnar ozone and to improve the prediction accuracy. Separate analyses were carried out for the north east monsoon (NEM) and south west monsoon (SWM) seasons. O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, whereas it was positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM season periods. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameters as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to acquire subsets of the predictor variables to be included in the linear regression model. It was found that an increase in columnar O3 is associated with an increase in the values of AST, SSKT, AT, and CO and with a drop in the levels of CH4, H2Ovapour, RH, and MSP. Fitting the best models for columnar O3 using eight of the independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables that appeared in both regression equations were SSKT, CH4 and RH, and the principal precursor of columnar O3 in both the NEM and SWM seasons was SSKT.
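The combined PCA-plus-regression idea can be sketched as below; plain PCA stands in for the varimax-rotated component selection described above, and the synthetic predictors and targets are illustrative assumptions.

```python
# Sketch: reduce collinear atmospheric predictors with PCA, then fit a
# multiple linear regression on the retained components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))          # 8 predictors (AST, CO, CH4, ...)
o3 = X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(200)  # toy target

model = make_pipeline(PCA(n_components=3), LinearRegression())
model.fit(X, o3)
print("R^2:", model.score(X, o3))
```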
NASA Astrophysics Data System (ADS)
van Soesbergen, A. J. J.; Mulligan, M.
2014-01-01
This paper describes the application of WaterWorld (www.policysupport.org/waterworld) to the Peruvian Amazon, an area that is increasingly under pressure from deforestation and water pollution as a result of population growth, rural-to-urban migration and oil and gas extraction, potentially impacting both water quantity and water quality. By applying single and combined plausible scenarios of climate change, deforestation around existing and planned roads, population growth and rural-urban migration, mining and oil and gas exploitation, we explore the potential combined impacts of these multiple changes on water resources in the Peruvian Amazon.
Briefing paper : toward a best practice model for managed lanes in Texas.
DOT National Transportation Integrated Search
2013-09-01
Over the past two decades, agencies have increasingly implemented managed lanes (MLs) to mitigate growing urban traffic congestion in the United States. Multiple operating projects representing a combination of HOV-to-HOT conversions a...
Selected National Cancer Institute Breast Cancer Research Topics | NIH MedlinePlus the Magazine
... effective treatments for these women. The Integrative Cancer Biology Program combines experimental and clinical research with mathematical modeling to gain new insights into cancer biology, prevention, diagnostics, and treatments. Multiple centers are developing ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Middleton, Richard Stephen
2017-05-22
This presentation is part of the US-China Clean Coal project and describes the impact of power plant cycling, techno-economic modeling of combined IGCC and CCS, integrated capacity generation decision making for power utilities, and a new decision support tool for integrated assessment of CCUS.
ERIC Educational Resources Information Center
Edgecombe, Nikki; Jaggars, Shanna Smith; Baker, Elaine DeLott; Bailey, Thomas
2013-01-01
Originally designed for students who test into at least two levels of developmental education in a particular subject area, FastStart is a compressed course program model launched in 2005 at the Community College of Denver (CCD). The program combines multiple semester-length courses into a single intensive semester, while providing case…
Video Modeling by Experts with Video Feedback to Enhance Gymnastics Skills
ERIC Educational Resources Information Center
Boyer, Eva; Miltenberger, Raymond G.; Batsche, Catherine; Fogel, Victoria
2009-01-01
The effects of combining video modeling by experts with video feedback were analyzed with 4 female competitive gymnasts (7 to 10 years old) in a multiple baseline design across behaviors. During the intervention, after the gymnast performed a specific gymnastics skill, she viewed a video segment showing an expert gymnast performing the same skill…
Data Modeling Challenges of Advanced Interoperability.
Blobel, Bernd; Oemig, Frank; Ruotsalainen, Pekka
2018-01-01
Progressive health paradigms, involving many different disciplines and combining multiple policy domains, require advanced interoperability solutions. This results in special challenges for modeling health systems. The paper discusses classification systems for data models and enterprise business architectures and compares them with the ISO Reference Architecture. On that basis, existing definitions, specifications and standards of data models for interoperability are evaluated and their limitations are discussed. Amendments to correctly use those models and to better meet the aforementioned challenges are offered.
Azimuthal Anisotropy in U+U and Au+Au Collisions at RHIC
NASA Astrophysics Data System (ADS)
Adamczyk, L.; Adkins, J. K.; Agakishiev, G.; Aggarwal, M. M.; Ahammed, Z.; Alekseev, I.; Alford, J.; Aparin, A.; Arkhipkin, D.; Aschenauer, E. C.; Averichev, G. S.; Banerjee, A.; Bellwied, R.; Bhasin, A.; Bhati, A. K.; Bhattarai, P.; Bielcik, J.; Bielcikova, J.; Bland, L. C.; Bordyuzhin, I. G.; Bouchet, J.; Brandin, A. V.; Bunzarov, I.; Burton, T. P.; Butterworth, J.; Caines, H.; Calderón de la Barca Sánchez, M.; Campbell, J. M.; Cebra, D.; Cervantes, M. C.; Chakaberia, I.; Chaloupka, P.; Chang, Z.; Chattopadhyay, S.; Chen, J. H.; Chen, X.; Cheng, J.; Cherney, M.; Christie, W.; Contin, G.; Crawford, H. J.; Das, S.; De Silva, L. C.; Debbe, R. R.; Dedovich, T. G.; Deng, J.; Derevschikov, A. A.; di Ruzza, B.; Didenko, L.; Dilks, C.; Dong, X.; Drachenberg, J. L.; Draper, J. E.; Du, C. M.; Dunkelberger, L. E.; Dunlop, J. C.; Efimov, L. G.; Engelage, J.; Eppley, G.; Esha, R.; Evdokimov, O.; Eyser, O.; Fatemi, R.; Fazio, S.; Federic, P.; Fedorisin, J.; Feng, Z.; Filip, P.; Fisyak, Y.; Flores, C. E.; Fulek, L.; Gagliardi, C. A.; Garand, D.; Geurts, F.; Gibson, A.; Girard, M.; Greiner, L.; Grosnick, D.; Gunarathne, D. S.; Guo, Y.; Gupta, S.; Gupta, A.; Guryn, W.; Hamad, A.; Hamed, A.; Haque, R.; Harris, J. W.; He, L.; Heppelmann, S.; Heppelmann, S.; Hirsch, A.; Hoffmann, G. W.; Hofman, D. J.; Horvat, S.; Huang, H. Z.; Huang, B.; Huang, X.; Huck, P.; Humanic, T. J.; Igo, G.; Jacobs, W. W.; Jang, H.; Jiang, K.; Judd, E. G.; Kabana, S.; Kalinkin, D.; Kang, K.; Kauder, K.; Ke, H. W.; Keane, D.; Kechechyan, A.; Khan, Z. H.; Kikola, D. P.; Kisel, I.; Kisiel, A.; Koetke, D. D.; Kollegger, T.; Kosarzewski, L. K.; Kotchenda, L.; Kraishan, A. F.; Kravtsov, P.; Krueger, K.; Kulakov, I.; Kumar, L.; Kycia, R. A.; Lamont, M. A. C.; Landgraf, J. M.; Landry, K. D.; Lauret, J.; Lebedev, A.; Lednicky, R.; Lee, J. H.; Li, W.; Li, Y.; Li, C.; Li, Z. M.; Li, X.; Li, X.; Lisa, M. A.; Liu, F.; Ljubicic, T.; Llope, W. J.; Lomnitz, M.; Longacre, R. S.; Luo, X.; Ma, L.; Ma, R.; Ma, Y. G.; Ma, G. L.; Magdy, N.; Majka, R.; Manion, A.; Margetis, S.; Markert, C.; Masui, H.; Matis, H. S.; McDonald, D.; Meehan, K.; Minaev, N. G.; Mioduszewski, S.; Mohanty, B.; Mondal, M. M.; Morozov, D. A.; Mustafa, M. K.; Nandi, B. K.; Nasim, Md.; Nayak, T. K.; Nigmatkulov, G.; Nogach, L. V.; Noh, S. Y.; Novak, J.; Nurushev, S. B.; Odyniec, G.; Ogawa, A.; Oh, K.; Okorokov, V.; Olvitt, D. L.; Page, B. S.; Pak, R.; Pan, Y. X.; Pandit, Y.; Panebratsev, Y.; Pawlik, B.; Pei, H.; Perkins, C.; Peterson, A.; Pile, P.; Planinic, M.; Pluta, J.; Poljak, N.; Poniatowska, K.; Porter, J.; Posik, M.; Poskanzer, A. M.; Pruthi, N. K.; Putschke, J.; Qiu, H.; Quintero, A.; Ramachandran, S.; Raniwala, S.; Raniwala, R.; Ray, R. L.; Ritter, H. G.; Roberts, J. B.; Rogachevskiy, O. V.; Romero, J. L.; Roy, A.; Ruan, L.; Rusnak, J.; Rusnakova, O.; Sahoo, N. R.; Sahu, P. K.; Sakrejda, I.; Salur, S.; Sandweiss, J.; Sarkar, A.; Schambach, J.; Scharenberg, R. P.; Schmah, A. M.; Schmidke, W. B.; Schmitz, N.; Seger, J.; Seyboth, P.; Shah, N.; Shahaliev, E.; Shanmuganathan, P. V.; Shao, M.; Sharma, B.; Sharma, M. K.; Shen, W. Q.; Shi, S. S.; Shou, Q. Y.; Sichtermann, E. P.; Sikora, R.; Simko, M.; Skoby, M. J.; Smirnov, D.; Smirnov, N.; Song, L.; Sorensen, P.; Spinka, H. M.; Srivastava, B.; Stanislaus, T. D. S.; Stepanov, M.; Stock, R.; Strikhanov, M.; Stringfellow, B.; Sumbera, M.; Summa, B. J.; Sun, X.; Sun, X. M.; Sun, Z.; Sun, Y.; Surrow, B.; Svirida, D. N.; Szelezniak, M. A.; Tang, Z.; Tang, A. H.; Tarnowsky, T.; Tawfik, A. N.; Thomas, J. H.; Timmins, A. 
R.; Tlusty, D.; Tokarev, M.; Trentalange, S.; Tribble, R. E.; Tribedy, P.; Tripathy, S. K.; Trzeciak, B. A.; Tsai, O. D.; Ullrich, T.; Underwood, D. G.; Upsal, I.; Van Buren, G.; van Nieuwenhuizen, G.; Vandenbroucke, M.; Varma, R.; Vasiliev, A. N.; Vertesi, R.; Videbaek, F.; Viyogi, Y. P.; Vokal, S.; Voloshin, S. A.; Vossen, A.; Wang, F.; Wang, Y.; Wang, H.; Wang, J. S.; Wang, Y.; Wang, G.; Webb, G.; Webb, J. C.; Wen, L.; Westfall, G. D.; Wieman, H.; Wissink, S. W.; Witt, R.; Wu, Y. F.; Xiao, Z.; Xie, W.; Xin, K.; Xu, Y. F.; Xu, N.; Xu, Z.; Xu, Q. H.; Xu, H.; Yang, Y.; Yang, Y.; Yang, C.; Yang, S.; Yang, Q.; Ye, Z.; Yepes, P.; Yi, L.; Yip, K.; Yoo, I.-K.; Yu, N.; Zbroszczyk, H.; Zha, W.; Zhang, X. P.; Zhang, J. B.; Zhang, J.; Zhang, Z.; Zhang, S.; Zhang, Y.; Zhang, J. L.; Zhao, F.; Zhao, J.; Zhong, C.; Zhou, L.; Zhu, X.; Zoulkarneeva, Y.; Zyzak, M.; STAR Collaboration
2015-11-01
Collisions between prolate uranium nuclei are used to study how particle production and azimuthal anisotropies depend on initial geometry in heavy-ion collisions. We report the two- and four-particle cumulants, v2{2} and v2{4}, for charged hadrons from U+U collisions at √sNN = 193 GeV and Au+Au collisions at √sNN = 200 GeV. Nearly fully overlapping collisions are selected based on the energy deposited by spectators in zero degree calorimeters (ZDCs). Within this sample, the observed dependence of v2{2} on multiplicity demonstrates that ZDC information combined with multiplicity can preferentially select different overlap configurations in U+U collisions. We also show that v2 vs multiplicity can be better described by models, such as gluon saturation or quark participant models, that eliminate the dependence of the multiplicity on the number of binary nucleon-nucleon collisions.
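For readers unfamiliar with multiparticle cumulants, the sketch below computes v2{2} and v2{4} with the standard direct Q-cumulant formulas (Bilandzic, Snellings, and Voloshin); the toy events, multiplicities, and injected v2 are illustrative assumptions, and event weighting and nonuniform-acceptance corrections are omitted.

```python
# Direct Q-cumulant estimates of v2{2} and v2{4} from per-event flow vectors.
import numpy as np

def flow_cumulants(events):
    two, four = [], []
    for phis in events:
        M = len(phis)
        Q2 = np.exp(2j * phis).sum()          # second-harmonic flow vector
        Q4 = np.exp(4j * phis).sum()
        two.append((abs(Q2) ** 2 - M) / (M * (M - 1)))
        num = (abs(Q2) ** 4 + abs(Q4) ** 2
               - 2 * (Q4 * np.conj(Q2) ** 2).real
               - 2 * (2 * (M - 2) * abs(Q2) ** 2 - M * (M - 3)))
        four.append(num / (M * (M - 1) * (M - 2) * (M - 3)))
    c2 = np.mean(two)                          # c2{2}
    c4 = np.mean(four) - 2 * c2 ** 2           # c2{4}
    return np.sqrt(c2), (-c4) ** 0.25 if c4 < 0 else np.nan

# Toy events: angles drawn from dN/dphi ~ 1 + 2 v2 cos(2 phi) by rejection.
rng = np.random.default_rng(2)
v2_true, events = 0.06, []
for _ in range(500):
    M = 400
    phi = rng.uniform(0, 2 * np.pi, 4 * M)
    keep = rng.uniform(0, 1 + 2 * v2_true, phi.size) < 1 + 2 * v2_true * np.cos(2 * phi)
    events.append(phi[keep][:M])
print(flow_cumulants(events))   # both estimates should be close to 0.06
```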
NASA Astrophysics Data System (ADS)
Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.
2015-01-01
The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition or handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features using a BLSTM-CTC architecture. We explore not only low-level combination (feature space combination) but also high-level combination (decoding combination) and mid-level combination (internal system representation combination). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.
Variable selection under multiple imputation using the bootstrap in a prognostic study
Heymans, Martijn W; van Buuren, Stef; Knol, Dirk L; van Mechelen, Willem; de Vet, Henrica CW
2007-01-01
Background: Missing data is a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty and allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variable selection. Method: In our prospective cohort study we merged data from three different randomized controlled trials (RCTs) to assess prognostic variables for chronicity of low back pain. Among the outcome and prognostic variables, data were missing in the range of 0 to 48.1%. We used four methods to investigate the influence of sampling and imputation variation, respectively: MI only, bootstrap only, and two methods that combine MI and bootstrapping. Variables were selected based on the inclusion frequency of each prognostic variable, i.e. the proportion of times that the variable appeared in the model. The discriminative and calibrative abilities of prognostic models developed by the four methods were assessed at different inclusion levels. Results: We found that the effect of imputation variation on the inclusion frequency was larger than the effect of sampling variation. When MI and bootstrapping were combined over the range of 0% (full model) to 90% variable selection, bootstrap-corrected c-index values of 0.70 to 0.71 and slope values of 0.64 to 0.86 were found. Conclusion: We recommend accounting for both imputation and sampling variation in data sets with missing values. The new procedure of combining MI with bootstrapping for variable selection results in multivariable prognostic models with good performance and is therefore attractive to apply to data sets with missing values. PMID:17629912
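The inclusion-frequency idea can be sketched as follows: for every (imputation x bootstrap) replicate, fit a selection procedure and count how often each variable survives. Lasso stands in for the paper's stepwise selection, the data are synthetic, and all settings are illustrative assumptions.

```python
# Sketch: inclusion frequency of predictors under combined multiple
# imputation and bootstrap resampling.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.standard_normal((150, 6))
y = X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(150)
X[rng.random(X.shape) < 0.2] = np.nan          # introduce ~20% missingness

counts = np.zeros(X.shape[1])
n_imp, n_boot = 5, 20
for m in range(n_imp):
    # sample_posterior=True draws stochastic imputations (imputation variation)
    Xi = IterativeImputer(sample_posterior=True, random_state=m).fit_transform(X)
    for b in range(n_boot):
        idx = rng.integers(0, len(y), len(y))  # bootstrap resample (sampling variation)
        coef = Lasso(alpha=0.1).fit(Xi[idx], y[idx]).coef_
        counts += coef != 0                    # variable "appeared in the model"

inclusion_frequency = counts / (n_imp * n_boot)
print(inclusion_frequency)
```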
Akkermans, Simen; Noriega Fernandez, Estefanía; Logist, Filip; Van Impe, Jan F
2017-01-02
Efficient modelling of the microbial growth rate can be performed by combining the effects of individual conditions in a multiplicative way, known as the gamma concept. However, several studies have illustrated that interactions between different effects should be taken into account under stressing environmental conditions to achieve a more accurate description of the growth rate. In this research, a novel approach for modeling the interactions between the effects of environmental conditions on the microbial growth rate is introduced. As a case study, the effect of temperature and pH on the growth rate of Escherichia coli K12 is modeled, based on a set of computer-controlled bioreactor experiments performed under static environmental conditions. The models compared in this case study are the gamma model, the model of Augustin and Carlier (2000), the model of Le Marc et al. (2002) and the novel multiplicative interaction model, developed in this paper. This novel model enables the separate identification of interactions between the effects of two (or more) environmental conditions. The comparison of these models focuses on accuracy, interpretability and compatibility with efficient modeling approaches. Moreover, for the separate effects of temperature and pH, new cardinal parameter model structures are proposed. The novel interaction model contributes to a generic modeling approach, resulting in predictive models that are (i) accurate, (ii) easily identifiable with a limited workload, (iii) modular, and (iv) biologically interpretable. Copyright © 2016. Published by Elsevier B.V.
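As background for the gamma concept, the sketch below combines classical cardinal parameter models multiplicatively: the Rosso-type cardinal temperature model with inflection (CTMI) and the cardinal pH model (CPM). These are the standard published structures, not the new ones proposed in the paper, no interaction term is included, and all parameter values are illustrative.

```python
# Gamma concept: mu_max(T, pH) = mu_opt * gamma(T) * gamma(pH),
# with each gamma in [0, 1] and equal to 1 at the optimum.

def gamma_T(T, Tmin, Topt, Tmax):
    # Rosso CTMI; defined as 0 outside (Tmin, Tmax)
    if not (Tmin < T < Tmax):
        return 0.0
    num = (T - Tmax) * (T - Tmin) ** 2
    den = (Topt - Tmin) * ((Topt - Tmin) * (T - Topt)
                           - (Topt - Tmax) * (Topt + Tmin - 2 * T))
    return num / den

def gamma_pH(pH, pHmin, pHopt, pHmax):
    # Cardinal pH model; defined as 0 outside (pHmin, pHmax)
    if not (pHmin < pH < pHmax):
        return 0.0
    num = (pH - pHmin) * (pH - pHmax)
    return num / (num - (pH - pHopt) ** 2)

def mu_max(T, pH, mu_opt=2.0):   # cardinal values below are illustrative
    return mu_opt * gamma_T(T, 10.0, 37.0, 47.0) * gamma_pH(pH, 4.0, 7.0, 9.5)

print(mu_max(30.0, 6.5))   # growth rate away from the T and pH optima
```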
Li, YuHui; Jin, FeiTeng
2017-01-01
The inversion design approach is a very useful tool for complex multiple-input multiple-output nonlinear systems, such as airplane and spacecraft models, to achieve the decoupling control goal. In this work, a flight control law is proposed using the neural-based inversion design method associated with nonlinear compensation for a general longitudinal model of an airplane. First, the nonlinear mathematical model is converted to an equivalent linear model based on feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, the inversion control law, combined with the neural network and a nonlinear compensation term, is presented to improve the transient performance and attenuate the uncertain effects of both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680
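The feedback linearization step can be illustrated on a scalar system x_dot = f(x) + g(x)u: choosing u = (v - f(x)) / g(x) renders the closed loop exactly linear, x_dot = v, after which v = -kx stabilizes it. The toy dynamics below are assumptions for illustration, not the paper's aircraft model, and the neural compensation for model error is omitted.

```python
# Sketch: feedback linearization (dynamic inversion) on a toy scalar plant.
f = lambda x: x ** 2                  # toy nonlinear drift
g = lambda x: 1.0 + 0.1 * x ** 2      # toy input gain (nonzero everywhere)
k = 2.0                               # gain for the equivalent linear system

x, dt = 1.5, 0.01
for _ in range(1000):
    v = -k * x                        # desired linear dynamics: x_dot = -k x
    u = (v - f(x)) / g(x)             # inversion control law cancels f and g
    x += dt * (f(x) + g(x) * u)       # plant integration (forward Euler)
print(abs(x) < 1e-3)                  # state driven near the origin
```

In practice the cancellation is imperfect because f and g are only models of the true plant; that mismatch is what motivates the neural network and nonlinear compensation terms in the abstract above.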
Liu, Fei; Feng, Lei; Lou, Bing-gan; Sun, Guang-ming; Wang, Lian-ping; He, Yong
2010-07-01
Combinational-stimulated bands were used to develop linear and nonlinear calibrations for the early detection of Sclerotinia in oilseed rape (Brassica napus L.). Eighty healthy and 100 Sclerotinia-infected leaf samples were scanned, and different preprocessing methods combined with the successive projections algorithm (SPA) were applied to develop partial least squares (PLS) discriminant models, multiple linear regression (MLR) models and least squares-support vector machine (LS-SVM) models. The results indicated that the optimal full-spectrum PLS models were achieved by direct orthogonal signal correction (DOSC), then de-trending and raw spectra, with correct recognition ratios of 100%, 95.7% and 95.7%, respectively. When using combinational-stimulated bands, the optimal linear models were SPA-MLR (DOSC) and SPA-PLS (DOSC), with correct recognition ratios of 100%. All SPA-LSSVM models using DOSC, de-trending and raw spectra achieved perfect results with recognition of 100%. The overall results demonstrated that it is feasible to use combinational-stimulated bands for the early detection of Sclerotinia in oilseed rape, and that DOSC-SPA is a powerful method for informative wavelength selection. This method offers a new approach to early detection and to portable monitoring instruments for Sclerotinia.
Technical note: Combining quantile forecasts and predictive distributions of streamflows
NASA Astrophysics Data System (ADS)
Bogner, Konrad; Liechti, Katharina; Zappa, Massimiliano
2017-11-01
The enhanced availability of many different hydro-meteorological modelling and forecasting systems raises the issue of how to optimally combine this wealth of information. In particular, the use of deterministic and probabilistic forecasts with sometimes widely divergent predictions of future streamflow makes it even more complicated for decision makers to sift out the relevant information. In this study, multiple sources of streamflow forecast information are aggregated based on several different predictive distributions and quantile forecasts. For this combination, the Bayesian model averaging (BMA) approach, the non-homogeneous Gaussian regression (NGR), also known as the ensemble model output statistics (EMOS) technique, and a novel method called Beta-transformed linear pooling (BLP) are applied. With the help of the quantile score (QS) and the continuous ranked probability score (CRPS), the combination results for the Sihl River in Switzerland, with about 5 years of forecast data, are compared and the differences between the raw and optimally combined forecasts are highlighted. The results demonstrate the importance of applying proper forecast combination methods for decision makers in the field of flood and water resource management.
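A hedged sketch of the evaluation side: the pinball (quantile) score compares raw and combined quantile forecasts. Simple quantile averaging (Vincentization) stands in here for the BMA/NGR/BLP combinations named above, and the streamflow numbers are invented for illustration.

```python
# Quantile (pinball) score for raw vs. pooled quantile forecasts.
import numpy as np

def quantile_score(q_pred, tau, y_obs):
    # QS_tau = mean of tau*(y-q) if y >= q, else (tau-1)*(y-q)
    u = y_obs - q_pred
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

taus = np.array([0.1, 0.5, 0.9])
model_a = np.array([80.0, 100.0, 130.0])    # illustrative streamflow quantiles
model_b = np.array([70.0, 95.0, 120.0])
combined = 0.5 * (model_a + model_b)        # quantile averaging (Vincentization)

y = 105.0                                   # observed streamflow
for name, q in [("A", model_a), ("B", model_b), ("combined", combined)]:
    qs = np.mean([quantile_score(qi, t, y) for qi, t in zip(q, taus)])
    print(name, qs)                         # lower is better
```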
NASA Astrophysics Data System (ADS)
Schenke, Björn; Tribedy, Prithwish; Venugopalan, Raju
2012-09-01
The event-by-event multiplicity distribution, the energy densities and the energy-density-weighted eccentricity moments εn (up to n=6) at early times in heavy-ion collisions at both the BNL Relativistic Heavy Ion Collider (RHIC; √sNN = 200 GeV) and the CERN Large Hadron Collider (LHC; √sNN = 2.76 TeV) are computed in the IP-Glasma model. This framework combines the impact parameter dependent saturation model (IP-Sat) for nucleon parton distributions (constrained by HERA deeply inelastic scattering data) with an event-by-event classical Yang-Mills description of early-time gluon fields in heavy-ion collisions. The model produces multiplicity distributions that are convolutions of negative binomial distributions without further assumptions or parameters. In the limit of large dense systems, the n-particle gluon distribution predicted by the Glasma flux-tube model is demonstrated to be nonperturbatively robust. In the general case, the effect of additional geometrical fluctuations is quantified. The eccentricity moments are compared to the MC-KLN model; a noteworthy feature is that fluctuation-dominated odd moments are consistently larger than in the MC-KLN model.
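The "convolution of negative binomials" statement can be made concrete numerically: summing independent NBD-distributed contributions corresponds to convolving their probability mass functions. The per-tube mean, size parameter, and tube count below are illustrative assumptions, not IP-Glasma outputs.

```python
# Sketch: build a multiplicity distribution by convolving NBD PMFs.
import numpy as np
from scipy.stats import nbinom

n_max = 400
grid = np.arange(n_max)

# One NBD per "flux tube": mean mu, size k (variance = mu + mu^2 / k).
mu, k, n_tubes = 3.0, 2.0, 50
p = k / (k + mu)                       # scipy parameterization
pmf = nbinom.pmf(grid, k, p)

total = pmf.copy()
for _ in range(n_tubes - 1):           # repeated convolution = sum of NBDs
    total = np.convolve(total, pmf)[:n_max]

print(grid[np.argmax(total)])          # mode of the combined distribution (~ n_tubes * mu)
```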
NASA Astrophysics Data System (ADS)
Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.
2017-12-01
This study develops an innovative calibration method for regional groundwater modeling using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial patterns of the multiple recharges. The initial guesses of the hydrogeological parameters are assigned according to in-situ pumping experiments. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial guesses or adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of the recharges and parameters are adjusted, and the iterative procedure is repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of the Ming-Chu Basin, Taiwan. The study period is from January 1 to December 2, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. This indicates that the iterative approach using EOF techniques can capture the groundwater flow tendency and detect the correction vectors for the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
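The EOF machinery itself reduces to a singular value decomposition of the storage-anomaly matrix: the right singular vectors are the spatial patterns (EOFs) and the scaled left singular vectors are the expansion coefficients. The sketch below uses synthetic anomalies in place of observed-minus-simulated storage hydrographs.

```python
# Sketch: EOF decomposition of storage hydrographs via SVD.
import numpy as np

rng = np.random.default_rng(3)
n_time, n_wells = 365, 12
anomaly = rng.standard_normal((n_time, n_wells))   # toy storage anomalies
anomaly -= anomaly.mean(axis=0)                    # remove the temporal mean

U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
eofs = Vt                                          # rows: spatial patterns (EOFs)
pcs = U * s                                        # columns: expansion coefficients
explained = s ** 2 / np.sum(s ** 2)                # fraction of variance per mode
print("variance explained by first 3 EOFs:", explained[:3].sum())
```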
Enhancing the Modeling of PFOA Pharmacokinetics with Bayesian Analysis
The level of detail sufficient to describe the pharmacokinetics (PK) of perfluorooctanoic acid (PFOA) and the methods necessary to combine information from multiple data sets are both subjects of ongoing investigation. Bayesian analysis provides tools to accommodate these goals. We exa...
Plessis, Anne; Hafemeister, Christoph; Wilkins, Olivia; Gonzaga, Zennia Jean; Meyer, Rachel Sarah; Pires, Inês; Müller, Christian; Septiningsih, Endang M; Bonneau, Richard; Purugganan, Michael
2015-01-01
Plants rely on transcriptional dynamics to respond to multiple climatic fluctuations and contexts in nature. We analyzed the genome-wide gene expression patterns of rice (Oryza sativa) growing in rainfed and irrigated fields during two distinct tropical seasons and determined simple linear models that relate transcriptomic variation to climatic fluctuations. These models combine multiple environmental parameters to account for patterns of expression in the field of co-expressed gene clusters. We examined the similarities of our environmental models between tropical and temperate field conditions, using previously published data. We found that field type and macroclimate had broad impacts on transcriptional responses to environmental fluctuations, especially for genes involved in photosynthesis and development. Nevertheless, variation in solar radiation and temperature at the timescale of hours had reproducible effects across environmental contexts. These results provide a basis for broad-based predictive modeling of plant gene expression in the field. DOI: http://dx.doi.org/10.7554/eLife.08411.001 PMID:26609814
Application-Driven No-Reference Quality Assessment for Dermoscopy Images With Multiple Distortions.
Xie, Fengying; Lu, Yanan; Bovik, Alan C; Jiang, Zhiguo; Meng, Rusong
2016-06-01
Dermoscopy images often suffer from blur and uneven illumination distortions that occur during acquisition, which can adversely influence consequent automatic image analysis results on potential lesion objects. The purpose of this paper is to deploy an algorithm that can automatically assess the quality of dermoscopy images. Such an algorithm could be used to direct image recapture or correction. We describe an application-driven no-reference image quality assessment (IQA) model for dermoscopy images affected by possibly multiple distortions. For this purpose, we created a multiple distortion dataset of dermoscopy images impaired by varying degrees of blur and uneven illumination. The basis of this model is two single distortion IQA metrics that are sensitive to blur and uneven illumination, respectively. The outputs of these two metrics are combined to predict the quality of multiply distorted dermoscopy images using a fuzzy neural network. Unlike traditional IQA algorithms, which use human subjective score as ground truth, here ground truth is driven by the application, and generated according to the degree of influence of the distortions on lesion analysis. The experimental results reveal that the proposed model delivers accurate and stable quality prediction results for dermoscopy images impaired by multiple distortions. The proposed model is effective for quality assessment of multiple distorted dermoscopy images. An application-driven concept for IQA is introduced, and at the same time, a solution framework for the IQA of multiple distortions is proposed.
Validation of a Sensor-Driven Modeling Paradigm for Multiple Source Reconstruction with FFT-07 Data
2009-05-01
The report describes operational warning and reporting (information) systems that combine automated data acquisition, analysis, source reconstruction, and display and distribution of products, and the incorporation of this operational capability into an integrative multiscale urban modeling system implemented in a computational framework.
Sequential Exposure of Bortezomib and Vorinostat is Synergistic in Multiple Myeloma Cells
Nanavati, Charvi; Mager, Donald E.
2018-01-01
Purpose: To examine the combination of bortezomib and vorinostat in multiple myeloma cells (U266) and xenografts, and to assess the nature of their potential interactions with semi-mechanistic pharmacodynamic models and biomarkers. Methods: U266 proliferation was examined for a range of bortezomib and vorinostat exposure times and concentrations (alone and in combination). A non-competitive interaction model was used with interaction parameters that reflect the nature of drug interactions after simultaneous and sequential exposures. p21 and cleaved PARP were measured using immunoblotting to assess critical biomarker dynamics. For xenografts, data were extracted from the literature and modeled with a PK/PD model with an interaction parameter. Results: Estimated model parameters for simultaneous in vitro and xenograft treatments suggested additive drug effects. The sequence of bortezomib preincubation for 24 hours, followed by vorinostat for 24 hours, resulted in an estimated interaction term significantly less than 1, suggesting synergistic effects. p21 and cleaved PARP were also up-regulated the most in this sequence. Conclusions: Semi-mechanistic pharmacodynamic modeling suggests synergistic pharmacodynamic interactions for the sequential administration of bortezomib followed by vorinostat. Increased p21 and cleaved PARP expression can potentially explain mechanisms of their enhanced effects, which require further PK/PD systems analysis to suggest an optimal dosing regimen. PMID:28101809
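The abstract's interaction term can be pictured with a toy response-surface model in which a parameter psi rescales one drug's potency in combination, so psi = 1 is additive, psi < 1 synergistic and psi > 1 antagonistic; the functional form and all values below are illustrative, not the paper's fitted model:

```python
def combined_inhibition(c1, c2, ic50_1=5.0, ic50_2=1.0, psi=1.0):
    """Fractional inhibition for two drugs at concentrations c1 and c2;
    psi shifts the apparent potency of drug 2 in the combination."""
    x = c1 / ic50_1 + c2 / (psi * ic50_2)
    return x / (1.0 + x)

# psi < 1: the same concentrations give more inhibition than additivity predicts
for psi in (1.0, 0.5):
    print(psi, combined_inhibition(c1=2.0, c2=0.5, psi=psi))
```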
Galic, Nika; Sullivan, Lauren L; Grimm, Volker; Forbes, Valery E
2018-04-01
Ecosystems are exposed to multiple stressors which can compromise functioning and service delivery. These stressors often co-occur and interact in different ways which are not yet fully understood. Here, we applied a population model representing a freshwater amphipod feeding on leaf litter in forested streams. We simulated impacts of hypothetical stressors, individually and in pairwise combinations that target the individuals' feeding, maintenance, growth and reproduction. Impacts were quantified by examining responses at three levels of biological organisation: individual-level body sizes and cumulative reproduction, population-level abundance and biomass and ecosystem-level leaf litter decomposition. Interactive effects of multiple stressors at the individual level were mostly antagonistic, that is, less negative than expected. Most population- and ecosystem-level responses to multiple stressors were stronger than expected from an additive model, that is, synergistic. Our results suggest that across levels of biological organisation responses to multiple stressors are rarely only additive. We suggest methods for efficiently quantifying impacts of multiple stressors at different levels of biological organisation. © 2018 John Wiley & Sons Ltd/CNRS.
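A null-model check of the kind implied here can be sketched with the independent-action (multiplicative) expectation for combined responses; the tolerance and example values are arbitrary:

```python
def classify_interaction(r1, r2, r_combined, tol=0.01):
    """r1, r2, r_combined: responses relative to control (1.0 = no effect)."""
    expected = r1 * r2                 # independent-action expectation
    if r_combined < expected - tol:
        return "synergistic"           # stronger impact than expected
    if r_combined > expected + tol:
        return "antagonistic"          # weaker impact than expected
    return "additive"

print(classify_interaction(0.8, 0.7, 0.40))  # synergistic
print(classify_interaction(0.8, 0.7, 0.65))  # antagonistic
```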
A novel multi-item joint replenishment problem considering multiple type discounts.
Cui, Ligang; Zhang, Yajun; Deng, Jie; Xu, Maozeng
2018-01-01
In business replenishment, discount offers for multiple items may either provide different discount schedules with a single discount type, or provide schedules with multiple discount types. This paper investigates the joint effects of multiple discount schemes on multi-item joint replenishment decisions. A joint replenishment problem (JRP) model considering three discount types (all-unit discount, incremental discount, and total volume discount) simultaneously is constructed to determine the basic cycle time and the joint replenishment frequencies of the items. To solve the proposed problem, a heuristic algorithm is developed to find the optimal solutions and the corresponding total cost of the JRP model. Numerical experiments are performed to test the algorithm, and the computational results for JRPs under different discount combinations show that the discount schemes differ considerably in how much they reduce replenishment cost.
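For orientation, the cost structure underlying a classic JRP looks like the sketch below: a basic cycle T, integer frequency multipliers k_i per item, a shared major ordering cost and item-specific minor costs. Discount schedules would add a unit-cost term, omitted here; all numbers are illustrative:

```python
def jrp_total_cost(T, k, D, h, S, s):
    """T: basic cycle time; k[i]: integer multiplier for item i;
    D, h, s: demand rate, holding cost, minor ordering cost per item;
    S: major ordering cost per replenishment occasion."""
    ordering = (S + sum(si / ki for si, ki in zip(s, k))) / T
    holding = (T / 2.0) * sum(ki * hi * di for ki, hi, di in zip(k, h, D))
    return ordering + holding

print(jrp_total_cost(T=0.2, k=[1, 2, 1], D=[1000, 400, 250],
                     h=[2.0, 1.5, 3.0], S=100.0, s=[10.0, 12.0, 8.0]))
```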
NASA Astrophysics Data System (ADS)
Catharine, D.; Strong, C.; Lin, J. C.; Cherkaev, E.; Mitchell, L.; Stephens, B. B.; Ehleringer, J. R.
2016-12-01
The rising level of atmospheric carbon dioxide (CO2), driven by anthropogenic emissions, is the leading cause of enhanced radiative forcing. Increasing societal interest in reducing anthropogenic greenhouse gas emissions calls for a computationally efficient method of evaluating anthropogenic CO2 source emissions, particularly if future mitigation actions are to be developed. A multiple-box atmospheric transport model was constructed in conjunction with a pre-existing fossil fuel CO2 emission inventory to estimate near-surface CO2 mole fractions and the associated anthropogenic CO2 emissions in the Salt Lake Valley (SLV) of northern Utah, a metropolitan area with a population of 1 million. A 15-year multi-site dataset of observed CO2 mole fractions is used in conjunction with the multiple-box model to develop an efficient method of constraining anthropogenic emissions through inverse modeling. Preliminary results of the multiple-box model CO2 inversion indicate that the pre-existing anthropogenic emission inventory may over-estimate CO2 emissions in the SLV. In addition, inversion results displaying a complex spatial and temporal distribution of urban emissions, including the effects of residential development and vehicular traffic, will be discussed.
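The inversion idea can be caricatured in a few lines: with a linear operator H mapping per-box emissions to observed CO2 enhancements, solve for scaling factors on the prior inventory by least squares. Everything below (H, box count, noise level) is synthetic, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_boxes = 500, 4
H = rng.uniform(0.5, 2.0, (n_obs, n_boxes))    # synthetic transport operator
prior = np.array([10.0, 6.0, 4.0, 2.0])        # prior inventory emissions per box
true = 0.8 * prior                             # "truth": inventory 25% too high
obs = H @ true + rng.normal(0, 1.0, n_obs)     # observed enhancements + noise

# solve obs ~ H @ (prior * scale) for per-box scaling factors
scale, *_ = np.linalg.lstsq(H * prior, obs, rcond=None)
print(scale)                                   # ~0.8, i.e. inventory over-estimates
```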
A New Variational Approach for Multiplicative Noise and Blur Removal
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad; Sun, HongGuang
2017-01-01
This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which has been proved able to reduce blocky effects by being aware of high-order smoothness) and the shearlet transform (which effectively preserves anisotropic image features such as sharp edges and curves). The new model takes advantage of both regularizers, since it is able to minimize staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model are also discussed. The resulting energy functional is solved using the alternating direction method of multipliers. Numerical experiments show that the proposed model achieves satisfactory restoration results, both visually and quantitatively, in handling blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh) reduction. A comparison with other recent methods in this field is provided as well. The proposed model can also be applied to restoring both single- and multi-channel images contaminated with multiplicative noise, and permits cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images are conducted to demonstrate the effectiveness of the proposed model. PMID:28141802
Sources of Variability in Physical Activity Among Inactive People with Multiple Sclerosis.
Uszynski, Marcin K; Herring, Matthew P; Casey, Blathin; Hayes, Sara; Gallagher, Stephen; Motl, Robert W; Coote, Susan
2018-04-01
Evidence supports that physical activity (PA) improves symptoms of multiple sclerosis (MS). Although application of principles from Social Cognitive Theory (SCT) may facilitate positive changes in PA behaviour among people with multiple sclerosis (pwMS), the constructs often explain limited variance in PA. This study investigated the extent to which MS symptoms, including fatigue, depression, and walking limitations, combined with the SCT constructs, explained more variance in PA than SCT constructs alone among pwMS. Baseline data, including objectively assessed PA, exercise self-efficacy, goal setting, outcome expectations, 6-min walk test, fatigue and depression, from 65 participants of the Step It Up randomized controlled trial completed in Ireland (2016), were included. Multiple regression models quantified the variance explained in PA and the independent associations of (1) SCT constructs, (2) symptoms and (3) SCT constructs and symptoms. Model 1 included exercise self-efficacy, exercise goal setting and multidimensional outcome expectations for exercise and explained ~14% of the variance in PA (R² = 0.144, p < 0.05). Model 2 included walking limitations, fatigue and depression and explained 20% of the variance in PA (R² = 0.196, p < 0.01). Model 3 combined models 1 and 2, and explained variance increased to ~29% (R² = 0.288, p < 0.01). In Model 3, exercise self-efficacy (β = 0.30, p < 0.05), walking limitations (β = 0.32, p < 0.01), fatigue (β = -0.41, p < 0.01) and depression (β = 0.34, p < 0.05) were significantly and independently associated with PA. Findings suggest that MS symptoms known to improve with PA (fatigue, depression and walking limitations), together with SCT constructs, explained more variance in PA than SCT constructs alone, providing support for targeting both SCT constructs and these symptoms in the multifactorial promotion of PA among pwMS.
Zhen, Xiaofei; Li, Jinping; Abdalla Osman, Yassir Idris; Feng, Rong; Zhang, Xuemin; Kang, Jian
2018-01-01
In order to utilize solar energy to meet the heating demands of a rural residential building during winter in the northwestern region of China, a hybrid heating system combining solar energy and coal was built. Multiple experiments to monitor its performance were conducted during the winters of 2014 and 2015. In this paper, we analyze the energy utilization efficiency of the system and describe a prototype model to determine the thermal efficiency of the coal stove in use. Multiple linear regression was adopted to capture the joint effect of multiple factors on the daily heat-collecting capacity of the solar water heater; the heat-loss coefficient of the storage tank was estimated as well. The prototype model shows that the average thermal efficiency of the stove is 38%, and that the energy input for the building is divided between coal and solar energy at 39.5% and 60.5%, respectively. Additionally, the allocation of the solar radiation projected onto the collecting area of the solar water heater was obtained: 49% was lost optically and 23% through heat dissipation, with only 28% utilized effectively.
SD-MSAEs: Promoter recognition in human genome based on deep feature extraction.
Xu, Wenxuan; Zhang, Li; Lu, Yaping
2016-06-01
The prediction and recognition of promoters in the human genome play an important role in DNA sequence analysis. Shannon entropy from information theory has multiple uses in detailed bioinformatics analysis. Relative entropy estimators based on statistical divergence (SD) are used to extract meaningful features that distinguish different regions of DNA sequences. In this paper, we choose context features and use a set of SD methods to select the most effective n-mers for distinguishing promoter regions from other DNA regions in the human genome. From the total possible combinations of n-mers, we obtain four sparse distributions based on promoter and non-promoter training samples. The informative n-mers are selected by optimizing the differentiating extents of these distributions. In particular, we combine the advantages of statistical divergence and multiple sparse auto-encoders (MSAEs) in deep learning to extract deep features for promoter recognition. We then apply multiple SVMs and a decision model to construct a human promoter recognition method called SD-MSAEs. The framework is flexible, in that it can freely integrate new feature extraction or classification models. Experimental results show that our method has high sensitivity and specificity. Copyright © 2016 Elsevier Inc. All rights reserved.
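The statistical-divergence step can be illustrated with the Kullback-Leibler divergence scoring one candidate n-mer's distribution in promoters against non-promoters; the toy distributions below are invented:

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """Relative entropy between two discrete distributions p and q."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# toy positional frequency distributions of one n-mer
promoter_freq     = [0.40, 0.30, 0.20, 0.10]
non_promoter_freq = [0.25, 0.25, 0.25, 0.25]
print(kl_divergence(promoter_freq, non_promoter_freq))  # larger = more discriminative
```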
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Kurth, R. E.; Ho, H.
1991-01-01
The objective of this program is to develop generic load models, with multiple levels of progressive sophistication, to simulate the composite (combined) load spectra induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, liquid oxygen posts and system ducting. The first approach will consist of using state-of-the-art probabilistic methods to describe the individual loading conditions and combinations of these loading conditions to synthesize the composite load spectra simulation. The second approach will consist of developing coupled models for composite load spectra simulation which combine the deterministic models for dynamic, acoustic, high-pressure, and high-rotational-speed load simulation using statistically varying coefficients. These coefficients will then be determined using advanced probabilistic simulation methods, with and without strategically selected experimental data.
Limiting the public cost of stationary battery deployment by combining applications
NASA Astrophysics Data System (ADS)
Stephan, A.; Battke, B.; Beuse, M. D.; Clausdeinken, J. H.; Schmidt, T. S.
2016-07-01
Batteries could be central to low-carbon energy systems with high shares of intermittent renewable energy sources. However, the investment attractiveness of batteries is still perceived as low, eliciting calls for policy to support deployment. Here we show how the cost of battery deployment can potentially be minimized by introducing an aspect that has been largely overlooked in policy debates and underlying analyses: the fact that a single battery can serve multiple applications. Batteries thereby can not only tap into different value streams, but also combine different risk exposures. To address this gap, we develop a techno-economic model and apply it to the case of lithium-ion batteries serving multiple stationary applications in Germany. Our results show that batteries could be attractive for investors even now if non-market barriers impeding the combination of applications were removed. The current policy debate should therefore be refocused so as to encompass the removal of such barriers.
Mapping Land and Water Surface Topography with instantaneous Structure from Motion
NASA Astrophysics Data System (ADS)
Dietrich, J.; Fonstad, M. A.
2012-12-01
Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slowly moving objects; however, objects in motion are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel, and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene, it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include any setting where researchers could benefit from temporal surface modeling, such as mass movements, lava flows and dam-removal monitoring. The camera system used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers. The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows for manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.
Advanced wireless mobile collaborative sensing network for tactical and strategic missions
NASA Astrophysics Data System (ADS)
Xu, Hao
2017-05-01
In this paper, an advanced wireless mobile collaborative sensing network is developed. By properly combining wireless sensor networks, emerging mobile robots, and multi-antenna sensing/communication techniques, we demonstrate the superiority of the developed sensing network. Concretely, heterogeneous mobile robots, including unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), are equipped with multi-modal sensors and wireless transceiver antennas. Through real-time collaborative formation control, multiple mobile robots can form the formation that provides the most accurate sensing results. Formations of multiple mobile robots can also construct a multiple-input multiple-output (MIMO) communication system that provides a reliable, high-performance communication network.
Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.
2013-01-01
Isothermal Titration Calorimetry (ITC) is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models (for example, three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding) need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
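As a much-simplified analogue of the paper's non-linear regression (their MATLAB program handles up to four overlapping equilibria), the sketch below fits K and ΔH of a single-site model to synthetic cumulative heats; the cell volume, concentrations and noise level are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def frac_bound(L_tot, K, M_tot=1e-5):
    """Fraction of macromolecule bound for 1:1 binding, from the mass-balance
    quadratic K * M_free * L_free = ML."""
    b = M_tot + L_tot + 1.0 / K
    ML = (b - np.sqrt(b ** 2 - 4 * M_tot * L_tot)) / 2.0
    return ML / M_tot

def heat(L_tot, K, dH, M_tot=1e-5, V0=1.4e-3):
    """Cumulative heat (J) as a function of total ligand concentration."""
    return dH * V0 * M_tot * frac_bound(L_tot, K, M_tot)

L = np.linspace(1e-6, 5e-5, 25)                 # total ligand after each injection (M)
rng = np.random.default_rng(2)
q_obs = heat(L, K=1e6, dH=-40e3) * (1 + rng.normal(0, 0.02, L.size))

(K_fit, dH_fit), _ = curve_fit(heat, L, q_obs, p0=[5e5, -30e3])
print(K_fit, dH_fit)                            # ~1e6 and ~-4e4
```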
Sexton, Nicholas J; Cooper, Richard P
2017-05-01
Task inhibition (also known as backward inhibition) is an hypothesised form of cognitive inhibition evident in multi-task situations, with the role of facilitating switching between multiple, competing tasks. This article presents a novel cognitive computational model of a backward inhibition mechanism. By combining aspects of previous cognitive models in task switching and conflict monitoring, the model instantiates the theoretical proposal that backward inhibition is the direct result of conflict between multiple task representations. In a first simulation, we demonstrate that the model produces two effects widely observed in the empirical literature, specifically, reaction time costs for both (n-1) task switches and n-2 task repeats. Through a systematic search of parameter space, we demonstrate that these effects are a general property of the model's theoretical content, and not specific parameter settings. We further demonstrate that the model captures previously reported empirical effects of inter-trial interval on n-2 switch costs. A final simulation extends the paradigm of switching between tasks of asymmetric difficulty to three tasks, and generates novel predictions for n-2 repetition costs. Specifically, the model predicts that n-2 repetition costs associated with hard-easy-hard alternations are greater than for easy-hard-easy alternations. Finally, we report two behavioural experiments testing this hypothesis, with results consistent with the model predictions. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Building "e-rater"® Scoring Models Using Machine Learning Methods. Research Report. ETS RR-16-04
ERIC Educational Resources Information Center
Chen, Jing; Fife, James H.; Bejar, Isaac I.; Rupp, André A.
2016-01-01
The "e-rater"® automated scoring engine used at Educational Testing Service (ETS) scores the writing quality of essays. In the current practice, e-rater scores are generated via a multiple linear regression (MLR) model as a linear combination of various features evaluated for each essay and human scores as the outcome variable. This…
ERIC Educational Resources Information Center
Hammond, Diana L.; Whatley, Abigail D.; Ayres, Kevin M.; Gast, David L.
2010-01-01
The primary purpose of this study was to examine the effects of video modeling delivered via computer on accurate and independent use of an iPod by three participants with moderate intellectual disabilities. In the context of combined multiple probes across participants and replicated across tasks, three female middle school students learned to…
Müller, Jörg M; Furniss, Tilman
2013-11-30
The often-reported low agreement between multiple informants about child psychopathology has led to various suggestions about how to address discrepant ratings. Among the factors discussed that may lower agreement are informant credibility, reliability, and psychopathology; the last is the focus of this paper. We tested three different models, namely the accuracy model, the distortion model, and an integrated, so-called combined model, which conceptualize parental ratings of child psychopathology. The data comprise ratings of child psychopathology from multiple informants (mother, therapist and kindergarten teacher) and ratings of maternal psychopathology. The children were patients in a preschool psychiatry unit (N=247). The results from structural equation modeling show that maternal ratings of child psychopathology were biased by maternal psychopathology (distortion model). On this statistical basis, we suggest a method to adjust biased maternal ratings. We illustrate the maternal bias by comparing maternal ratings to expert ratings (combined kindergarten teacher and therapist ratings) and show that the correction equation increases the agreement between maternal and expert ratings. We conclude that this approach may help reduce misclassification of preschool children as 'clinical' on the basis of biased maternal ratings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
A Murine Model to Study Epilepsy and SUDEP Induced by Malaria Infection
Ssentongo, Paddy; Robuccio, Anna E.; Thuku, Godfrey; Sim, Derek G.; Nabi, Ali; Bahari, Fatemeh; Shanmugasundaram, Balaji; Billard, Myles W.; Geronimo, Andrew; Short, Kurt W.; Drew, Patrick J.; Baccon, Jennifer; Weinstein, Steven L.; Gilliam, Frank G.; Stoute, José A.; Chinchilli, Vernon M.; Read, Andrew F.; Gluckman, Bruce J.; Schiff, Steven J.
2017-01-01
One of the largest single sources of epilepsy in the world is produced as a neurological sequela in survivors of cerebral malaria. Nevertheless, the pathophysiological mechanisms of such epileptogenesis remain unknown and no adjunctive therapy during cerebral malaria has been shown to reduce the rate of subsequent epilepsy. There is no existing animal model of postmalarial epilepsy. In this technical report we demonstrate the first such animal models. These models were created from multiple mouse and parasite strain combinations, so that the epilepsy observed retained universality with respect to genetic background. We also discovered spontaneous sudden unexpected death in epilepsy (SUDEP) in two of our strain combinations. These models offer a platform to enable new preclinical research into mechanisms and prevention of epilepsy and SUDEP. PMID:28272506
Smith, Jodie; Rowland, James
2007-01-01
Satellite images from multiple sensors and dates were analyzed to measure the extent of flooding caused by Hurricane Katrina in the New Orleans, La., area. The flood polygons were combined with a high-resolution digital elevation model to estimate water depths and volumes in designated areas. The multiple satellite acquisitions enabled monitoring of the floodwater volume and extent through time.
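The depth-and-volume step reduces to raster arithmetic: subtract DEM elevations from an estimated water-surface elevation within the flood extent and multiply by cell area. The tiny raster below is synthetic:

```python
import numpy as np

dem = np.array([[1.0, 1.2, 2.5],
                [0.8, 1.1, 2.0],
                [0.5, 0.9, 1.8]])      # ground elevation (m), 30 m cells
water_surface = 1.5                    # estimated flood stage (m)
flooded = dem < water_surface          # flood-extent mask (from imagery)

depth = np.where(flooded, water_surface - dem, 0.0)
volume = depth.sum() * 30 * 30         # m^3, with 30 m x 30 m cells
print(depth, volume)
```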
Swanson, H L
1987-01-01
Three theoretical models (additive, independence, maximum rule) that characterize and predict the influence of independent hemispheric resources on learning-disabled and skilled readers' simultaneous processing were tested. Predictions related to word recall performance during simultaneous encoding conditions (dichotic listening task) were made from unilateral (dichotic listening task) presentations. The maximum rule model best characterized both ability groups in that simultaneous encoding produced no better recall than unilateral presentations. While the results support the hypothesis that both ability groups use similar processes in the combining of hemispheric resources (i.e., weak/dominant processing), ability group differences do occur in the coordination of such resources.
Jia, Dan; Koonce, Nathan A.; Halakatti, Roopa; Li, Xin; Yaccoby, Shmuel; Swain, Frances L.; Suva, Larry J.; Hennings, Leah; Berridge, Marc S.; Apana, Scott M.; Mayo, Kevin; Corry, Peter M.; Griffin, Robert J.
2011-01-01
The effects of ionizing radiation, with or without the antiangiogenic agent anginex (Ax), on multiple myeloma growth were tested in a SCID-rab mouse model. Mice carrying human multiple myeloma cell-containing pre-implanted bone grafts were treated weekly with various regimens for 8 weeks. Rapid multiple myeloma growth, assessed by bioluminescence intensity (IVIS), human lambda Ig light chain level in serum (ELISA), and the volume of bone grafts (caliper), was observed in untreated mice. Tumor burden in mice receiving combined therapy was reduced to 59% (by caliper), 43% (by ELISA), and 2% (by IVIS) of baseline values after 8 weeks of treatment. Ax or radiation alone slowed but did not stop tumor growth. Four weeks after the withdrawal of the treatments, tumor burden remained minimal in mice given Ax + radiation but increased noticeably in the other three groups. Multiple myeloma suppression by Ax + radiation was accompanied by a marked decrease in the number and activity of osteoclasts in bone grafts assessed by histology. Bone graft integrity was preserved by Ax + radiation but was lost in the other three groups, as assessed by microCT imaging and radiography. These results suggest that radiotherapy, when primed by anti-angiogenic agents, may be a potent therapy for focal multiple myeloma. PMID:20518660
Nonlocal variational model and filter algorithm to remove multiplicative noise
NASA Astrophysics Data System (ADS)
Chen, Dai-Qiang; Zhang, Hui; Cheng, Li-Zhi
2010-07-01
The nonlocal (NL) means filter proposed by Buades, Coll, and Morel (SIAM Multiscale Model. Simul. 4(2), 490-530, 2005), which makes full use of the redundant information in images, has been shown to be very efficient for denoising images corrupted by additive Gaussian noise. On the basis of the NL method, and striving to minimize the conditional mean-square error, we design a NL means filter to remove multiplicative noise; combining the NL filter with a regularization method, we propose a NL total variation (TV) model and present a fast iterative algorithm for it. Experiments demonstrate that our algorithm outperforms the TV method; it is superior in preserving small structures and textures and achieves an improvement in peak signal-to-noise ratio.
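A hedged baseline for this setting, not the authors' filter: gamma multiplicative noise, a log transform that makes it approximately additive, and off-the-shelf NL-means, scored by PSNR. The noise level and filter parameters are arbitrary:

```python
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means
from skimage.metrics import peak_signal_noise_ratio

clean = img_as_float(data.camera())
rng = np.random.default_rng(3)
noise = rng.gamma(shape=10.0, scale=0.1, size=clean.shape)   # mean-1 gamma noise
noisy = np.clip(clean * noise, 1e-3, 1.0)                    # multiplicative model

log_noisy = np.log(noisy)              # multiplicative -> approximately additive
denoised = np.exp(denoise_nl_means(log_noisy, h=0.08, patch_size=5, patch_distance=6))

print(peak_signal_noise_ratio(clean, noisy, data_range=1.0),
      peak_signal_noise_ratio(clean, np.clip(denoised, 0, 1), data_range=1.0))
```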
System, method and apparatus for generating phrases from a database
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
Phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query includes a term, a sequence of terms, multiple individual terms, multiple sequences of terms, or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
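A toy stand-in for the idea (far simpler than the patented relational model): build a table of which terms follow which in the corpus, then extend a query term with its most frequent successors. The corpus is invented:

```python
from collections import Counter, defaultdict

corpus = ("engine failure caused engine shutdown . crew reported engine "
          "failure during climb . engine shutdown was uneventful").split()

follows = defaultdict(Counter)          # term -> counts of following terms
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate_phrase(term, length=3):
    phrase = [term]
    for _ in range(length - 1):
        nxt = follows[phrase[-1]].most_common(1)
        if not nxt:
            break
        phrase.append(nxt[0][0])
    return " ".join(phrase)

print(generate_phrase("engine"))        # e.g. "engine failure caused"
```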
Modeling the Development of Audiovisual Cue Integration in Speech Perception
Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.
2017-01-01
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
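A minimal version of the distributional-learning setup, assuming two synthetic cue dimensions (an auditory cue such as VOT and one visual cue) and two categories; the means and covariances are invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
cat_a = rng.multivariate_normal([20.0, 0.3], [[9, 0], [0, 0.01]], 300)  # e.g. /b/
cat_b = rng.multivariate_normal([60.0, 0.7], [[9, 0], [0, 0.01]], 300)  # e.g. /p/
tokens = np.vstack([cat_a, cat_b])      # columns: auditory cue, visual cue

gmm = GaussianMixture(n_components=2, random_state=0).fit(tokens)
print(gmm.means_)                       # recovered audiovisual categories
print(gmm.predict([[40.0, 0.65]]))      # classify an ambiguous (mismatched) token
```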
Protoplanetary Disks in Multiple Star Systems
NASA Astrophysics Data System (ADS)
Harris, Robert J.
Most stars are born in multiple systems, so the presence of a stellar companion may commonly influence planet formation. Theory indicates that companions may inhibit planet formation in two ways. First, dynamical interactions can tidally truncate circumstellar disks. Truncation reduces disk lifetimes and masses, leaving less time and material for planet formation. Second, these interactions might reduce grain-coagulation efficiency, slowing planet formation in its earliest stages. I present three observational studies investigating these issues. First is a spatially resolved Submillimeter Array (SMA) census of disks in young multiple systems in the Taurus-Auriga star-forming region to study their bulk properties. With this survey, I confirmed that disk lifetimes are preferentially decreased in multiples: single stars have detectable millimeter-wave continuum emission twice as often as components of multiples. I also verified that millimeter luminosity (proportional to disk mass) declines with decreasing stellar separation. Furthermore, by measuring resolved-disk radii, I quantitatively tested tidal-truncation theories: results were mixed, with a few disks much larger than expected. I then switch focus to the grain-growth properties of disks in multiple star systems. By combining SMA, Combined Array for Research in Millimeter Astronomy (CARMA), and Jansky Very Large Array (VLA) observations of the circumbinary disk in the UZ Tau quadruple system, I detected radial variations in the grain-size distribution: large particles preferentially inhabit the inner disk. Detections of these theoretically predicted variations have been rare. I related this to models of grain coagulation in gas disks and find that our results are consistent with growth limited by radial drift. I then present a study of grain growth in the disks of the AS 205 and UX Tau multiple systems. By combining SMA, Atacama Large Millimeter/submillimeter Array (ALMA), and VLA observations, I detected radial variations of the grain-size distribution in the AS 205 A disk, but not in the UX Tau A disk. I find that some combination of radial drift and fragmentation limits growth in the AS 205 A disk. In the final chapter, I summarize my findings that, while multiplicity clearly influences bulk disk properties, it does not obviously inhibit grain growth. Other investigations are suggested.
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
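To see why combining sources tightens the estimate, an inverse-variance pooling of the two single-source results reported above nearly reproduces the combined model's precision; the actual analysis is a joint spatial capture-recapture model, not this simple pooling:

```python
photographic = (12.02, 3.02)   # mean, SD (tigers/100 km^2)
fecal_dna    = (6.65, 2.37)

w1, w2 = 1 / photographic[1] ** 2, 1 / fecal_dna[1] ** 2
mean = (w1 * photographic[0] + w2 * fecal_dna[0]) / (w1 + w2)
sd = (w1 + w2) ** -0.5
print(round(mean, 2), round(sd, 2))    # ~8.7 +/- 1.86, near the reported 8.5 +/- 1.95
```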
Linear and nonlinear transparencies in binocular vision.
Langley, K; Fleet, D J; Hibbard, P B
1998-01-01
When the product of a vertical square-wave grating (contrast envelope) and a horizontal sinusoidal grating (carrier) is viewed binocularly with different disparity cues for the two components, the components can be perceived transparently at different depths. We found, however, that the transparency was asymmetric; it only occurred when the envelope was perceived to be the overlaying surface. When the same two signals were added, the percept of transparency was symmetrical; either signal could be seen in front of or behind the other at different depths. Differences between these multiplicative and additive signal combinations were examined in two experiments. In one, we measured disparity thresholds for transparency as a function of the spatial frequency of the envelope. In the other, we measured disparity discrimination thresholds. In both experiments the thresholds for the multiplicative condition, unlike the additive condition, showed distinct minima at low envelope frequencies. The different sensitivity curves found for multiplicative and additive signal combinations suggest that different processes mediated the disparity signal. The data are consistent with a two-channel model of binocular matching, with multiple depth cues represented at single retinal locations. PMID:9802240
NASA Astrophysics Data System (ADS)
Zhang, Jinhua; Fang, Bin; Hong, Jun; Wan, Shaoke; Zhu, Yongsheng
2017-12-01
Combined angular contact ball bearings are widely used in automation, aerospace and machine tools, but little research on combined angular contact ball bearings has been reported. The preload and stiffness of combined bearings are mutually influenced, rather than being simply the superposition of multiple single bearings; therefore, the characteristics of combined bearings are calculated by coupling the load and deformation analyses of the single bearings. In this paper, based on the Jones quasi-static model and an analytical stiffness model, a new iterative algorithm and model are proposed for calculating the preload and stiffness of combined bearings, with dynamic effects including centrifugal force and gyroscopic moment taken into account. The new method has general applicability: the preload factors of combined bearings are calculated according to different design preloads, the static and dynamic stiffness for various arrangements of combined bearings are comparatively studied, and the influences of design preload magnitude, axial load and rotating speed are discussed in detail. The variation of the dynamic contact angles of combined bearings with rotating speed is also discussed. The results show that bearing arrangement mode, rotating speed and design preload magnitude have a significant influence on the preload and stiffness of combined bearings. The proposed formulation provides a useful tool for dynamic analysis of complex bearing-rotor systems.
Combining Multiple Rupture Models in Real-Time for Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Minson, S. E.; Wu, S.; Beck, J. L.; Heaton, T. H.
2015-12-01
The ShakeAlert earthquake early warning system for the west coast of the United States is designed to combine information from multiple independent earthquake analysis algorithms in order to provide the public with robust predictions of shaking intensity at each user's location before they are affected by strong shaking. The current contributing analyses come from algorithms that determine the origin time, epicenter, and magnitude of an earthquake (On-site, ElarmS, and Virtual Seismologist). A second generation of algorithms will provide seismic line source information (FinDer), as well as geodetically-constrained slip models (BEFORES, GPSlip, G-larmS, G-FAST). These new algorithms will provide more information about the spatial extent of the earthquake rupture and thus improve the quality of the resulting shaking forecasts. Each of the contributing algorithms exploits different features of the observed seismic and geodetic data, and thus each algorithm may perform differently for different data availability and earthquake source characteristics. Thus the ShakeAlert system requires a central mediator, called the Central Decision Module (CDM). The CDM acts to combine disparate earthquake source information into one unified shaking forecast. Here we will present a new design for the CDM that uses a Bayesian framework to combine earthquake reports from multiple analysis algorithms and compares them to observed shaking information in order to both assess the relative plausibility of each earthquake report and to create an improved unified shaking forecast complete with appropriate uncertainties. We will describe how these probabilistic shaking forecasts can be used to provide each user with a personalized decision-making tool that can help decide whether or not to take a protective action (such as opening fire house doors or stopping trains) based on that user's distance to the earthquake, vulnerability to shaking, false alarm tolerance, and time required to act.
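A toy version of the Bayesian combination step, assuming each algorithm's report is summarized as a Gaussian over magnitude; the real CDM weighs far more evidence, and all numbers here are made up:

```python
import numpy as np

reports = [(6.1, 0.30), (6.4, 0.25), (5.8, 0.50)]  # (magnitude, sigma) per algorithm

# product of Gaussians: posterior precision is the sum of report precisions
precisions = np.array([1 / s ** 2 for _, s in reports])
means = np.array([m for m, _ in reports])
post_mean = np.sum(precisions * means) / np.sum(precisions)
post_sigma = np.sum(precisions) ** -0.5
print(post_mean, post_sigma)           # unified estimate with tightened uncertainty
```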
Models of Sector Flows Under Local, Regional and Airport Weather Constraints
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak
2017-01-01
Recently, the ATM community has made important progress in collaborative trajectory management through the introduction of a new FAA traffic management initiative called a Collaborative Trajectory Options Program (CTOP). The FAA can use CTOPs to manage air traffic under multiple constraints (manifested as flow constrained areas, or FCAs) in the system, and a CTOP allows flight operators to indicate their preferences for routing and delay options. CTOPs also permit better management of the overall trajectory of flights by considering both routing and departure delay options simultaneously. However, adoption of CTOPs in the airspace has been hampered by many factors, including challenges in how to identify constrained areas and how to set rates for the FCAs. Decision support tools (DSTs) providing assistance would be particularly helpful for effective use of CTOPs. Such DSTs would need models of demand and capacity in the presence of multiple constraints. This study examines different approaches to using historical data to create and validate models of maximum flows in sectors and other airspace regions in the presence of multiple constraints. A challenge in creating an empirical model of flows under multiple constraints is the lack of sufficient historical data capturing diverse situations that involve combinations of multiple constraints, especially those with severe weather. The approach taken here to deal with this is two-fold. First, we create a generalized sector model encompassing multiple sectors, rather than individual sectors, in order to increase the amount of data used for creating the model by an order of magnitude. Second, we decompose the problem so that the amount of data needed is reduced. This involves creating a baseline demand model plus a separate weather-constrained flow reduction model, and then composing these into a single integrated model. The nominal demand model is a flow model (gdem) in the presence of clear local weather; it defines the flow as a function of weather constraints in neighboring regions, airport constraints, and weather in locations that can cause re-routes to the location of interest. The weather-constrained flow reduction model (fwx-red) is a model of the reduction in baseline counts as a function of local weather. Because the number of independent variables associated with each of the two decomposed models is smaller than for a single model, the amount of data needed is reduced. Finally, a composite model that combines these two can be represented as fwx-red(gdem(e), l), where e represents non-local constraints and l represents local weather; a sketch of this composition follows below. The approaches studied for developing these models fall into three categories: (1) point estimation models, (2) empirical models, and (3) theoretical models. Errors in the predictions of these different types of models have been estimated. In situations where data are abundant, point estimation models tend to be very accurate. In contrast, empirical models do better than theoretical models when some data are available. The biggest benefit of theoretical models is their general applicability to a wider range of situations once their degree of accuracy has been established.
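A minimal sketch of the decomposition just described, with illustrative stand-in functional forms (the real gdem and fwx-red are fitted from historical data):

```python
def g_dem(e_nonlocal):
    """Baseline sector flow under clear local weather, given non-local
    constraint severity in [0, 1]; the linear form is illustrative."""
    unconstrained_flow = 40.0            # aircraft per 15 min (invented)
    return unconstrained_flow * (1.0 - 0.5 * e_nonlocal)

def f_wx_red(baseline_flow, local_weather):
    """Reduce the baseline flow by local weather coverage in [0, 1]."""
    return baseline_flow * (1.0 - local_weather) ** 1.5

def composite_flow(e_nonlocal, local_weather):
    """Composite model fwx-red(gdem(e), l)."""
    return f_wx_red(g_dem(e_nonlocal), local_weather)

print(composite_flow(e_nonlocal=0.2, local_weather=0.3))
```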
Huang, Yu; Griffin, Michael J
2014-01-01
This study investigated the prediction of the discomfort caused by simultaneous noise and vibration from the discomfort caused by noise and the discomfort caused by vibration when they are presented separately. A total of 24 subjects used absolute magnitude estimation to report their discomfort caused by seven levels of noise (70-88 dBA SEL), 7 magnitudes of vibration (0.146-2.318 ms(- 1.75)) and all 49 possible combinations of these noise and vibration stimuli. Vibration did not significantly influence judgements of noise discomfort, but noise reduced vibration discomfort by an amount that increased with increasing noise level, consistent with a 'masking effect' of noise on judgements of vibration discomfort. A multiple linear regression model or a root-sums-of-squares model predicted the discomfort caused by combined noise and vibration, but the root-sums-of-squares model is more convenient and provided a more accurate prediction of the discomfort produced by combined noise and vibration.
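The two candidate predictors compared in the study have the shapes sketched below; the regression coefficients are invented, not the paper's fitted values:

```python
def rss_model(psi_noise, psi_vibration):
    """Root-sums-of-squares prediction of combined discomfort from
    single-source magnitude estimates."""
    return (psi_noise ** 2 + psi_vibration ** 2) ** 0.5

def mlr_model(psi_noise, psi_vibration, a=0.7, b=0.6, c=5.0):
    """Multiple linear regression form with illustrative coefficients."""
    return a * psi_noise + b * psi_vibration + c

print(rss_model(60.0, 45.0), mlr_model(60.0, 45.0))
```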
NASA Astrophysics Data System (ADS)
Liang, Dong; Song, Yimin; Sun, Tao; Jin, Xueying
2017-09-01
A systematic dynamic modeling methodology is presented to develop the rigid-flexible coupling dynamic model (RFDM) of an emerging flexible parallel manipulator with multiple actuation modes. By virtue of the assumed mode method, the general dynamic model of an arbitrary flexible body with any number of lumped parameters is derived in an explicit closed form, which has a modular characteristic. The complete dynamic model of the system is then formulated based on flexible multi-body dynamics (FMD) theory and the augmented Lagrangian multipliers method. An approach combining the Udwadia-Kalaba formulation with the hybrid TR-BDF2 numerical algorithm is proposed to solve the nonlinear RFDM. Two simulation cases are performed to investigate the dynamic performance of the manipulator under different actuation modes. The results indicate that the redundant actuation modes can effectively attenuate vibration and guarantee higher dynamic performance compared to the traditional non-redundant actuation modes. Finally, a virtual prototype model is developed to demonstrate the validity of the presented RFDM. The systematic methodology proposed in this study can be conveniently extended to the dynamic modeling and controller design of other planar flexible parallel manipulators, especially the emerging ones with multiple actuation modes.
Taravat, Alireza; Oppelt, Natascha
2014-01-01
Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), whose recording is independent of clouds and weather, can be used effectively for the detection and classification of oil spills. Dark formation detection is the first and most critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with those of a model combining a non-adaptive WMM and pulse-coupled neural networks. The presented approach removes the need to manually set the WMM filter parameters by developing an adaptive WMM model, which is a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images containing dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective, whereas the non-adaptive WMM and pulse-coupled neural network (PCNN) model yields poor accuracy. PMID:25474376
A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises.
Marquis-Favre, Catherine; Morel, Julien
2015-07-21
Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment where participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors and potential interactions between the combined noise sources. The second one was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with high differences in sound pressure level. Thus, these results reinforced the need to focus on perceptual models and to improve the prediction of partial annoyances.
Approaches to modelling hydrology and ecosystem interactions
NASA Astrophysics Data System (ADS)
Silberstein, Richard P.
2014-05-01
As the pressures of industry, agriculture and mining on groundwater resources increase, there is a growing unmet need to capture these multiple, direct and indirect stresses in a formal framework that will enable better assessment of impact scenarios. While there are many catchment hydrological models and there are some models that represent ecological states and change (e.g. FLAMES, Liedloff and Cook, 2007), these have not been linked in any deterministic or substantive way. Without such coupled eco-hydrological models, quantitative assessments of impacts from water use intensification on water-dependent ecosystems under changing climate are difficult, if not impossible. The concept would include facility for direct and indirect water-related stresses that may develop around mining and well operations, climate stresses, such as rainfall and temperature, biological stresses, such as diseases and invasive species, and competition such as encroachment from other competing land uses. An indirect water impact could be, for example, a change in groundwater conditions that affects the streamflow regime, and hence aquatic ecosystems. This paper reviews previous work examining models combining ecology and hydrology with a view to developing a conceptual framework linking a biophysically defensible model that combines ecosystem function with hydrology. The objective is to develop a model capable of representing the cumulative impact of multiple stresses on water resources and associated ecosystem function.
NASA Astrophysics Data System (ADS)
Liu, Ding; Huang, Weichao; Zhang, Ni
2017-07-01
A two-dimensional axisymmetric swirling model based on the lattice Boltzmann method (LBM) in a pseudo-Cartesian coordinate system is proposed to simulate Czochralski (Cz) crystal growth in this paper. Specifically, the multiple-relaxation-time LBM (MRT-LBM) combined with the finite difference method (FDM) is used to analyze the melt convection and heat transfer during Cz crystal growth. An incompressible axisymmetric swirling MRT-LB D2Q9 model is applied to solve for the axial and radial velocities by inserting thermal buoyancy and rotational inertial forces into the two-dimensional lattice Boltzmann equation. In addition, the melt temperature and the azimuthal velocity are solved by MRT-LB D2Q5 models, and the crystal temperature is solved by FDM. Comparisons of stream-function values across methods demonstrate that our hybrid model simulates the fluid-thermal coupling in the axisymmetric swirling model correctly and effectively. Furthermore, numerical simulations of melt convection and heat transfer are conducted at high Grashof (Gr) numbers, within the range of 10^5 to 10^7, and different high Reynolds (Re) numbers. The results show that our hybrid model can obtain accurate solutions of complex crystal-growth models and analyze the fluid-thermal coupling effectively under the combined action of natural and forced convection.
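A hedged sketch of a single MRT collision-streaming step for a D2Q5 passive-scalar (temperature) lattice, the building block named above; the moment basis shown is one common choice from the MRT literature, and the paper's buoyancy, rotation and axisymmetric source terms are omitted:

```python
import numpy as np

# Minimal D2Q5 MRT lattice-Boltzmann step for a passive scalar (e.g., melt
# temperature) advected by a prescribed velocity field -- a generic sketch,
# not the paper's full axisymmetric swirling model.
e = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1]])   # D2Q5 velocities
w = np.array([1 / 3, 1 / 6, 1 / 6, 1 / 6, 1 / 6])          # lattice weights
cs2 = 1.0 / 3.0                                             # lattice sound speed^2
M = np.array([[1, 1, 1, 1, 1],                              # one common D2Q5 moment basis
              [0, 1, 0, -1, 0],
              [0, 0, 1, 0, -1],
              [-4, 1, 1, 1, 1],
              [0, 1, -1, 1, -1]], float)
Minv = np.linalg.inv(M)
S = np.diag([1.0, 1.2, 1.2, 1.5, 1.5])  # relaxation rates; the first-moment rates set diffusivity

def mrt_step(f, ux, uy):
    T = f.sum(axis=0)
    feq = np.array([w[i] * T * (1 + (e[i, 0] * ux + e[i, 1] * uy) / cs2) for i in range(5)])
    m = np.einsum('ij,jxy->ixy', M, f)                      # collide in moment space
    meq = np.einsum('ij,jxy->ixy', M, feq)
    f = np.einsum('ij,jxy->ixy', Minv, m - np.einsum('ij,jxy->ixy', S, m - meq))
    for i in range(5):                                      # streaming on a periodic grid
        f[i] = np.roll(f[i], shift=(e[i, 0], e[i, 1]), axis=(0, 1))
    return f

# Usage: initialize at equilibrium for a Gaussian temperature blob, zero flow
nx = ny = 64
T0 = np.exp(-((np.arange(nx)[:, None] - 32) ** 2 + (np.arange(ny)[None, :] - 32) ** 2) / 50.0)
ux = uy = np.zeros((nx, ny))
f = np.array([w[i] * T0 for i in range(5)])
for _ in range(100):
    f = mrt_step(f, ux, uy)
```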
Punnoose, Elizabeth A; Leverson, Joel D; Peale, Franklin; Boghaert, Erwin R; Belmont, Lisa D; Tan, Nguyen; Young, Amy; Mitten, Michael; Ingalla, Ellen; Darbonne, Walter C; Oleksijew, Anatol; Tapang, Paul; Yue, Peng; Oeh, Jason; Lee, Leslie; Maiga, Sophie; Fairbrother, Wayne J; Amiot, Martine; Souers, Andrew J; Sampath, Deepak
2016-05-01
BCL-2 family proteins dictate survival of human multiple myeloma cells, making them attractive drug targets. Indeed, multiple myeloma cells are sensitive to antagonists that selectively target prosurvival proteins such as BCL-2/BCL-XL (ABT-737 and ABT-263/navitoclax) or BCL-2 only (ABT-199/GDC-0199/venetoclax). Resistance to these three drugs is mediated by expression of MCL-1. However, given the selectivity profile of venetoclax, it is unclear whether coexpression of BCL-XL also affects antitumor responses to venetoclax in multiple myeloma. In multiple myeloma cell lines (n = 21), BCL-2 is expressed but sensitivity to venetoclax correlated with high BCL-2 and low BCL-XL or MCL-1 expression. Multiple myeloma cells that coexpress BCL-2 and BCL-XL were resistant to venetoclax but sensitive to a BCL-XL-selective inhibitor (A-1155463). Multiple myeloma xenograft models that coexpressed BCL-XL or MCL-1 with BCL-2 were also resistant to venetoclax. Resistance to venetoclax was mitigated by cotreatment with bortezomib in xenografts that coexpressed BCL-2 and MCL-1 due to upregulation of NOXA, a proapoptotic factor that neutralizes MCL-1. In contrast, xenografts that expressed BCL-XL, MCL-1, and BCL-2 were more sensitive to the combination of bortezomib with a BCL-XL selective inhibitor (A-1331852) but not with venetoclax cotreatment when compared with monotherapies. IHC of multiple myeloma patient bone marrow biopsies and aspirates (n = 95) revealed high levels of BCL-2 and BCL-XL in 62% and 43% of evaluable samples, respectively, while 34% were characterized as BCL-2(High)/BCL-XL(Low). In addition to MCL-1, our data suggest that BCL-XL may also be a potential resistance factor to venetoclax monotherapy and in combination with bortezomib. Mol Cancer Ther; 15(5); 1132-44. ©2016 AACR. ©2016 American Association for Cancer Research.
Software Framework for Advanced Power Plant Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
John Widmann; Sorin Munteanu; Aseem Jain
2010-08-01
This report summarizes the work accomplished during the Phase II development effort of the Advanced Process Engineering Co-Simulator (APECS). The objective of the project is to develop the tools to efficiently combine high-fidelity computational fluid dynamics (CFD) models with process modeling software. During the course of the project, a robust integration controller was developed that can be used in any CAPE-OPEN compliant process modeling environment. The controller mediates the exchange of information between the process modeling software and the CFD software. Several approaches to reducing the time disparity between CFD simulations and process modeling have been investigated and implemented. These include enabling the CFD models to be run on a remote cluster and enabling multiple CFD models to be run simultaneously. Furthermore, computationally fast reduced-order models (ROMs) have been developed that can be 'trained' using the results from CFD simulations and then used directly within flowsheets. Unit operation models (both CFD and ROMs) can be uploaded to a model database and shared between multiple users.
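A hedged sketch of the ROM idea: fit a cheap surrogate to input/output samples harvested from CFD runs, then evaluate it inside the flowsheet loop. The regressor and variable names are stand-ins, not APECS' actual ROM forms:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor  # stand-in surrogate form

# Hypothetical training set: (input, output) pairs from expensive CFD runs.
rng = np.random.default_rng(0)
X = rng.uniform([0.5, 300.0], [2.0, 600.0], size=(200, 2))      # e.g., fuel flow, inlet T
y = 0.8 * X[:, 0] * np.log(X[:, 1]) + rng.normal(0, 0.01, 200)  # surrogate "CFD" response

rom = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(rom.predict([[1.2, 450.0]]))  # milliseconds per evaluation instead of hours
```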
Fisz, Jacek J
2006-12-07
An optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and the linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably, because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions that are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions that are multi-linear combinations of nonlinear functions.
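The separable structure is easy to sketch: an evolutionary outer loop over the nonlinear lifetimes, with the linear amplitudes recovered by least squares at every evaluation. A minimal illustration on a synthetic biexponential decay (a toy GA, not the authors' implementation):

```python
import numpy as np

# GA-MLR-style separable fit: the GA-like outer loop searches the nonlinear
# lifetimes (tau1, tau2); the linear amplitudes are recovered exactly by
# linear least squares inside the objective.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
y = 2.0 * np.exp(-t / 0.8) + 0.7 * np.exp(-t / 3.5) + rng.normal(0, 0.01, t.size)

def chi2(taus):
    A = np.exp(-t[:, None] / taus[None, :])        # design matrix of nonlinear basis functions
    amps, *_ = np.linalg.lstsq(A, y, rcond=None)   # MLR step: amplitudes in closed form
    return np.sum((y - A @ amps) ** 2)

pop = rng.uniform(0.1, 10.0, size=(40, 2))         # initial population of (tau1, tau2)
for gen in range(60):
    fit = np.array([chi2(p) for p in pop])
    parents = pop[np.argsort(fit)[:10]]            # truncation selection
    children = parents[rng.integers(0, 10, 30)] * rng.lognormal(0, 0.1, (30, 2))  # mutation
    pop = np.vstack([parents, children])
best = pop[np.argmin([chi2(p) for p in pop])]
print("recovered lifetimes:", np.sort(best))
```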
Gravitational Wave Signals from the First Massive Black Hole Seeds
NASA Astrophysics Data System (ADS)
Hartwig, Tilman; Agarwal, Bhaskar; Regan, John A.
2018-05-01
Recent numerical simulations reveal that the isothermal collapse of pristine gas in atomic cooling haloes may result in stellar binaries of supermassive stars with M* ≳ 10^4 M⊙. For the first time, we compute the in-situ merger rate for such massive black hole remnants by combining their abundance and multiplicity estimates. For black holes with initial masses in the range 10^4-10^6 M⊙ merging at redshifts z ≳ 15, our optimistic model predicts that LISA should be able to detect 0.6 mergers per year. This rate of detection can be attributed, without confusion, to the in-situ mergers of seeds from the collapse of very massive stars. Equally, in the case where LISA observes no mergers from heavy seeds at z ≳ 15, we can constrain the combined number density, multiplicity, and coalescence times of these high-redshift systems. This letter proposes gravitational wave signatures as a means to constrain theoretical models and processes that govern the abundance of massive black hole seeds in the early Universe.
Multiple site receptor modeling with a minimal spanning tree combined with a Kohonen neural network
NASA Astrophysics Data System (ADS)
Hopke, Philip K.
1999-12-01
A combination of two pattern recognition methods has been developed that allows the generation of geographical emission maps from multivariate environmental data. In such a projection into a visually interpretable subspace by a Kohonen Self-Organizing Feature Map, the topology of the higher-dimensional variable space can be preserved, but part of the information about the correct neighborhood among the sample vectors will be lost. This can partly be compensated for by an additional projection of Prim's Minimal Spanning Tree onto the trained neural network. This new environmental receptor modeling technique has been adapted for multiple sampling sites. The behavior of the method has been studied using simulated data. Subsequently, the method has been applied to mapping data sets from the Southern California Air Quality Study. The projection of 17 chemical variables measured at up to 8 sampling sites provided a 2D, visually interpretable, geometrically reasonable arrangement of air pollution sources in the South Coast Air Basin.
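A hedged sketch of the two combined steps using generic tools: a small hand-rolled Kohonen map plus SciPy's minimum spanning tree, with MST edges read off against each sample's best-matching map node (grid size and data are invented):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

# (1) Train a small Kohonen self-organizing map on multivariate samples;
# (2) compute a Prim-style minimal spanning tree over the samples and
#     project its edges onto the trained map.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 17))                  # e.g., 17 chemical variables per sample

grid = np.array([(i, j) for i in range(6) for j in range(6)], float)  # 6x6 map
W = rng.normal(size=(36, 17))                  # codebook vectors
for step in range(2000):                       # online SOM training
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(1))     # best-matching unit
    lr = 0.5 * np.exp(-step / 1000)
    sigma = 3.0 * np.exp(-step / 1000)
    h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)

mst = minimum_spanning_tree(squareform(pdist(X))).tocoo()   # MST over the samples
bmus = np.array([np.argmin(((W - x) ** 2).sum(1)) for x in X])
for i, j in zip(mst.row, mst.col):             # project MST edges onto the map
    print(f"sample {i} (node {bmus[i]}) -- sample {j} (node {bmus[j]})")
```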
A Flexible Approach for the Statistical Visualization of Ensemble Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, K.; Wilson, A.; Bremer, P.
2009-09-29
Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.
NASA Astrophysics Data System (ADS)
Bhakat, Soumendranath; Åberg, Emil; Söderhjelm, Pär
2018-01-01
Advanced molecular docking methods often aim at capturing the flexibility of the protein upon binding to the ligand. In this study, we investigate whether a simple rigid docking method can instead be applied, if combined with multiple target structures to model the backbone flexibility and molecular dynamics simulations to model the sidechain and ligand flexibility. The methods are tested for the binding of 35 ligands to FXR as part of the first stage of the Drug Design Data Resource (D3R) Grand Challenge 2 blind challenge. The results show that the multiple-target docking protocol performs surprisingly well, with correct poses found for 21 of the ligands. MD simulations started on the docked structures are remarkably stable, but show almost no tendency to refine the structure closer to the experimentally found binding pose. Reconnaissance metadynamics enhances the exploration of new binding poses, but additional collective variables involving the protein are needed to exploit the full potential of the method.
NASA Astrophysics Data System (ADS)
Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.
2015-12-01
We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms, including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms differ from most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early, prior to completely simulating the model calibration period for example, when intermediate results indicate the candidate solution is so poor that it will definitely have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems, from multi-core desktops to a supercomputer system) and package them for future modellers within a model-independent calibration software package called Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
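For reference, the serial DDS candidate-generation rule (Tolson and Shoemaker, 2007) that this work parallelizes is compact enough to sketch; the asynchronous parallel dispatch and model pre-emption layers are not shown, and bound handling is simplified to clipping:

```python
import numpy as np

def dds(objective, lo, hi, max_iters=1000, r=0.2, seed=0):
    """Serial Dynamically Dimensioned Search over box bounds [lo, hi]."""
    rng = np.random.default_rng(seed)
    x_best = rng.uniform(lo, hi)
    f_best = objective(x_best)
    for i in range(1, max_iters):
        p = 1.0 - np.log(i) / np.log(max_iters)    # probability a variable is perturbed
        mask = rng.random(lo.size) < p
        if not mask.any():
            mask[rng.integers(lo.size)] = True     # always perturb at least one variable
        x = x_best.copy()
        x[mask] += r * (hi[mask] - lo[mask]) * rng.normal(size=mask.sum())
        x = np.clip(x, lo, hi)                     # (DDS proper reflects at the bounds)
        f = objective(x)
        if f < f_best:
            x_best, f_best = x, f
    return x_best, f_best

# Usage: minimize a toy calibration objective in 10 dimensions
lo, hi = np.full(10, -5.0), np.full(10, 5.0)
print(dds(lambda x: np.sum(x ** 2), lo, hi))
```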
Nonlinear computations shaping temporal processing of precortical vision.
Butts, Daniel A; Cui, Yuwei; Casti, Alexander R R
2016-09-01
Computations performed by the visual pathway are constructed by neural circuits distributed over multiple stages of processing, and thus it is challenging to determine how different stages contribute on the basis of recordings from single areas. In the current article, we address this problem in the lateral geniculate nucleus (LGN), using experiments combined with nonlinear modeling capable of isolating various circuit contributions. We recorded cat LGN neurons presented with temporally modulated spots of various sizes, which drove temporally precise LGN responses. We utilized simultaneously recorded S-potentials, corresponding to the primary retinal ganglion cell (RGC) input to each LGN cell, to distinguish the computations underlying temporal precision in the retina from those in the LGN. Nonlinear models with excitatory and delayed suppressive terms were sufficient to explain temporal precision in the LGN, and we found that models of the S-potentials were nearly identical, although with a lower threshold. To determine whether additional influences shaped the response at the level of the LGN, we extended this model to use the S-potential input in combination with stimulus-driven terms to predict the LGN response. We found that the S-potential input "explained away" the major excitatory and delayed suppressive terms responsible for temporal patterning of LGN spike trains but revealed additional contributions, largely PULL suppression, to the LGN response. Using this novel combination of recordings and modeling, we were thus able to dissect multiple circuit contributions to LGN temporal responses across retina and LGN, and set the foundation for targeted study of each stage. Copyright © 2016 the American Physiological Society.
Kuc, S; Koster, M P; Franx, A; Schielen, P C; Visser, G H
2012-07-01
In a previous study, we described the predictive value of first-trimester pregnancy-associated plasma protein-A (PAPP-A), free beta-subunit of human chorionic gonadotrophin (fb-hCG), Placental Growth Factor (PlGF) and A Disintegrin And Metalloproteinase 12 (ADAM12) for early onset preeclampsia (delivery <34 weeks) [1]. The objective of the current study was to obtain the predictive value of these serum markers, for both early onset PE (EOPE) and late onset PE (LOPE), combined with maternal characteristics and first-trimester maternal mean arterial blood pressure (MAP). This was a nested case-control study, using stored first-trimester maternal serum from 167 women who subsequently developed PE, and 500 uncomplicated singleton pregnancies which resulted in a live birth at ≥37 weeks. Maternal characteristics (i.e., medical history, parity, weight, height), MAP and pregnancy outcome (i.e., gestational age at delivery, birthweight, fetal sex) were collected for each individual and used to calculate prior risks for PE in a multiple logistic regression model. MAP values and marker levels of PAPP-A, fb-hCG, PlGF and ADAM12 were expressed as multiples of the gestation-specific normal median (MoMs). Subsequently, MoMs were log-transformed and compared between PE and controls using Student's t-tests. Posterior risks were calculated using different combinations of variables: (1) maternal characteristics, serum markers, and MAP separately; (2) maternal characteristics combined with serum markers or MAP; (3) maternal characteristics combined with serum markers and MAP. The model-predicted detection rates (DR) for fixed 10% false-positive rates were obtained for EOPE and LOPE with or without intra-uterine growth restriction (IUGR, birth weight <10th centile). The maternal characteristics maternal age, weight, height, smoking status and nulliparity were discriminative between PE and control groups and were therefore incorporated in the multiple logistic regression model. MoM MAP was significantly elevated (1.10, p<0.001; 1.07, p<0.001) and MoM PlGF was significantly reduced (0.95, p=0.016; 0.90, p=0.029) in the EOPE and LOPE groups, respectively. The differences in markers for IUGR groups were larger. The estimated DRs of the three different models are presented in the table. This study demonstrates that first-trimester MAP and PlGF combined with maternal characteristics are promising markers in risk assessment for PE. Combination of markers proved especially useful for risk assessment for term PE. Detection rates were higher in the presence of IUGR. Copyright © 2012. Published by Elsevier B.V.
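A hedged sketch of the risk-model mechanics on synthetic data: log-transformed marker MoMs and maternal characteristics enter one logistic model, and the detection rate is read off at a fixed 10% false-positive rate (all coefficients and values invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 667
age, weight = rng.normal(30, 5, n), rng.normal(70, 12, n)     # maternal characteristics
log_map, log_plgf = rng.normal(0, 0.05, n), rng.normal(0, 0.15, n)  # log MoMs
risk = -4 + 0.05 * (age - 30) + 12 * log_map - 3 * log_plgf
pe = rng.random(n) < 1 / (1 + np.exp(-risk))                  # synthetic PE outcomes

X = np.column_stack([age, weight, log_map, log_plgf])
model = LogisticRegression(max_iter=1000).fit(X, pe)
scores = model.predict_proba(X)[:, 1]
cut = np.quantile(scores[~pe], 0.90)                          # threshold at 10% FPR in controls
print("detection rate at 10% FPR:", (scores[pe] >= cut).mean())
```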
An integrative formal model of motivation and decision making: The MGPM*.
Ballard, Timothy; Yeo, Gillian; Loft, Shayne; Vancouver, Jeffrey B; Neal, Andrew
2016-09-01
We develop and test an integrative formal model of motivation and decision making. The model, referred to as the extended multiple-goal pursuit model (MGPM*), is an integration of the multiple-goal pursuit model (Vancouver, Weinhardt, & Schmidt, 2010) and decision field theory (Busemeyer & Townsend, 1993). Simulations of the model generated predictions regarding the effects of goal type (approach vs. avoidance), risk, and time sensitivity on prioritization. We tested these predictions in an experiment in which participants pursued different combinations of approach and avoidance goals under different levels of risk. The empirical results were consistent with the predictions of the MGPM*. Specifically, participants pursuing 1 approach and 1 avoidance goal shifted priority from the approach to the avoidance goal over time. Among participants pursuing 2 approach goals, those with low time sensitivity prioritized the goal with the larger discrepancy, whereas those with high time sensitivity prioritized the goal with the smaller discrepancy. Participants pursuing 2 avoidance goals generally prioritized the goal with the smaller discrepancy. Finally, all of these effects became weaker as the level of risk increased. We used quantitative model comparison to show that the MGPM* explained the data better than the original multiple-goal pursuit model, and that the major extensions from the original model were justified. The MGPM* represents a step forward in the development of a general theory of decision making during multiple-goal pursuit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Mi, Zhibao; Novitzky, Dimitri; Collins, Joseph F; Cooper, David KC
2015-01-01
The management of brain-dead organ donors is complex. The use of inotropic agents and replacement of depleted hormones (hormonal replacement therapy) is crucial for successful multiple organ procurement, yet the optimal hormonal replacement has not been identified, and the statistical adjustment to determine the best selection is not trivial. Traditional pairwise comparisons between every pair of treatments, and multiple comparisons to all (MCA), are statistically conservative. Hsu's multiple comparisons with the best (MCB) – adapted from Dunnett's multiple comparisons with control (MCC) – has been used for selecting the best treatment based on continuous variables. We selected the best hormonal replacement modality for successful multiple organ procurement using a two-step approach. First, we estimated the predicted margins by constructing generalized linear models (GLM) or generalized linear mixed models (GLMM), and then we applied the multiple comparison methods to identify the best hormonal replacement modality, given that the testing of hormonal replacement modalities is independent. Based on 10-year data from the United Network for Organ Sharing (UNOS), among 16 hormonal replacement modalities, and using the 95% simultaneous confidence intervals, we found that the combination of thyroid hormone, a corticosteroid, antidiuretic hormone, and insulin was the best modality for multiple organ procurement for transplantation. PMID:25565890
A Combined IRT and SEM Approach for Individual-Level Assessment in Test-Retest Studies
ERIC Educational Resources Information Center
Ferrando, Pere J.
2015-01-01
The standard two-wave multiple-indicator model (2WMIM) commonly used to analyze test-retest data provides information at both the group and item level. Furthermore, when applied to binary and graded item responses, it is related to well-known item response theory (IRT) models. In this article the IRT-2WMIM relations are used to obtain additional…
Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications
2013-01-01
Background Time-frequency analysis of the electroencephalogram (EEG) during different mental tasks has received significant attention. As EEG is non-stationary, time-frequency analysis is essential to analyze brain states during different mental tasks. Further, the time-frequency information of the EEG signal can be used as a feature for classification in brain-computer interface (BCI) applications. Methods To accurately model the EEG, a band-limited multiple Fourier linear combiner (BMFLC), a linear combination of truncated multiple Fourier series models, is employed. A state-space model for BMFLC in combination with a Kalman filter/smoother is developed to obtain accurate adaptive estimation. By construction, BMFLC with a Kalman filter/smoother provides an accurate time-frequency decomposition of the band-limited signal. Results The proposed method is computationally fast and is suitable for real-time BCI applications. To evaluate the proposed algorithm, a comparison with the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT) for both synthesized and real EEG data is performed in this paper. The proposed method is applied to BCI Competition data IV for ERD detection in comparison with existing methods. Conclusions Results show that the proposed algorithm can provide optimal time-frequency resolution as compared to STFT and CWT. For ERD detection, BMFLC-KF outperforms STFT and BMFLC-KS in real-time applicability with low computational requirements. PMID:24274109
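A minimal sketch of BMFLC with a Kalman filter, assuming a random-walk state of Fourier weights on a fixed 8-12 Hz grid (noise settings arbitrary; the paper's exact state-space tuning may differ):

```python
import numpy as np

# BMFLC models the band-limited signal as a weighted sum of sines/cosines on
# a fixed frequency grid; the weights are tracked as a random-walk state.
fs, band = 256.0, np.arange(8.0, 12.5, 0.5)    # sampling rate, 8-12 Hz grid
q, r = 1e-4, 1e-2                              # process / measurement noise (arbitrary)
n_w = 2 * band.size
P, weights = np.eye(n_w), np.zeros(n_w)

t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) * np.hanning(t.size) \
    + 0.1 * np.random.default_rng(0).normal(size=t.size)   # synthetic alpha burst

amps = []
for k, y in enumerate(eeg):
    h = np.r_[np.sin(2 * np.pi * band * t[k]), np.cos(2 * np.pi * band * t[k])]
    P += q * np.eye(n_w)                       # predict (random-walk state)
    K = P @ h / (h @ P @ h + r)                # Kalman gain
    weights += K * (y - h @ weights)           # update Fourier weights
    P -= np.outer(K, h) @ P
    amps.append(np.hypot(weights[:band.size], weights[band.size:]))

print(np.array(amps).shape)  # (time, frequency) amplitude decomposition of the band
```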
NASA Astrophysics Data System (ADS)
Noi Phan, Thanh; Kappas, Martin; Degener, Jan
2017-04-01
Land air temperature (Ta) with high spatial and temporal resolution plays an important role in various applications, such as crop growth monitoring and simulation, environmental risk models, weather forecasting, land use/cover change, and urban heat islands. Daily Ta (including Ta-max, Ta-min, and Ta-mean) is usually measured by weather stations (often at 2 m above the ground); thus, Ta is limited in spatial coverage. Satellite data, especially MODIS land surface temperature (LST) data at 1 km spatial and high temporal resolution (four times per day, combining TERRA and AQUA), are freely available and easy to access. However, there is a difference between Ta and LST because of the complex surface energy budget and the multiple related variables between them. Several studies state that Ta can be estimated from MODIS LST data with an accuracy of 2-4 °C. However, only a handful of studies have dynamically combined the four daily MODIS LST observations for Ta estimation. In this study, we evaluated all 15 possible combinations of the four MODIS LSTs using support vector machine (SVM) and random forests (RFs) models. MODIS LST and Ta data were extracted from 4 weather stations in a rural area of northwest Vietnam from 2010 to 2012 (three years). Our results indicate that the accuracy of Ta estimation was affected by the different combinations and that combined data (multiple variables) gave better results than a single LST (sole variable); the best result was achieved (coefficient of determination (R2) = 0.95, 0.97, 0.97; root mean square error (RMSE) = 1.7, 1.4, 1.2 °C for Ta-min, Ta-max, Ta-mean, respectively) when all four LSTs were combined, and RFs performed better than SVM.
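A hedged sketch of the combined-LST regression on synthetic data (the real study trains on station records; the RF settings here are arbitrary):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Estimate daily mean air temperature from the four daily MODIS LST
# observations (Terra/Aqua, day/night) with a random forest.
rng = np.random.default_rng(0)
n = 1000
lst = rng.normal(25, 8, size=(n, 4))                        # 4 LST overpasses per day
ta_mean = 0.9 * lst.mean(1) - 1.5 + rng.normal(0, 1.2, n)   # synthetic station Ta

rf = RandomForestRegressor(n_estimators=200, random_state=0)
r2 = cross_val_score(rf, lst, ta_mean, cv=5, scoring="r2")
print("cross-validated R2:", r2.mean())
```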
Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks
2016-01-01
Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial, are the input for the method. This marginal-likelihood-based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
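As a simplified illustration of the pooling idea (the paper uses a bivariate marginal-likelihood model; here each path is pooled separately with DerSimonian-Laird random effects, and the mediated-effect interval is Monte Carlo):

```python
import numpy as np

def random_effects_pool(est, se):
    """DerSimonian-Laird random-effects pooling of one coefficient across trials."""
    w = 1 / se ** 2
    mu_fixed = np.sum(w * est) / w.sum()
    q = np.sum(w * (est - mu_fixed) ** 2)
    tau2 = max(0.0, (q - (len(est) - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))
    w_star = 1 / (se ** 2 + tau2)
    return np.sum(w_star * est) / w_star.sum(), np.sqrt(1 / w_star.sum())

# Invented per-trial path coefficients and standard errors
a, se_a = np.array([0.30, 0.25, 0.40]), np.array([0.05, 0.06, 0.07])
b, se_b = np.array([0.50, 0.45, 0.60]), np.array([0.08, 0.09, 0.10])

mu_a, s_a = random_effects_pool(a, se_a)
mu_b, s_b = random_effects_pool(b, se_b)
rng = np.random.default_rng(0)
ab = rng.normal(mu_a, s_a, 100000) * rng.normal(mu_b, s_b, 100000)  # Monte Carlo a*b
print("mediated effect:", mu_a * mu_b, "95% MC CI:", np.percentile(ab, [2.5, 97.5]))
```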
Chiu, Chi-yang; Jung, Jeesun; Wang, Yifan; Weeks, Daniel E.; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Amos, Christopher I.; Mills, James L.; Boehnke, Michael; Xiong, Momiao; Fan, Ruzong
2016-01-01
In this paper, extensive simulations are performed to compare two statistical methods to analyze multiple correlated quantitative phenotypes: (1) approximate F-distributed tests of multivariate functional linear models (MFLM) and additive models of multivariate analysis of variance (MANOVA), and (2) Gene Association with Multiple Traits (GAMuT) for association testing of high-dimensional genotype data. It is shown that approximate F-distributed tests of MFLM and MANOVA have higher power and are more appropriate for major gene association analysis (i.e., scenarios in which some genetic variants have relatively large effects on the phenotypes); GAMuT has higher power and is more appropriate for analyzing polygenic effects (i.e., effects from a large number of genetic variants each of which contributes a small amount to the phenotypes). MFLM and MANOVA are very flexible and can be used to perform association analysis for: (i) rare variants, (ii) common variants, and (iii) a combination of rare and common variants. Although GAMuT was designed to analyze rare variants, it can be applied to analyze a combination of rare and common variants and it performs well when (1) the number of genetic variants is large and (2) each variant contributes a small amount to the phenotypes (i.e., polygenes). MFLM and MANOVA are fixed effect models which perform well for major gene association analysis. GAMuT can be viewed as an extension of sequence kernel association tests (SKAT). Both GAMuT and SKAT are more appropriate for analyzing polygenic effects and they perform well not only in the rare variant case, but also in the case of a combination of rare and common variants. Data analyses of European cohorts and the Trinity Students Study are presented to compare the performance of the two methods. PMID:27917525
Patient-derived models of acquired resistance can identify effective drug combinations for cancer.
Crystal, Adam S; Shaw, Alice T; Sequist, Lecia V; Friboulet, Luc; Niederst, Matthew J; Lockerman, Elizabeth L; Frias, Rosa L; Gainor, Justin F; Amzallag, Arnaud; Greninger, Patricia; Lee, Dana; Kalsy, Anuj; Gomez-Caraballo, Maria; Elamine, Leila; Howe, Emily; Hur, Wooyoung; Lifshits, Eugene; Robinson, Hayley E; Katayama, Ryohei; Faber, Anthony C; Awad, Mark M; Ramaswamy, Sridhar; Mino-Kenudson, Mari; Iafrate, A John; Benes, Cyril H; Engelman, Jeffrey A
2014-12-19
Targeted cancer therapies have produced substantial clinical responses, but most tumors develop resistance to these drugs. Here, we describe a pharmacogenomic platform that facilitates rapid discovery of drug combinations that can overcome resistance. We established cell culture models derived from biopsy samples of lung cancer patients whose disease had progressed while on treatment with epidermal growth factor receptor (EGFR) or anaplastic lymphoma kinase (ALK) tyrosine kinase inhibitors and then subjected these cells to genetic analyses and a pharmacological screen. Multiple effective drug combinations were identified. For example, the combination of ALK and MAPK kinase (MEK) inhibitors was active in an ALK-positive resistant tumor that had developed a MAP2K1 activating mutation, and the combination of EGFR and fibroblast growth factor receptor (FGFR) inhibitors was active in an EGFR mutant resistant cancer with a mutation in FGFR3. Combined ALK and SRC (pp60c-src) inhibition was effective in several ALK-driven patient-derived models, a result not predicted by genetic analysis alone. With further refinements, this strategy could help direct therapeutic choices for individual patients. Copyright © 2014, American Association for the Advancement of Science.
Model-Based Localization and Tracking Using Bluetooth Low-Energy Beacons.
Daniş, F Serhan; Cemgil, Ali Taylan
2017-10-29
We introduce a high precision localization and tracking method that makes use of cheap Bluetooth low-energy (BLE) beacons only. We track the position of a moving sensor by integrating highly unreliable and noisy BLE observations streaming from multiple locations. A novel aspect of our approach is the development of an observation model, specifically tailored for received signal strength indicator (RSSI) fingerprints: a combination based on the optimal transport model of Wasserstein distance. The tracking results of the entire system are compared with alternative baseline estimation methods, such as nearest neighboring fingerprints and an artificial neural network. Our results show that highly accurate estimation from noisy Bluetooth data is practically feasible with an observation model based on Wasserstein distance interpolation combined with the sequential Monte Carlo (SMC) method for tracking.
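A hedged sketch of the observation model and one SMC iteration: particles are weighted by the Wasserstein distance between the live RSSI window and the stored fingerprint distribution of the particle's grid cell (grid, spreads and the exponential weighting are invented):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
fingerprints = {(x, y): rng.normal(-60 - 3 * (x + y), 4, 50)   # RSSI (dB) samples per cell
                for x in range(5) for y in range(5)}

def likelihood(particle, rssi_window, beta=0.5):
    cell = (int(np.clip(particle[0], 0, 4)), int(np.clip(particle[1], 0, 4)))
    d = wasserstein_distance(rssi_window, fingerprints[cell])
    return np.exp(-beta * d)                  # smaller distance => higher weight

# One SMC (particle filter) iteration: predict, weight, resample
particles = rng.uniform(0, 4, size=(500, 2))
observed = rng.normal(-66, 4, 20)             # incoming RSSI burst
particles += rng.normal(0, 0.1, particles.shape)          # motion model
w = np.array([likelihood(p, observed) for p in particles])
w /= w.sum()
particles = particles[rng.choice(500, 500, p=w)]           # multinomial resampling
print("position estimate:", particles.mean(0))
```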
Merging information from multi-model flood projections in a hierarchical Bayesian framework
NASA Astrophysics Data System (ADS)
Le Vine, Nataliya
2016-04-01
Multi-model ensembles are becoming widely accepted for flood frequency change analysis. The use of multiple models results in large uncertainty around estimates of flood magnitudes, due to both uncertainty in model selection and natural variability of river flow. The challenge is therefore to extract the most meaningful signal from the multi-model predictions, accounting for both model quality and uncertainties in individual model estimates. The study demonstrates the potential of a recently proposed hierarchical Bayesian approach to combine information from multiple models. The approach facilitates explicit treatment of shared multi-model discrepancy as well as the probabilistic nature of the flood estimates, by treating the available models as a sample from a hypothetical complete (but unobserved) set of models. The advantages of the approach are: 1) to ensure adequate 'baseline' conditions with which to compare future changes; 2) to reduce flood estimate uncertainty; 3) to maximize use of statistical information in circumstances where multiple weak predictions individually lack power, but collectively provide meaningful information; 4) to adjust multi-model consistency criteria when model biases are large; and 5) to explicitly consider the influence of the (model performance) stationarity assumption. Moreover, the analysis indicates that reducing shared model discrepancy is the key to further reduction of uncertainty in the flood frequency analysis. The findings are of value regarding how conclusions about changing exposure to flooding are drawn, and to flood frequency change attribution studies.
Swift, Brenna E; Williams, Brent A; Kosaka, Yoko; Wang, Xing-Hua; Medin, Jeffrey A; Viswanathan, Sowmya; Martinez-Lopez, Joaquin; Keating, Armand
2012-07-01
Novel therapies capable of targeting drug resistant clonogenic MM cells are required for more effective treatment of multiple myeloma. This study investigates the cytotoxicity of natural killer cell lines against bulk and clonogenic multiple myeloma and evaluates the tumor burden after NK cell therapy in a bioluminescent xenograft mouse model. The cytotoxicity of natural killer cell lines was evaluated against bulk multiple myeloma cell lines using chromium release and flow cytometry cytotoxicity assays. Selected activating receptors on natural killer cells were blocked to determine their role in multiple myeloma recognition. Growth inhibition of clonogenic multiple myeloma cells was assessed in a methylcellulose clonogenic assay in combination with secondary replating to evaluate the self-renewal of residual progenitors after natural killer cell treatment. A bioluminescent mouse model was developed using the human U266 cell line transduced to express green fluorescent protein and luciferase (U266eGFPluc) to monitor disease progression in vivo and assess bone marrow engraftment after intravenous NK-92 cell therapy. Three multiple myeloma cell lines were sensitive to NK-92 and KHYG-1 cytotoxicity mediated by NKp30, NKp46, NKG2D and DNAM-1 activating receptors. NK-92 and KHYG-1 demonstrated 2- to 3-fold greater inhibition of clonogenic multiple myeloma growth, compared with killing of the bulk tumor population. In addition, the residual colonies after treatment formed significantly fewer colonies compared to the control in a secondary replating for a cumulative clonogenic inhibition of 89-99% at the 20:1 effector to target ratio. Multiple myeloma tumor burden was reduced by NK-92 in a xenograft mouse model as measured by bioluminescence imaging and reduction in bone marrow engraftment of U266eGFPluc cells by flow cytometry. This study demonstrates that NK-92 and KHYG-1 are capable of killing clonogenic and bulk multiple myeloma cells. In addition, multiple myeloma tumor burden in a xenograft mouse model was reduced by intravenous NK-92 cell therapy. Since multiple myeloma colony frequency correlates with survival, our observations have important clinical implications and suggest that clinical studies of NK cell lines to treat MM are warranted.
A regenerative approach to the treatment of multiple sclerosis.
Deshmukh, Vishal A; Tardif, Virginie; Lyssiotis, Costas A; Green, Chelsea C; Kerman, Bilal; Kim, Hyung Joon; Padmanabhan, Krishnan; Swoboda, Jonathan G; Ahmad, Insha; Kondo, Toru; Gage, Fred H; Theofilopoulos, Argyrios N; Lawson, Brian R; Schultz, Peter G; Lairson, Luke L
2013-10-17
Progressive phases of multiple sclerosis are associated with inhibited differentiation of the progenitor cell population that generates the mature oligodendrocytes required for remyelination and disease remission. To identify selective inducers of oligodendrocyte differentiation, we performed an image-based screen for myelin basic protein (MBP) expression using primary rat optic-nerve-derived progenitor cells. Here we show that among the most effective compounds identified was benztropine, which significantly decreases clinical severity in the experimental autoimmune encephalomyelitis (EAE) model of relapsing-remitting multiple sclerosis when administered alone or in combination with approved immunosuppressive treatments for multiple sclerosis. Evidence from a cuprizone-induced model of demyelination, in vitro and in vivo T-cell assays and EAE adoptive transfer experiments indicated that the observed efficacy of this drug results directly from an enhancement of remyelination rather than immune suppression. Pharmacological studies indicate that benztropine functions by a mechanism that involves direct antagonism of M1 and/or M3 muscarinic receptors. These studies should facilitate the development of effective new therapies for the treatment of multiple sclerosis that complement established immunosuppressive approaches.
NASA Astrophysics Data System (ADS)
Liu, W. L.; Li, Y. W.
2017-09-01
Large-scale dimensional metrology usually requires a combination of multiple measurement systems, such as laser tracking, total station, laser scanning, coordinate measuring arm and video photogrammetry. Often, the results from different measurement systems must be combined to provide useful results. The coordinate transformation is used to unify coordinate frames in such combinations; however, coordinate transformation uncertainties directly affect the accuracy of the final measurement results. In this paper, a novel method is proposed for improving the accuracy of coordinate transformation, combining the advantages of best-fit least squares and radial basis function (RBF) neural networks. First, the configuration of the coordinate transformation is introduced and a transformation matrix containing seven variables is obtained. Second, the 3D uncertainty of the transformation model and the residual error variable vector are established based on best-fit least squares. Finally, in order to optimize the uncertainty of the developed seven-variable transformation model, we use the RBF neural network to identify the dynamic, unstructured residual errors, owing to its ability to approximate any nonlinear function to the designed accuracy. Intensive experimental studies were conducted to check the validity of the theoretical results. The results show that the mean error of the coordinate transformation decreased from 0.078 mm to 0.054 mm after using this method, in contrast with the GUM method.
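A hedged sketch of the two-stage structure: a closed-form least-squares similarity transform (seven parameters: scale, rotation, translation) via the Umeyama solution, followed by an RBF model of the residuals as a stand-in for the paper's RBF neural network:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def umeyama(src, dst):
    """Closed-form least-squares similarity transform: dst ~ s * R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    cov = (dst - mu_d).T @ (src - mu_s) / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    S[2, 2] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # guard against reflection
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / ((src - mu_s) ** 2).sum(1).mean()
    return s, R, mu_d - s * R @ mu_s

rng = np.random.default_rng(0)
src = rng.uniform(-1, 1, (100, 3))                          # common points, frame A
dst = 1.02 * src + [0.5, -0.2, 0.1] + rng.normal(0, 1e-3, (100, 3))  # frame B

s, R, t = umeyama(src, dst)
pred = s * src @ R.T + t
rbf = RBFInterpolator(src, dst - pred)                      # stage 2: residual model
corrected = pred + rbf(src)
print("rms before/after RBF:", np.linalg.norm(dst - pred, axis=1).mean(),
      np.linalg.norm(dst - corrected, axis=1).mean())
```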
Oscillations and Multiple Equilibria in Microvascular Blood Flow.
Karst, Nathaniel J; Storey, Brian D; Geddes, John B
2015-07-01
We investigate the existence of oscillatory dynamics and multiple steady-state flow rates in a network with a simple topology and in vivo microvascular blood flow constitutive laws. Unlike many previous analytic studies, we employ the most biologically relevant models of the physical properties of whole blood. Through a combination of analytic and numeric techniques, we predict in a series of two-parameter bifurcation diagrams a range of dynamical behaviors, including multiple equilibrium flow configurations, simple oscillations in volumetric flow rate, and multiple coexistent limit cycles at physically realizable parameters. We show that complexity in network topology is not necessary for complex behaviors to arise and that nonlinear rheology, in particular the plasma skimming effect, is sufficient to support oscillatory dynamics similar to those observed in vivo.
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied for the estimation of lithological structure of the crust, with the lithotype body regions conditioning the mass density and magnetic susceptibility fields. At planetary scale, the Earth mantle temperature and element composition are inferred from seismic travel-time and geodetic data.
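A minimal formula sketch of the factorizations described above, assuming independent survey uncertainties and a two-layer hierarchy in which lithology ℓ conditions physical properties p:

```latex
\[
\sigma(\ell, p \mid d_1,\dots,d_K) \;\propto\;
\underbrace{\rho(\ell)\,\rho(p \mid \ell)}_{\text{hierarchical prior}}
\;\prod_{k=1}^{K} L_k\!\left(d_k \mid p\right),
\]
```

where each likelihood factor L_k corresponds to one survey's data set d_k.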
Equivalent ZF precoding scheme for downlink indoor MU-MIMO VLC systems
NASA Astrophysics Data System (ADS)
Fan, YangYu; Zhao, Qiong; Kang, BoChao; Deng, LiJun
2018-01-01
In indoor visible light communication (VLC) systems, the channels of photo detectors (PDs) at one user are highly correlated, which determines the choice of a spatial diversity model for individual users. In a spatial diversity model, the signals received by PDs belonging to one user carry the same information, and can be combined directly. Based on the above, we propose an equivalent zero-forcing (ZF) precoding scheme for multiple-user multiple-input multiple-output (MU-MIMO) VLC systems by transforming an indoor MU-MIMO VLC system into an indoor multiple-user multiple-input single-output (MU-MISO) VLC system through simple processing. The power constraints of light emitting diodes (LEDs) are also taken into account. Comprehensive computer simulations in three scenarios indicate that our scheme can not only reduce the computational complexity, but also guarantee the system performance. Furthermore, the proposed scheme does not require noise information in the calculation of the precoding weights, and has no restrictions on the numbers of APs and PDs.
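A minimal sketch of the ZF step on the equivalent MU-MISO channel (channel gains random for illustration; the LED amplitude constraint is reduced to a single normalization):

```python
import numpy as np

# Zero-forcing precoding: after per-user combining, H is (n_users x n_leds);
# the right pseudo-inverse makes each user see only its own stream.
rng = np.random.default_rng(0)
n_users, n_leds = 4, 8
H = rng.uniform(0.1, 1.0, size=(n_users, n_leds))   # equivalent VLC channel gains

W = H.T @ np.linalg.inv(H @ H.T)                    # ZF precoder: H @ W = I
W /= np.abs(W).sum(axis=1).max()                    # crude LED amplitude constraint

s = rng.integers(0, 2, n_users) * 2.0 - 1.0         # per-user symbols
x = W @ s                                           # LED drive signals
print("received (noiseless):", H @ x)               # scaled copy of s, no inter-user interference
```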
Chen, Jin; Venugopal, Vivek; Intes, Xavier
2011-01-01
Time-resolved fluorescence optical tomography allows 3-dimensional localization of multiple fluorophores based on lifetime contrast while providing a unique data set for improved resolution. However, to employ the full fluorescence time measurements, a light propagation model that accurately simulates weakly diffused and multiply scattered photons is required. In this article, we derive a computationally efficient Monte Carlo based method to compute time-gated fluorescence Jacobians for the simultaneous imaging of two fluorophores with lifetime contrast. The Monte Carlo based formulation is validated on a synthetic murine model simulating the uptake in the kidneys of two distinct fluorophores with lifetime contrast. Experimentally, the method is validated using capillaries filled with 2.5 nmol of ICG and IRDye™800CW, respectively, embedded in a diffuse medium mimicking the average optical properties of mice. Combining multiple time gates in one inverse problem allows the simultaneous reconstruction of multiple fluorophores with increased resolution and minimal crosstalk using the proposed formulation. PMID:21483610
Scheffe, Richard D; Strum, Madeleine; Phillips, Sharon B; Thurman, James; Eyth, Alison; Fudge, Steve; Morris, Mark; Palma, Ted; Cook, Richard
2016-11-15
A hybrid air quality model has been developed and applied to estimate annual concentrations of 40 hazardous air pollutants (HAPs) across the continental United States (CONUS) to support the 2011 calendar year National Air Toxics Assessment (NATA). By combining a chemical transport model (CTM) with a Gaussian dispersion model, both reactive and nonreactive HAPs are accommodated across local to regional spatial scales, through a multiplicative technique designed to improve mass conservation relative to previous additive methods. The broad multiple-pollutant scope, capturing regional- to local-scale patterns across a vast spatial domain, is precedent-setting within the air toxics community. The hybrid design exhibits improved performance relative to the stand-alone CTM and dispersion model. However, model performance varies widely across pollutant categories, and quantifiably definitive performance assessments are hampered by a limited observation base and challenged by the multiple physical and chemical attributes of HAPs. Formaldehyde and acetaldehyde are the dominant HAP concentration and cancer risk drivers, characterized by strong regional signals associated with naturally emitted carbonyl precursors enhanced in urban transport corridors with strong mobile source sector emissions. The multiple-pollutant emission characteristics of combustion-dominated source sectors create largely similar concentration patterns across the majority of HAPs. However, reactive carbonyls exhibit significantly less spatial variability relative to nonreactive HAPs across the CONUS.
NASA Astrophysics Data System (ADS)
Morales, Marco A.; Fernández-Cervantes, Irving; Agustín-Serrano, Ricardo; Anzo, Andrés; Sampedro, Mercedes P.
2016-08-01
A coarse-grained functional with short-range and long-range interactions is proposed. This functional accommodates models with dissipative dynamics of types A and B as well as the stochastic Swift-Hohenberg equation. Furthermore, terms associated with a multiplicative noise source are added to these models. The models are solved numerically using the fast Fourier transform method. The resulting spatio-temporal dynamics show similarity to pattern behaviour in ferrofluid phases subject to external fields (magnetic, electric and temperature), as well as to the nucleation and growth phenomena present in some solid dissolutions. The multiplicative noise effect on the dynamics reproduces microstructures formed during solid-phase changes in binary alloys of Pb-Sn, Fe-C and Cu-Ni, as well as in a NiAl-Cr(Mo) eutectic composite material. Model A for active particles with a non-potential term in the form of a quadratic gradient explains the formation of nanostructured particles of silver phosphate. These models show that the underlying mechanisms of pattern formation in all these systems depend on: (a) the dissipative dynamics; (b) the short-range and long-range interactions; and (c) the appropriate combination of quadratic and multiplicative noise terms.
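A hedged sketch of the numerical scheme named above: a semi-implicit pseudo-spectral (FFT) Euler-Maruyama step for the stochastic Swift-Hohenberg equation with a multiplicative noise term (parameters illustrative only):

```python
import numpy as np

# du/dt = [r - (1 + laplacian)^2] u - u^3 + eps * u * xi(t),
# linear operator treated implicitly in Fourier space, nonlinearity and
# multiplicative noise explicitly (Euler-Maruyama in time).
N, L, dt, r, eps = 128, 64.0, 0.1, 0.2, 0.05
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k, k)
lin = r - (1 - kx ** 2 - ky ** 2) ** 2          # Fourier symbol of r - (1 + lap)^2

rng = np.random.default_rng(0)
u = 0.01 * rng.standard_normal((N, N))
for step in range(2000):
    xi = rng.standard_normal((N, N))
    n_hat = np.fft.fft2(-u ** 3 * dt + eps * u * xi * np.sqrt(dt))
    u = np.real(np.fft.ifft2((np.fft.fft2(u) + n_hat) / (1 - dt * lin)))
print("pattern amplitude:", u.std())
```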
Software forecasting as it is really done: A study of JPL software engineers
NASA Technical Reports Server (NTRS)
Griesel, Martha Ann; Hihn, Jairus M.; Bruno, Kristin J.; Fouser, Thomas J.; Tausworthe, Robert C.
1993-01-01
This paper presents a summary of the results to date of a Jet Propulsion Laboratory internally funded research task to study the costing process and parameters used by internally recognized software cost estimating experts. Protocol Analysis and Markov process modeling were used to capture software engineers' forecasting mental models. While there is significant variation between the mental models that were studied, it was nevertheless possible to identify a core set of cost forecasting activities, and it was also found that the mental models cluster around three forecasting techniques. Further partitioning of the mental models revealed a clustering of activities that is very suggestive of a forecasting life cycle. The different forecasting methods identified were based on the use of multiple decomposition steps or multiple forecasting steps. The multiple forecasting steps involved either forecasting software size or an additional effort forecast. Virtually no subject used risk reduction steps in combination. The results of the analysis include: the identification of a core set of well-defined costing activities, a proposed software forecasting life cycle, and the identification of several basic software forecasting mental models. The paper concludes with a discussion of the implications of the results for current individual and institutional practices.
Multiple Goal Orientations and Foreign Language Anxiety
ERIC Educational Resources Information Center
Koul, Ravinder; Roy, Laura; Kaewkuekool, Sittichai; Ploisawaschai, Suthee
2009-01-01
This investigation examines Thai college students' motivational goals for learning the English language. Thai student volunteers (N = 1387) from two types of educational institutions participated in this survey study which combined measures of goal orientations based on two different goal constructs and motivation models. Results of two-step…
Interaction Analysis of Longevity Interventions Using Survival Curves.
Nowak, Stefan; Neidhart, Johannes; Szendro, Ivan G; Rzezonka, Jonas; Marathe, Rahul; Krug, Joachim
2018-01-06
A long-standing problem in ageing research is to understand how different factors contributing to longevity should be expected to act in combination under the assumption that they are independent. Standard interaction analysis compares the extension of mean lifespan achieved by a combination of interventions to the prediction under an additive or multiplicative null model, but neither model is fundamentally justified. Moreover, the target of longevity interventions is not mean life span but the entire survival curve. Here we formulate a mathematical approach for predicting the survival curve resulting from a combination of two independent interventions based on the survival curves of the individual treatments, and quantify interaction between interventions as the deviation from this prediction. We test the method on a published data set comprising survival curves for all combinations of four different longevity interventions in Caenorhabditis elegans. We find that interactions are generally weak even when the standard analysis indicates otherwise.
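The two standard null models are simple enough to state numerically; with invented mean lifespans they already disagree, which motivates working with the full survival curves instead:

```python
# Two standard null models for combining lifespan interventions A and B
# (numbers invented): additive combines absolute extensions, multiplicative
# combines fold-changes.
control, a_only, b_only = 20.0, 26.0, 25.0    # mean lifespans (days)

additive = control + (a_only - control) + (b_only - control)
multiplicative = control * (a_only / control) * (b_only / control)
print("additive null:", additive)             # 31.0 days
print("multiplicative null:", multiplicative) # 32.5 days
# The paper's point: neither null is fundamentally justified, and the proper
# target is the full survival curve S(t), not the mean lifespan alone.
```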
NASA Technical Reports Server (NTRS)
Bell, James H.; Burner, Alpheus W.
2004-01-01
As the benefit-to-cost ratio of advanced optical techniques for wind tunnel measurements such as Video Model Deformation (VMD), Pressure-Sensitive Paint (PSP), and others increases, these techniques are being used more and more often in large-scale production type facilities. Further benefits might be achieved if multiple optical techniques could be deployed in a wind tunnel test simultaneously. The present study discusses the problems and benefits of combining VMD and PSP systems. The desirable attributes of useful optical techniques for wind tunnels, including the ability to accommodate the myriad optical techniques available today, are discussed. The VMD and PSP techniques are briefly reviewed. Commonalities and differences between the two techniques are discussed. Recent wind tunnel experiences and problems when combining PSP and VMD are presented, as are suggestions for future developments in combined PSP and deformation measurements.
POLICY VARIATION, LABOR SUPPLY ELASTICITIES, AND A STRUCTURAL MODEL OF RETIREMENT
MANOLI, DAY; MULLEN, KATHLEEN J.; WAGNER, MATHIS
2015-01-01
This paper exploits a combination of policy variation from multiple pension reforms in Austria and administrative data from the Austrian Social Security Database. Using the policy changes for identification, we estimate social security wealth and accrual elasticities in individuals’ retirement decisions. Next, we use these elasticities to estimate a dynamic programming model of retirement decisions. Finally, we use the estimated model to examine the labor supply and welfare consequences of potential social security reforms. PMID:26472916
Multivariate meta-analysis using individual participant data
Riley, R. D.; Price, M. J.; Jackson, D.; Wardle, M.; Gueyffier, F.; Wang, J.; Staessen, J. A.; White, I. R.
2016-01-01
When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment–covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. PMID:26099484
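As a rough illustration of the bootstrap route to within-study correlations (shown here for two continuous outcomes in a single two-arm trial; all variable names are hypothetical, and the paper's own software code should be preferred where available):

import numpy as np

rng = np.random.default_rng(42)

def within_study_correlation(y1, y2, treat, n_boot=2000):
    # treat: 0/1 treatment indicator; y1, y2: the two outcomes per participant.
    t = np.flatnonzero(treat == 1)
    c = np.flatnonzero(treat == 0)
    eff = np.empty((n_boot, 2))
    for b in range(n_boot):
        tb = rng.choice(t, t.size, replace=True)   # resample within arms so
        cb = rng.choice(c, c.size, replace=True)   # both groups stay present
        eff[b, 0] = y1[tb].mean() - y1[cb].mean()  # effect on outcome 1
        eff[b, 1] = y2[tb].mean() - y2[cb].mean()  # effect on outcome 2
    # correlation of the two effect estimates across bootstrap replicates
    return np.corrcoef(eff[:, 0], eff[:, 1])[0, 1]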
2012-01-01
Background The Danish Multiple Sclerosis Society initiated a large-scale bridge building and integrative treatment project to take place from 2004–2010 at a specialized Multiple Sclerosis (MS) hospital. In this project, a team of five conventional health care practitioners and five alternative practitioners was set up to work together in developing and offering individualized treatments to 200 people with MS. The purpose of this paper is to present results from the six-year treatment collaboration process regarding the development of an integrative treatment model. Discussion The collaborative work towards an integrative treatment model for people with MS involved six steps: (1) working with an initial model; (2) unfolding the different treatment philosophies; (3) discussing the elements of the Intervention-Mechanism-Context-Outcome scheme (the IMCO scheme); (4) phrasing the common assumptions for an integrative MS program theory; (5) developing the integrative MS program theory; and (6) building the integrative MS treatment model. The model includes important elements of the different treatment philosophies represented in the team and thereby describes a common understanding of the complexity of the courses of treatment. Summary An integrative team of practitioners has developed an integrative model for combined treatments of people with multiple sclerosis. The model unites different treatment philosophies and focuses on process-oriented factors and the strengthening of the patients' resources and competences on a physical, an emotional and a cognitive level. PMID:22524586
Developing Scientific Reasoning Through Drawing Cross-Sections
NASA Astrophysics Data System (ADS)
Hannula, K. A.
2012-12-01
Cross-sections and 3D models of subsurface geology are typically based on incomplete information (whether surface geologic mapping, well logs, or geophysical data). Creating and evaluating those models requires spatial and quantitative thinking skills (including penetrative thinking, understanding of horizontality, mental rotation and animation, and scaling). However, evaluating the reasonableness of a cross-section or 3D structural model also requires consideration of multiple possible geometries and geologic histories. Teaching students to create good models requires application of the scientific methods of the geosciences (such as evaluation of multiple hypotheses and combining evidence from multiple techniques). Teaching these critical thinking skills, especially combined with teaching spatial thinking skills, is challenging. My Structural Geology and Advanced Structural Geology courses have taken two different approaches to developing both the ability to visualize and the ability to test multiple models. In the final project in Structural Geology (a 3rd-year course with a prerequisite sophomore mapping course), students create a viable cross-section across part of the Wyoming thrust belt by hand, based on a published 1:62,500 geologic map. The cross-section must meet a number of geometric criteria (such as the template constraint), but is not required to balance. Each student tries many potential geometries while trying to find a viable solution. In most cases, the students don't visualize the implications of the geometries that they try, but have to draw them and then erase their work if it does not meet the criteria for validity. The Advanced Structural Geology course used Midland Valley's Move suite to test the cross-sections that students had made in Structural Geology, mostly using the flexural slip unfolding algorithm and testing whether the resulting line lengths balanced. In both exercises, students seemed more confident in the quality of their cross-sections when the sections were easy to visualize. Students in Structural Geology were proud of their cross-sections once they were inked and colored. Students in Advanced Structural Geology were confident in their digitized cross-sections, even before they had tried to balance them or had tested whether they were kinematically plausible. In both cases, visually attractive models seemed easier to believe. Three-dimensional models seemed even more convincing: if students could visualize the model, they also thought it should work geometrically and kinematically, whether they had tested it or not. Students were more inclined to test their models when they had a clear set of criteria that would indicate success or failure. However, future development of new ideas about the kinematic and/or mechanical development of structures may force the students to also decide which criteria fit their problem best. Combining both kinds of critical thinking (evaluating techniques and evaluating their results) in the same assignment may be challenging.
Information Retrieval and Graph Analysis Approaches for Book Recommendation.
Benkoussas, Chahinez; Bellot, Patrice
2015-01-01
A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models: probabilistic models such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach to a related-document network comprising social links. We call this network, constructed from documents and the social information provided by each of them, the Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrate that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments. PMID:26504899
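A minimal sketch of the two ingredients, interpolated score combination and graph-based reranking; the weights lam and beta are made-up tuning values, and this illustrates the general recipe rather than the authors' exact system.

import numpy as np

def interpolate(s_a, s_b, lam=0.6):
    # min-max normalise each model's scores before mixing so they are comparable
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)
    return lam * norm(np.asarray(s_a, float)) + (1 - lam) * norm(np.asarray(s_b, float))

def pagerank(adj, d=0.85, iters=100):
    # adj[i, j] = 1 if document i links to document j (the DGD)
    n = adj.shape[0]
    P = np.full((n, n), 1.0 / n)          # dangling documents jump uniformly
    out = adj.sum(axis=1)
    nz = out > 0
    P[nz] = adj[nz] / out[nz, None]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (P.T @ r)
    return r

def rerank(s_a, s_b, adj, beta=0.2):
    # final score = interpolated retrieval score + graph-centrality bonus
    return interpolate(s_a, s_b) + beta * pagerank(adj)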
A Case Study on a Combination NDVI Forecasting Model Based on the Entropy Weight Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Shengzhi; Ming, Bo; Huang, Qiang
It is critically meaningful to accurately predict NDVI (Normalized Difference Vegetation Index), which helps guide regional ecological remediation and environmental management. In this study, a combination forecasting model (CFM) was proposed to improve the performance of NDVI predictions in the Yellow River Basin (YRB) based on three individual forecasting models, i.e., the Multiple Linear Regression (MLR), Artificial Neural Network (ANN), and Support Vector Machine (SVM) models. The entropy weight method was employed to determine the weight coefficient for each individual model depending on its predictive performance. Results showed that: (1) ANN exhibits the highest fitting capability among the four forecasting models in the calibration period, whilst its generalization ability becomes weak in the validation period; MLR has a poor performance in both calibration and validation periods; the predicted results of CFM in the calibration period have the highest stability; (2) CFM generally outperforms all individual models in the validation period, and can improve the reliability and stability of predicted results through combining the strengths while reducing the weaknesses of individual models; (3) the performances of all forecasting models are better in dense vegetation areas than in sparse vegetation areas.
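Entropy-weight formulations vary from paper to paper; one common variant, sketched here with a calibration-period accuracy matrix as input, assigns larger weights to models whose performance series carries more information. The accuracy transform and clipping below are illustrative assumptions.

import numpy as np

def entropy_weights(accuracy):
    # accuracy: (n_samples, n_models) array, e.g. 1 - |relative error|,
    # clipped to stay positive; columns correspond to MLR, ANN, SVM, etc.
    A = np.clip(np.asarray(accuracy, float), 1e-12, None)
    p = A / A.sum(axis=0)                                   # share per sample
    e = -(p * np.log(p)).sum(axis=0) / np.log(A.shape[0])   # normalised entropy
    d = 1.0 - e                                             # divergence degree
    return d / d.sum()                                      # weight coefficients

def combination_forecast(preds, w):
    # preds: (n_samples, n_models) individual predictions; returns CFM output
    return np.asarray(preds) @ w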
Functional helicoidal model of DNA molecule with elastic nonlinearity
NASA Astrophysics Data System (ADS)
Tseytlin, Y. M.
2013-06-01
We constructed a functional DNA molecule model on the basis of a flexible helicoidal sensor, specifically, a pretwisted hollow nano-strip. We study in this article the helicoidal nano-sensor model with a pretwisted strip axial extension corresponding to the overstretching transition of DNA from dsDNA to ssDNA. Our model and the DNA molecule have similar geometrical and nonlinear mechanical features unlike models based on an elastic rod, accordion bellows, or an imaginary combination of "multiple soft and hard linear springs", presented in some recent publications.
Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.
This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single pixel regression estimator--a regression model that estimates gas concentration on an individual pixel basis.
Modeling Passive Propagation of Malwares on the WWW
NASA Astrophysics Data System (ADS)
Chunbo, Liu; Chunfu, Jia
Web-based malware is hosted on fixed websites and is downloaded onto users' computers automatically while they browse. This passive propagation pattern differs from that of traditional viruses and worms. A propagation model based on the reverse web graph is proposed. In this model, the propagation of malware is analyzed by means of a random jump matrix that combines the orderness and randomness of user browsing behaviors. Explanatory experiments, with single and multiple propagation sources respectively, demonstrate the validity of the model. Using this model, one can evaluate the hazard posed by specific websites and take corresponding countermeasures.
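A sketch of the random-jump iteration the abstract describes, in the spirit of PageRank on the reverse web graph; the matrix construction and the jump parameter here are illustrative assumptions rather than the paper's exact formulation.

import numpy as np

def visit_probabilities(adj_rev, jump=0.15, iters=100, sources=None):
    # adj_rev: adjacency of the *reverse* web graph (edges point from a page
    # to the pages linking to it). `jump` mixes random teleportation
    # (randomness) with link-following (orderness of browsing).
    n = adj_rev.shape[0]
    u = np.full(n, 1.0 / n)
    if sources is not None:              # restrict jumps to known malware hosts
        u = np.zeros(n)
        u[sources] = 1.0 / len(sources)
    P = np.full((n, n), 1.0 / n)         # dangling pages jump uniformly
    out = adj_rev.sum(axis=1)
    nz = out > 0
    P[nz] = adj_rev[nz] / out[nz, None]
    p = u.copy()
    for _ in range(iters):
        p = (1 - jump) * (P.T @ p) + jump * u
    return p                             # proxy for per-site exposure/hazard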
Wyke, Sally; Hunt, Kate
2008-01-01
Background. Frequent consulting is associated with multiple and complex social and health conditions. It is not known how the impact of multiple conditions, the ability to self-manage and patient perception of the GP consultation combines to influence consulting frequency. Objective. To investigate reasons for frequent consultation among people with multiple morbidity but contrasting consulting rates. Methods. Qualitative study with in-depth interviews in the west of Scotland. Participants were 23 men and women aged about 50 years with four or more chronic illnesses; 11 reported consulting seven or more times in the last year [the frequent consulters (FCs)] and 12, three or fewer times [the less frequent consulters (LFCs)]. The main outcome measures were the participants’ accounts of their symptoms, self-management strategies and reasons for consulting a GP. Results. All participants used multiple self-management strategies. FCs described: more disruptive symptoms, which were resistant to self-management strategies; less access to fewer treatments and resources and more medical monitoring, for unstable conditions and drug regimens. The LFCs reported: less severe and more containable symptoms; accessing more efficacious self-management strategies and infrequent GP monitoring for stable conditions and routine drug regimens. All participants conveyed consulting as a ‘last resort’. However, the GP was seen as ‘ally’, for the FCs, and as ‘innocent bystander’, for the LFCs. Conclusions. This qualitative investigation into the combined significance of multiple morbidities and self-management on the GP consultation suggests that current models of self-management might have limited potential to reduce utilization rates among this vulnerable group. Severity of symptoms, stability of condition and complexity of drug regimens combine to influence the availability of effective resources and influence frequency of GP consultations. PMID:18448858
The activity of several newer antimicrobials against logarithmically multiplying M. leprae in mice.
Burgos, Jasmin; de la Cruz, Eduardo; Paredes, Rose; Andaya, Cora Revelyn; Gelber, Robert H
2011-09-01
Moxifloxacin, rifampicin, rifapentine, linezolid, and PA 824, alone and in combination, have been previously administered, as single doses and five-times-daily doses, to M. leprae-infected mice during lag phase multiplication and were each found to have some bactericidal activity. The fluoroquinolones ofloxacin, moxifloxacin and gatifloxacin (50 mg/kg, 150 mg/kg and 300 mg/kg) and the rifamycins rifampicin and rifapentine (5 mg/kg, 10 mg/kg, and 20 mg/kg) were evaluated alone and in combination for bactericidal activity against M. leprae using the mouse footpad model during logarithmic multiplication. Linezolid and PA 824 were similarly evaluated alone, and linezolid also in combination with rifampicin, minocycline and ofloxacin. The three fluoroquinolones and both rifamycins were found, alone and in combination, to be bactericidal at all dosage schedules. PA 824 had no activity against M. leprae, while linezolid at a dose of 25 mg/kg was bacteriostatic, and progressively more bactericidal at doses of 50 mg/kg and 100 mg/kg. No antagonism was detected between any of these drugs when used in combination. Moxifloxacin, gatifloxacin, rifapentine and linezolid were found to be bactericidal against rapidly multiplying M. leprae.
Koch, Tobias; Schultze, Martin; Jeon, Minjeong; Nussbeck, Fridtjof W; Praetorius, Anna-Katharina; Eid, Michael
2016-01-01
Multirater (multimethod, multisource) studies are increasingly applied in psychology. Eid and colleagues (2008) proposed a multilevel confirmatory factor model for multitrait-multimethod (MTMM) data combining structurally different and multiple independent interchangeable methods (raters). In many studies, however, different interchangeable raters (e.g., peers, subordinates) are asked to rate different targets (students, supervisors), leading to violations of the independence assumption and to cross-classified data structures. In the present work, we extend the ML-CFA-MTMM model by Eid and colleagues (2008) to cross-classified multirater designs. The new C4 model (Cross-Classified CTC[M-1] Combination of Methods) accounts for nonindependent interchangeable raters and enables researchers to explicitly model the interaction between targets and raters as a latent variable. Using a real data application, it is shown how credibility intervals of model parameters and different variance components can be obtained using Bayesian estimation techniques.
NASA Astrophysics Data System (ADS)
Crowther, Ashley R.; Singh, Rajendra; Zhang, Nong; Chapman, Chris
2007-10-01
Impulsive responses in geared systems with multiple clearances are studied when the mean torque excitation and system load change abruptly, with application to a vehicle driveline with an automatic transmission. First, torsional lumped-mass models of the planetary and differential gear sets are formulated using matrix elements. The model is then reduced to address tractable nonlinear problems while successfully retaining the main modes of interest. Second, numerical simulations for the nonlinear model are performed for transient conditions, and a typical driving situation that induces impulsive behaviour is simulated. Initial conditions and excitation and load profiles have to be carefully defined before the model can be numerically solved. It is shown that impacts within the planetary or differential gears may occur under combinations of engine, braking and vehicle load transients. Our analysis shows that the shaping of the engine transient by the torque converter before it reaches the clearance locations is more critical. Third, a free vibration experiment was developed for an analogous driveline with multiple clearances, and three experiments that excite different response regimes have been carried out. Good correlations validate the proposed methodology.
Post-Stall Aerodynamic Modeling and Gain-Scheduled Control Design
NASA Technical Reports Server (NTRS)
Wu, Fen; Gopalarathnam, Ashok; Kim, Sungwan
2005-01-01
A multidisciplinary research effort that combines aerodynamic modeling and gain-scheduled control design for aircraft flight at post-stall conditions is described. The aerodynamic modeling uses a decambering approach for rapid prediction of post-stall aerodynamic characteristics of multiple-wing configurations using known section data. The approach is successful in bringing to light multiple solutions at post-stall angles of attack right during the iteration process. The predictions agree fairly well with experimental results from wind tunnel tests. The control research was focused on actuator saturation and flight transition between low and high angles of attack regions for near- and post-stall aircraft using advanced LPV control techniques. The new control approaches maintain adequate control capability to handle high angle of attack aircraft control with stability and performance guarantees.
Meunier, Jean Christophe; Bisceglia, Rossana; Jenkins, Jennifer M
2012-07-01
In this study the associations between mothers' and fathers' differential parenting and children's oppositional and emotional problems were examined. A curvilinear relationship between differential parenting and children's outcomes was hypothesized, as well as the combined effect of mothers' and fathers' parenting. Data came from a community sample of 599 two-parent families with multiple children per family and were analyzed using a cross-classified multilevel model. Results showed that both family average parenting and differential parenting explained unique variance in children's outcomes. The curvilinear hypothesis was supported for oppositional behavior but not for emotional problems. The effects of mother and father positivity were found to be additive for both family average parenting and differential parenting, but for negativity there was evidence for multiplicative effects.
Radiation Transport of Heliospheric Lyman-alpha from Combined Cassini and Voyager Data Sets
NASA Technical Reports Server (NTRS)
Pryor, W.; Gangopadhyay, P.; Sandel, B.; Forrester, T.; Quemerais, E.; Moebius, E.; Esposito, L.; Stewart, I.; McClintock, W.; Jouchoux, A.;
2008-01-01
Heliospheric neutral hydrogen scatters solar Lyman-alpha radiation from the Sun, with '27-day' intensity modulations observed near Earth due to the Sun's rotation combined with Earth's orbital motion. These modulations are increasingly damped in amplitude at larger distances from the Sun due to multiple scattering in the heliosphere, providing a diagnostic of the interplanetary neutral hydrogen density independent of instrument calibration. This paper presents Cassini data from 2003-2004 obtained downwind near Saturn at approximately 10 AU that at times show undamped '27-day' waves in good agreement with the single-scattering models of Pryor et al., 1992. Simultaneous Voyager 1 data from 2003-2004 obtained upwind at a distance of 88.8-92.6 AU from the Sun show waves damped by a factor of ~0.21. The observed degree of damping is interpreted in terms of Monte Carlo multiple-scattering calculations (e.g., Keller et al., 1981) applied to two heliospheric hydrogen two-shock density distributions (discussed in Gangopadhyay et al., 2006) calculated in the frame of the Baranov-Malama model of the solar wind interaction with the two-component (neutral hydrogen and plasma) interstellar wind (Baranov and Malama 1993, Izmodenov et al., 2001, Baranov and Izmodenov, 2006). We conclude that multiple scattering is definitely occurring in the outer heliosphere. Both models compare favorably to the data, using heliospheric neutral H densities at the termination shock of 0.085 cm(exp -3) and 0.095 cm(exp -3). This work generally agrees with earlier discussions of Voyager data in Quemerais et al., 1996 showing the importance of multiple scattering, but is based on Voyager data obtained at larger distances from the Sun (with larger damping) simultaneously with Cassini data obtained closer to the Sun.
Zhen, Xiaofei; Abdalla Osman, Yassir Idris; Feng, Rong; Zhang, Xuemin
2018-01-01
In order to utilize solar energy to meet the heating demands of a rural residential building during the winter in the northwestern region of China, a hybrid heating system combining solar energy and coal was built. Multiple experiments to monitor its performance were conducted during the winters of 2014 and 2015. In this paper, we analyze the efficiency of the energy utilization of the system and describe a prototype model to determine the thermal efficiency of the coal stove in use. Multiple linear regression was adopted to capture the joint effect of multiple factors on the daily heat-collecting capacity of the solar water heater; the heat-loss coefficient of the storage tank was determined as well. The prototype model shows that the average thermal efficiency of the stove is 38%, which means that the energy input for the building is divided between coal (39.5%) and solar energy (60.5%). Additionally, the allocation of the solar radiation projected onto the collecting area of the solar water heater was obtained: 49% was lost optically and 23% through heat dissipation, with only 28% being utilized effectively. PMID:29651424
Khdair, Ayman; Chen, Di; Patil, Yogesh; Ma, Linan; Dou, Q Ping; Shekhar, Malathy P V; Panyam, Jayanth
2010-01-25
Tumor drug resistance significantly limits the success of chemotherapy in the clinic. Tumor cells utilize multiple mechanisms to prevent the accumulation of anticancer drugs at their intracellular site of action. In this study, we investigated the anticancer efficacy of doxorubicin in combination with photodynamic therapy using methylene blue in a drug-resistant mouse tumor model. Surfactant-polymer hybrid nanoparticles formulated using an anionic surfactant, Aerosol-OT (AOT), and a naturally occurring polysaccharide polymer, sodium alginate, were used for synchronized delivery of the two drugs. Balb/c mice bearing syngeneic JC tumors (mammary adenocarcinoma) were used as a drug-resistant tumor model. Nanoparticle-mediated combination therapy significantly inhibited tumor growth and improved animal survival. Nanoparticle-mediated combination treatment resulted in enhanced tumor accumulation of both doxorubicin and methylene blue, significant inhibition of tumor cell proliferation, and increased induction of apoptosis. These data suggest that nanoparticle-mediated combination chemotherapy and photodynamic therapy using doxorubicin and methylene blue has significant therapeutic potential against drug-resistant tumors. Copyright 2009 Elsevier B.V. All rights reserved.
Assessing NARCCAP climate model effects using spatial confidence regions.
French, Joshua P; McGinnis, Seth; Schwartzman, Armin
2017-01-01
We assess similarities and differences between model effects for the North American Regional Climate Change Assessment Program (NARCCAP) climate models using varying classes of linear regression models. Specifically, we consider how the average temperature effect differs for the various global and regional climate model combinations, including assessment of possible interaction between the effects of global and regional climate models. We use both pointwise and simultaneous inference procedures to identify regions where global and regional climate model effects differ. We also show conclusively that results from pointwise inference are misleading, and that accounting for multiple comparisons is important for making proper inference.
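The pointwise-versus-simultaneous distinction can be made concrete with a max-statistic permutation scheme (one standard construction, not necessarily the authors' exact procedure): the simultaneous threshold is taken from the null distribution of the maximum |z| over the whole map, which controls the familywise error rate across pixels.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def effect_maps(diff, alpha=0.05, n_perm=1000):
    # diff: (n_runs, n_pixels) paired differences between two model effects
    n = diff.shape[0]
    z = diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n))
    # null distribution of max |z| via random sign flips of paired differences
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        d = rng.choice([-1.0, 1.0], size=n)[:, None] * diff
        zb = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))
        max_null[b] = np.abs(zb).max()
    pointwise = np.abs(z) > norm.ppf(1 - alpha / 2)              # no multiplicity control
    simultaneous = np.abs(z) > np.quantile(max_null, 1 - alpha)  # FWER control
    return pointwise, simultaneous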
The advent of new higher throughput analytical instrumentation has put a strain on interpreting and explaining the results from complex studies. Contemporary human, environmental, and biomonitoring data sets are comprised of tens or hundreds of analytes, multiple repeat measures...
Early Prediction of Student Self-Regulation Strategies by Combining Multiple Models
ERIC Educational Resources Information Center
Sabourin, Jennifer L.; Mott, Bradford W.; Lester, James C.
2012-01-01
Self-regulated learning behaviors such as goal setting and monitoring have been found to be crucial to students' success in computer-based learning environments. Consequently, understanding students' self-regulated learning behavior has been the subject of increasing interest. Unfortunately, monitoring these behaviors in real-time has proven…
How Binary Skills Obscure the Transition from Non-Mastery to Mastery
ERIC Educational Resources Information Center
Karelitz, Tzur M.
2008-01-01
What is the nature of latent predictors that facilitate diagnostic classification? Rupp and Templin (this issue) suggest that these predictors should be multidimensional, categorical variables that can be combined in various ways. Diagnostic Classification Models (DCM) typically use multiple categorical predictors to classify respondents into…
Environmental science and management are fed by individual studies of pollution effects, often focused on single locations. Data are encountered data, typically from multiple sources and on different time and spatial scales. Statistical issues including publication bias and m...
Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.
2012-01-01
Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.
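A minimal single-site sketch of the joint model structure (hypothetical parameter values): one latent abundance feeds two binomial detection processes with method-specific detection probabilities, and the latent N is marginalised out of the likelihood.

import numpy as np
from scipy.stats import poisson, binom

def joint_nmixture_loglik(y_hair, y_rub, lam, p_hair, p_rub, n_max=200):
    # y_hair, y_rub: individuals detected at a site by each method;
    # lam: expected abundance (in practice exp(beta @ covariates));
    # n_max: truncation point for the sum over the latent abundance N.
    N = np.arange(n_max + 1)
    ll = (poisson.logpmf(N, lam)
          + binom.logpmf(y_hair, N, p_hair)
          + binom.logpmf(y_rub, N, p_rub))
    m = ll.max()                      # log-sum-exp marginalisation over N
    return m + np.log(np.exp(ll - m).sum())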
Force on Force Modeling with Formal Task Structures and Dynamic Geometry
2017-03-24
task framework, derived using the MMF methodology to structure a complex mission. It further demonstrated the integration of effects from a range of...application methodology was intended to support a combined developmental testing (DT) and operational testing (OT) strategy for selected systems under test... methodology to develop new or modify existing Models and Simulations (M&S) to: • Apply data from multiple, distributed sources (including test
Peck, Steven L
2014-10-01
It is becoming clear that handling the inherent complexity found in ecological systems is an essential task for finding ways to control insect pests of tropical livestock such as tsetse flies and Old and New World screwworms. In particular, challenging multivalent management programs, such as Area Wide Integrated Pest Management (AW-IPM), face daunting problems of complexity at multiple spatial scales, ranging from landscape-level processes to those of smaller scales such as the parasite loads of individual animals. Daunting temporal challenges also await resolution, such as matching management time frames to those found on ecological and even evolutionary temporal scales. How does one represent processes with models that involve multiple spatial and temporal scales? Agent-based models (ABM), combined with geographic information systems (GIS), may allow for understanding, predicting and managing pest control efforts for livestock pests. This paper argues that by incorporating digital ecologies in our management efforts, clearer and more informed decisions can be made. I also point out the power of these models in making better predictions in order to anticipate the range of outcomes possible or likely. Copyright © 2014 International Atomic Energy Agency 2014. Published by Elsevier B.V. All rights reserved.
Bioinspired legged-robot based on large deformation of flexible skeleton.
Mayyas, Mohammad
2014-11-11
In this article we present STARbot, a bioinspired legged robot capable of multiple locomotion modalities through large deformation of its skeleton. We construct STARbot using origami-style folding of flexible laminates. The long-term goal is to provide a robotic platform with maximum mobility on multiple surfaces. This paper studies the quasistatic model of STARbot's leg under different conditions. We describe the large elastic deformation of a leg under external force, payload, and friction using a set of non-dimensional, nonlinear approximate equations. We developed a test mechanism that models the motion of a leg in STARbot. We fitted several foot shapes and then tested them on soft to rough grounds. Simulation and experimental findings were in good agreement. We used the model to develop several scales of tri- and quad-legged STARbot. We demonstrated the capability of these robots to locomote by combining leg deformation with foot motion. The combination provided a design platform for an active-suspension STARbot with controlled foot locomotion, including the ability to change size, run over obstacles, walk and slide. Furthermore, we discuss a cost-effective method for manufacturing and producing STARbot.
Xuan, Ziming; Chaloupka, Frank J; Blanchette, Jason G; Nguyen, Thien H; Heeren, Timothy C; Nelson, Toben F; Naimi, Timothy S
2015-03-01
U.S. studies contribute heavily to the literature about the tax elasticity of demand for alcohol, and most U.S. studies have relied upon specific excise (volume-based) taxes for beer as a proxy for alcohol taxes. The purpose of this paper was to compare this conventional alcohol tax measure with more comprehensive tax measures (incorporating multiple tax and beverage types) in analyses of the relationship between alcohol taxes and adult binge drinking prevalence in U.S. states. Data on U.S. state excise, ad valorem and sales taxes from 2001 to 2010 were obtained from the Alcohol Policy Information System and other sources. For 510 state-year strata, we developed a series of weighted tax-per-drink measures that incorporated various combinations of tax and beverage types, and related these measures to state-level adult binge drinking prevalence data from the Behavioral Risk Factor Surveillance System surveys. In analyses pooled across all years, models using the combined tax measure explained approximately 20% of state binge drinking prevalence, and documented more negative tax elasticity (-0.09, P = 0.02 versus -0.005, P = 0.63) and price elasticity (-1.40, P < 0.01 versus -0.76, P = 0.15) compared with models using only the volume-based tax. In analyses stratified by year, the R-squares for models using the beer combined tax measure were stable across the study period (P = 0.11), while the R-squares for models relying only on the volume-based tax declined (P < 0.01). Compared with volume-based tax measures, combined tax measures (i.e. those incorporating volume-based and value-based taxes) yield substantial improvement in model fit and find more negative tax elasticity and price elasticity predicting adult binge drinking prevalence in U.S. states. © 2014 Society for the Study of Addiction. PMID:25428795
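A sketch of the combined tax-per-drink construction and the elasticity-style regression; every variable name below is illustrative, and the paper's exact weighting scheme may differ.

import numpy as np
import statsmodels.api as sm

def combined_tax_per_drink(excise, ad_valorem_rate, price, shares):
    # excise: volume-based tax per drink; ad_valorem_rate * price: value-based
    # tax per drink; shares: consumption shares of the beverage types (rows
    # are state-year observations, columns beverage types, shares sum to 1).
    per_drink = excise + ad_valorem_rate * price
    return (per_drink * shares).sum(axis=1)

def tax_elasticity(binge_prev, combined_tax):
    # slope of log(prevalence) on log(tax) across state-year observations
    X = sm.add_constant(np.log(combined_tax))
    return sm.OLS(np.log(binge_prev), X).fit()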
NASA Astrophysics Data System (ADS)
Chawla, Amarpreet S.; Samei, Ehsan; Abbey, Craig
2007-03-01
In this study, we used a mathematical observer model to combine information obtained from multiple angular projections of the same breast to determine the overall detection performance of a multi-projection breast imaging system in detectability of a simulated mass. 82 subjects participated in the study and 25 angular projections of each breast were acquired. Projections from a simulated 3 mm 3-D lesion were added to the projection images. The lesion was assumed to be embedded in the compressed breast at a distance of 3 cm from the detector. Hotelling observer with Laguerre-Gauss channels (LG CHO) was applied to each image. Detectability was analyzed in terms of ROC curves and the area under ROC curves (AUC). The critical question studied is how to best integrate the individual decision variables across multiple (correlated) views. Towards that end, three different methods were investigated. Specifically, 1) ROCs from different projections were simply averaged; 2) the test statistics from different projections were averaged; and 3) a Bayesian decision fusion rule was used. Finally, AUC of the combined ROC was used as a parameter to optimize the acquisition parameters to maximize the performance of the system. It was found that the Bayesian decision fusion technique performs better than the other two techniques and likely offers the best approximation of the diagnostic process. Furthermore, if the total dose level is held constant at 1/25th of dual-view mammographic screening dose, the highest detectability performance is observed when considering only two projections spread along an angular span of 11.4°.
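Two of the three integration rules can be sketched directly on the per-view decision variables (rule 1, averaging the per-view ROC curves themselves, operates on the curves rather than on per-case statistics). The Gaussian log-likelihood-ratio below is one concrete way to realise a Bayesian decision-fusion rule for correlated views; the parameter names are illustrative.

import numpy as np

def fuse_mean(t):
    # rule 2: average the per-view test statistics; t is (n_views, n_cases)
    return t.mean(axis=0)

def fuse_llr(t, mu1, mu0, cov):
    # Gaussian LLR fusion: with equal inter-view covariance under both
    # hypotheses, the log-likelihood ratio is linear in the view statistics.
    # mu1/mu0: per-view means with lesion present/absent; cov: view covariance.
    w = np.linalg.solve(cov, mu1 - mu0)
    return w @ t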
Sinclair, Karen; Kinable, Els; Grosch, Kai; Wang, Jixian
2016-05-01
In current industry practice, it is difficult to assess QT effects at potential therapeutic doses based on Phase I dose-escalation trials in oncology due to data scarcity, particularly in combination trials. In this paper, we propose to use dose-concentration and concentration-QT models jointly to model the exposures and effects of multiple drugs in combination. The fitted models can then be used to make early predictions of QT prolongation to aid in choosing recommended dose combinations for further investigation. The models consider potential correlation between concentrations of test drugs and potential drug-drug interactions at the PK and QT levels. In addition, this approach allows for the assessment of the probability of QT prolongation exceeding given thresholds of clinical significance. The performance of this approach was examined via simulation under practical scenarios for dose-escalation trials of a combination of two drugs. The simulation results show that invaluable information about QT effects at therapeutic dose combinations can be gained by the proposed approaches. Early detection of dose combinations with substantial QT prolongation is achieved effectively through the CIs of the predicted peak QT prolongation at each dose combination. Furthermore, the probability of QT prolongation exceeding a certain threshold is also computed to support early detection of safety signals while accounting for uncertainty associated with data from Phase I studies. While the prediction of QT effects is sensitive to the dose-escalation process, this sensitivity and the limited sample size should be considered when supporting the decision-making process for further developing certain dose combinations. Copyright © 2016 John Wiley & Sons, Ltd.
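A toy Monte Carlo of the prediction step, with every coefficient invented purely to show the shape of the computation: simulate correlated exposures from dose-concentration models, push them through a concentration-QT model with an interaction term, and read off the probability that QT prolongation exceeds a clinical threshold.

import numpy as np

rng = np.random.default_rng(7)

def prob_qt_exceeds(dose_a, dose_b, threshold=10.0, n_sim=100_000):
    # correlated log-normal PK variability around dose-proportional means;
    # rho captures shared pharmacokinetic variability between the two drugs
    sd_a, sd_b, rho = 0.20, 0.25, 0.3
    cov = [[sd_a**2, rho * sd_a * sd_b], [rho * sd_a * sd_b, sd_b**2]]
    eps = rng.multivariate_normal([0.0, 0.0], cov, size=n_sim)
    conc_a = 0.5 * dose_a * np.exp(eps[:, 0])
    conc_b = 0.8 * dose_b * np.exp(eps[:, 1])
    # linear concentration-QT model with a drug-drug interaction term + noise
    dqt = 2.0 * conc_a + 1.5 * conc_b + 0.1 * conc_a * conc_b
    dqt += rng.normal(0.0, 3.0, n_sim)
    return float((dqt > threshold).mean())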
Decision-analytic modeling studies: An overview for clinicians using multiple myeloma as an example.
Rochau, U; Jahn, B; Qerimi, V; Burger, E A; Kurzthaler, C; Kluibenschaedl, M; Willenbacher, E; Gastl, G; Willenbacher, W; Siebert, U
2015-05-01
The purpose of this study was to provide a clinician-friendly overview of decision-analytic models evaluating different treatment strategies for multiple myeloma (MM). We performed a systematic literature search to identify studies evaluating MM treatment strategies using mathematical decision-analytic models. We included studies that were published as full-text articles in English, and assessed relevant clinical endpoints, and summarized methodological characteristics (e.g., modeling approaches, simulation techniques, health outcomes, perspectives). Eleven decision-analytic modeling studies met our inclusion criteria. Five different modeling approaches were adopted: decision-tree modeling, Markov state-transition modeling, discrete event simulation, partitioned-survival analysis and area-under-the-curve modeling. Health outcomes included survival, number-needed-to-treat, life expectancy, and quality-adjusted life years. Evaluated treatment strategies included novel agent-based combination therapies, stem cell transplantation and supportive measures. Overall, our review provides a comprehensive summary of modeling studies assessing treatment of MM and highlights decision-analytic modeling as an important tool for health policy decision making. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises
Marquis-Favre, Catherine; Morel, Julien
2015-01-01
Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment where participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors and potential interactions between the combined noise sources. The second one was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with high differences in sound pressure level. Thus, these results reinforced the need to focus on perceptual models and to improve the prediction of partial annoyances. PMID:26197326
Joint Seasonal ARMA Approach for Modeling of Load Forecast Errors in Planning Studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hafen, Ryan P.; Samaan, Nader A.; Makarov, Yuri V.
2014-04-14
To make informed and robust decisions in the probabilistic power system operation and planning process, it is critical to conduct multiple simulations of the generated combinations of wind and load parameters and their forecast errors to handle the variability and uncertainty of these time series. In order for the simulation results to be trustworthy, the simulated series must preserve the salient statistical characteristics of the real series. In this paper, we analyze day-ahead load forecast error data from multiple balancing authority locations and characterize statistical properties such as mean, standard deviation, autocorrelation, correlation between series, time-of-day bias, and time-of-day autocorrelation. We then construct and validate a seasonal autoregressive moving average (ARMA) model to model these characteristics, and use the model to jointly simulate day-ahead load forecast error series for all BAs.
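A sketch of the single-series building block using statsmodels; the model orders are a guess, and the paper's joint model additionally ties the series from different balancing authorities together (e.g. via correlated shocks), which this sketch omits.

from statsmodels.tsa.statespace.sarimax import SARIMAX

def fit_and_simulate(errors, n_steps):
    # errors: hourly day-ahead load forecast error series for one BA; the
    # 24-hour seasonal terms pick up time-of-day bias and autocorrelation
    model = SARIMAX(errors, order=(1, 0, 1), seasonal_order=(1, 0, 1, 24))
    res = model.fit(disp=False)
    # continue the fitted process past the end of the observed series
    return res.simulate(n_steps, anchor="end")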
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantifying dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.
Identification and Quantification of Cumulative Factors that ...
Evaluating the combined adverse effects of multiple stressors upon human health is an imperative component of cumulative risk assessment (CRA)1. In addition to chemical stressors, other non-chemical factors are also considered. For example, smoking will elevate the risks of having lung cancer associated with radon exposure2; toluene and noise together will induce higher levels of hearing loss3; children exposed to violence will have higher risks of developing asthma in the presence of air pollution4. Environmental Justice (EJ) indicators, used as a tool to assess and quantify some of these non-chemical factors, include health, economic, and social indicators such as vulnerability and susceptibility5. Vulnerability factors encompass race, ethnicity, behavior, geographic location, etc., while susceptibility factors include life stage, genetic predisposition, pre-existing health condition and others6, although these two categories are not always mutually exclusive. Numerous findings regarding combined effects of EJ indicators and chemical stressors have been identified7-11. However, fewer studies have analyzed the interrelation between multiple stressors that exert combined harmful effects upon individual or population health in the context of exposure assessment within the risk assessment framework12. In this study, we connected EJ indicators to variables in the exposure assessment model, especially the Average Daily Dose (ADD) model13, in order to better underst
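The ADD model referred to is, in its standard exposure-assessment form, ADD = (C x IR x EF x ED) / (BW x AT); a one-line implementation (the caller must keep units consistent):

def average_daily_dose(c, ir, ef, ed, bw, at):
    # c: concentration in the medium; ir: intake rate; ef: exposure frequency;
    # ed: exposure duration; bw: body weight; at: averaging time
    return (c * ir * ef * ed) / (bw * at)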
Pleiotropy Analysis of Quantitative Traits at Gene Level by Multivariate Functional Linear Models
Wang, Yifan; Liu, Aiyi; Mills, James L.; Boehnke, Michael; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Xiong, Momiao; Wu, Colin O.; Fan, Ruzong
2015-01-01
In genetics, pleiotropy describes the genetic effect of a single gene on multiple phenotypic traits. A common approach is to analyze the phenotypic traits separately using univariate analyses and combine the test results through multiple comparisons. This approach may lead to low power. Multivariate functional linear models are developed to connect genetic variant data to multiple quantitative traits adjusting for covariates for a unified analysis. Three types of approximate F-distribution tests based on Pillai–Bartlett trace, Hotelling–Lawley trace, and Wilks’s Lambda are introduced to test for association between multiple quantitative traits and multiple genetic variants in one genetic region. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and optimal sequence kernel association test (SKAT-O). Extensive simulations were performed to evaluate the false positive rates and power performance of the proposed models and tests. We show that the approximate F-distribution tests control the type I error rates very well. Overall, simultaneous analysis of multiple traits can increase power performance compared to an individual test of each trait. The proposed methods were applied to analyze (1) four lipid traits in eight European cohorts, and (2) three biochemical traits in the Trinity Students Study. The approximate F-distribution tests provide much more significant results than those of F-tests of univariate analysis and SKAT-O for the three biochemical traits. The approximate F-distribution tests of the proposed functional linear models are more sensitive than those of the traditional multivariate linear models that in turn are more sensitive than SKAT-O in the univariate case. The analysis of the four lipid traits and the three biochemical traits detects more association than SKAT-O in the univariate case. PMID:25809955
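The classical multivariate linear model that the functional linear models generalise can be fitted directly with statsmodels; the toy example below reports Pillai's trace, Hotelling-Lawley trace and Wilks' lambda for the joint association of two simulated variants with three traits. The paper's functional models additionally smooth variant effects over genomic position, which this sketch does not attempt.

import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({"snp1": rng.integers(0, 3, n).astype(float),
                   "snp2": rng.integers(0, 3, n).astype(float),
                   "age": rng.normal(50, 10, n)})
g = 0.15 * df["snp1"] + 0.10 * df["snp2"]        # shared genetic signal
for k in range(3):                               # pleiotropy: one signal,
    df[f"trait{k}"] = g + 0.01 * df["age"] + rng.normal(0, 1, n)  # many traits

mv = MANOVA.from_formula("trait0 + trait1 + trait2 ~ snp1 + snp2 + age", data=df)
print(mv.mv_test())                              # joint multivariate tests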
Composite Load Spectra for Select Space Propulsion Structural Components
NASA Technical Reports Server (NTRS)
Ho, Hing W.; Newell, James F.
1994-01-01
Generic load models are described with multiple levels of progressive sophistication to simulate the composite (combined) load spectra (CLS) that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades and liquid oxygen (LOX) posts. These generic (coupled) models combine deterministic models for the composite load sources (dynamic, acoustic, high-pressure, high-rotational-speed, etc.) using statistically varying coefficients. These coefficients are then determined using advanced probabilistic simulation methods, with and without strategically selected experimental data. The entire simulation process is implemented in a CLS computer code. Applications of the computer code to various components in conjunction with the PSAM (Probabilistic Structural Analysis Method) to perform probabilistic load evaluations and life predictions are also described to illustrate the effectiveness of the coupled-model approach.
An in vivo model of functional and vascularized human brain organoids.
Mansour, Abed AlFatah; Gonçalves, J Tiago; Bloyd, Cooper W; Li, Hao; Fernandes, Sarah; Quang, Daphne; Johnston, Stephen; Parylak, Sarah L; Jin, Xin; Gage, Fred H
2018-06-01
Differentiation of human pluripotent stem cells to small brain-like structures known as brain organoids offers an unprecedented opportunity to model human brain development and disease. To provide a vascularized and functional in vivo model of brain organoids, we established a method for transplanting human brain organoids into the adult mouse brain. Organoid grafts showed progressive neuronal differentiation and maturation, gliogenesis, integration of microglia, and growth of axons to multiple regions of the host brain. In vivo two-photon imaging demonstrated functional neuronal networks and blood vessels in the grafts. Finally, in vivo extracellular recording combined with optogenetics revealed intragraft neuronal activity and suggested graft-to-host functional synaptic connectivity. This combination of human neural organoids and an in vivo physiological environment in the animal brain may facilitate disease modeling under physiological conditions.
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) program in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study proposes a two-stage method that deals with both mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. Among the returned questionnaires, 2.52% of the dependent-variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR within the multiple imputation, and a multi-factor analysis model was established. Based on 1 000 resamples, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, the two-stage analysis was conducted. Finally, it was found that the influencing factors on annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for certain reasons, self-medication and the acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively address non-response bias and selection bias in the dependent variable of survey data.
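A compressed sketch of the two stages (a single PMM imputation stands in for full multiple imputation, and all names are hypothetical): fill the MAR gaps by predictive mean matching, then correct the NMAR selection with a two-step Heckman-type model via the inverse Mills ratio.

import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(1)

def pmm_impute(y, X, k=5):
    # predictive mean matching: each missing case borrows the observed value
    # of one of its k nearest donors in predicted-mean space
    obs = ~np.isnan(y)
    beta = np.linalg.lstsq(X[obs], y[obs], rcond=None)[0]
    pred = X @ beta
    y_imp = y.copy()
    for i in np.flatnonzero(~obs):
        donors = np.argsort(np.abs(pred[obs] - pred[i]))[:k]
        y_imp[i] = y[obs][donors[rng.integers(0, k)]]
    return y_imp

def heckman_two_step(y, X, responded, Z):
    # step 1: probit of response on instruments Z; step 2: OLS of y on X plus
    # the inverse Mills ratio to absorb the selection (NMAR) effect
    probit = sm.Probit(responded.astype(float), sm.add_constant(Z)).fit(disp=False)
    xb = sm.add_constant(Z) @ probit.params
    imr = norm.pdf(xb) / norm.cdf(xb)
    X_aug = np.column_stack([sm.add_constant(X), imr])[responded]
    return sm.OLS(y[responded], X_aug).fit()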
Deformable complex network for refining low-resolution X-ray structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu
2015-10-27
In macromolecular X-ray crystallography, building accurate atomic models from lower-resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new and effective refinement tool for low-resolution structure determination.
Damage modeling and statistical analysis of optics damage performance in MJ-class laser systems.
Liao, Zhi M; Raymond, B; Gaylord, J; Fallejo, R; Bude, J; Wegner, P
2014-11-17
Modeling the lifetime of a fused silica optic is described for a multiple-beam, MJ-class laser system. This entails combining optic processing data with laser shot data to account for the complete history of optic processing and shot exposure. Integrating with online inspection data allows the construction of a performance metric describing how an optic performs with respect to the model. This methodology helps validate the damage model, enables strategic planning, and identifies potential hidden parameters affecting an optic's performance.
NASA Technical Reports Server (NTRS)
DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.
2013-01-01
A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
Robust Combining of Disparate Classifiers Through Order Statistics
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2001-01-01
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics for robust handling of situations in which there are large discrepancies in the performance of individual classifiers. Based on a mathematical model of how decision boundaries are affected by order-statistic combiners, we derive expressions for the expected reduction in error when simple output combination methods based on the median, the maximum, and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance they often provide substantial gains over both linear and simple order-statistic combiners. Experimental results on both real-world data and standard public-domain data sets corroborate these findings.
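As a minimal illustration of the combiners discussed (not the authors' derivation), the sketch below applies median, maximum, and trimmed-mean order-statistic rules to a stack of classifier posterior outputs; the three synthetic "classifiers" are simply noisy copies of one posterior.

```python
import numpy as np

def os_combine(outputs, rule="median", trim=1):
    """Combine classifier outputs of shape (n_classifiers, n_samples,
    n_classes) with an order-statistic rule, then pick the arg-max class."""
    ordered = np.sort(outputs, axis=0)            # order statistics per entry
    if rule == "median":
        combined = np.median(outputs, axis=0)
    elif rule == "max":
        combined = ordered[-1]
    elif rule == "trim":                          # trimmed mean: drop extremes
        combined = ordered[trim:len(outputs) - trim].mean(axis=0)
    return combined.argmax(axis=1)

rng = np.random.default_rng(0)
# Three classifiers, 5 samples, 4 classes: noisy copies of one posterior.
truth = rng.dirichlet(np.ones(4), size=5)
outputs = np.stack([np.clip(truth + rng.normal(0, 0.1, truth.shape), 0, 1)
                    for _ in range(3)])
print(os_combine(outputs, "median"))
```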
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Scott, S.
2013-12-01
While there has been convergence toward a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, a variety of community conventions exist around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers who aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed to support the storage, discovery, and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform, has adopted a polyglot database model in which a combination of relational and document-based databases is used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements of multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions), and Dublin Core. Metadata stored within this schema is complemented by additional service, format, and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, generating valid metadata within each target standard is necessary but not sufficient for integration into multiple data infrastructures, as has been demonstrated through EDAC's testing and deployment of metadata into multiple external systems: Data.Gov, the GEOSS Registry, the DataONE network, the DSpace-based institutional repository at UNM, and the semantic mediation systems developed as part of the NASA ACCESS ELSeWEB project. Each of these systems requires valid metadata as a first step, but to make the most effective use of the delivered metadata, each also has a set of conventions specific to the system. This presentation provides an overview of the underlying metadata management model and the processes and web services developed to automatically generate metadata in a variety of standard formats, and highlights some of the specific modifications made to the output metadata content to support the different conventions used by the multiple metadata integration endpoints.
NASA Astrophysics Data System (ADS)
Wolff, J.; Jankov, I.; Beck, J.; Carson, L.; Frimel, J.; Harrold, M.; Jiang, H.
2016-12-01
It is well known that global and regional numerical weather prediction ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system for addressing the deficiencies in ensemble modeling is the use of stochastic physics to represent model-related uncertainty. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Perturbation of Physics Tendencies (SPPT), or some combination of all three. The focus of this study is to assess the model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) when using stochastic approaches. For this purpose, the test utilized a single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model, with ensemble members produced by employing stochastic methods. Parameter perturbations were employed in the Rapid Update Cycle (RUC) land surface model and Mellor-Yamada-Nakanishi-Niino (MYNN) planetary boundary layer scheme. Results will be presented in terms of bias, error, spread, skill, accuracy, reliability, and sharpness using the Model Evaluation Tools (MET) verification package. Due to the high level of complexity of running a frequently updating (hourly), high spatial resolution (3 km), large domain (CONUS) ensemble system, extensive high performance computing (HPC) resources were needed to meet this objective. Supercomputing resources were provided through the National Center for Atmospheric Research (NCAR) Strategic Capability (NSC) project support, allowing for a more extensive set of tests over multiple seasons, consequently leading to more robust results. Through the use of these stochastic innovations and powerful supercomputing at NCAR, further insights and advancements in ensemble forecasting at convection-permitting scales will be possible.
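A hedged sketch of the SPP idea: each ensemble member multiplies a physics parameter by a lognormal, temporally correlated (AR(1)) random factor. The perturbed parameter, the perturbation amplitude, and the decorrelation time below are illustrative assumptions, not the HRRR configuration.

```python
import numpy as np

def spp_factors(n_members, n_steps, sigma=0.3, tau=6.0, rng=None):
    """AR(1) log-perturbation factors, one trajectory per ensemble member.
    tau is the decorrelation time in model steps."""
    rng = rng or np.random.default_rng(0)
    phi = np.exp(-1.0 / tau)                   # AR(1) autocorrelation
    eps = sigma * np.sqrt(1 - phi**2)          # innovation std dev
    z = np.zeros((n_members, n_steps))
    z[:, 0] = rng.normal(0, sigma, n_members)
    for t in range(1, n_steps):
        z[:, t] = phi * z[:, t - 1] + rng.normal(0, eps, n_members)
    return np.exp(z)                           # multiplicative lognormal factors

# Perturb a nominal boundary-layer mixing length (illustrative value).
base_mixing_length = 100.0                     # metres
factors = spp_factors(n_members=8, n_steps=24)
perturbed = base_mixing_length * factors       # shape (8, 24)
```

Each member evolves its own perturbation trajectory, which is what spreads the ensemble without requiring multiple dynamical cores or physics suites.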
Multi-object segmentation framework using deformable models for medical imaging analysis.
Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel
2016-08-01
Segmenting structures of interest in medical images is an important step in tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed over the past three decades for the extraction of anatomical or functional structures in medical imaging. Deformable models, which include active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to improving such algorithms, challenging research directions remain, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract only one region at a time, DMA allows several deformable models to be integrated to handle multiple segmentation scenarios. Moreover, it can accommodate any existing explicit deformable model formulation and even incorporate new active contour methods, allowing a suitable combination to be selected for different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to resolve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be extracted simultaneously. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications, especially in the presence of adjacent structures of interest or intra-structure inhomogeneities, giving excellent quantitative results.
Erlandsson, Lena; Nääv, Åsa; Hennessy, Annemarie; Vaiman, Daniel; Gram, Magnus; Åkerström, Bo; Hansson, Stefan R
2016-03-01
Preeclampsia is a pregnancy-related disease afflicting 3-7% of pregnancies worldwide and leads to maternal and infant morbidity and mortality. The disease is of placental origin and is commonly described as a disease of two stages. A variety of preeclampsia animal models have been proposed, but all have limitations in fully recapitulating the human disease. Depending on the research question at hand, different or multiple models might be suitable. Multiple animal models, in combination with in vitro or ex vivo studies on human placenta, together offer a synergistic platform to further our understanding of the etiology of preeclampsia and potential therapeutic interventions. The described animal models of preeclampsia divide into four categories: (i) spontaneous, (ii) surgically induced, (iii) pharmacologically/substance induced, and (iv) transgenic. This review aims to provide an inventory of novel models addressing the etiology of the disease and/or therapeutic intervention opportunities. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
A Model for Generating Multi-hazard Scenarios
NASA Astrophysics Data System (ADS)
Lo Jacomo, A.; Han, D.; Champneys, A.
2017-12-01
Communities in mountain areas are often subject to risk from multiple hazards, such as earthquakes, landslides, and floods. Each hazard has its own rate of onset, duration, and return period, and multiple hazards tend to complicate the combined risk through their interactions. Prioritising interventions to minimise risk in this context is challenging. We developed a probabilistic multi-hazard model to help inform decision making in multi-hazard areas. The model is applied to a case study region in the Sichuan province in China, using information from satellite imagery and in-situ data. The model is not intended as a predictive model, but rather as a tool that takes stakeholder input and can be used to explore plausible hazard scenarios over time. By using a Monte Carlo framework and varying uncertain parameters for each of the hazards, the model can be used to explore the effect of different mitigation interventions aimed at reducing disaster risk within an uncertain hazard context.
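A minimal Monte Carlo sketch in the spirit of such a model follows; the annual occurrence rates and the rule that an earthquake raises the landslide probability in the same year are invented for illustration.

```python
import numpy as np

def simulate_scenarios(years=50, n_runs=10_000, rng=None):
    """Estimate the probability that at least one compound
    (earthquake + landslide) event occurs within the horizon.
    Occurrences are annual Bernoulli draws; the interaction is a simple
    conditional-probability boost after an earthquake."""
    rng = rng or np.random.default_rng(0)
    p_quake, p_slide, p_slide_after_quake = 0.02, 0.05, 0.30
    compound = 0
    for _ in range(n_runs):
        quakes = rng.random(years) < p_quake
        slide_prob = np.where(quakes, p_slide_after_quake, p_slide)
        slides = rng.random(years) < slide_prob
        compound += np.any(quakes & slides)
    return compound / n_runs

print(f"P(compound event in 50 yr) = {simulate_scenarios():.3f}")
```

Varying the rates and interaction strength across runs is how a stakeholder could explore mitigation options under an uncertain hazard context.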
Design and implementation of a hybrid MPI-CUDA model for the Smith-Waterman algorithm.
Khaled, Heba; Faheem, Hossam El Deen Mostafa; El Gohary, Rania
2015-01-01
This paper provides a novel hybrid model for solving the multiple pairwise sequence alignment problem by combining the Message Passing Interface (MPI) with CUDA, the parallel computing platform and programming model invented by NVIDIA. The proposed model targets homogeneous cluster nodes equipped with similar Graphical Processing Unit (GPU) cards. The model consists of the Master Node Dispatcher (MND) and the Worker GPU Nodes (WGN). The MND distributes the workload among the cluster working nodes and then aggregates the results. The WGN perform the multiple pairwise sequence alignments using the Smith-Waterman algorithm. We also propose a modified implementation of the Smith-Waterman algorithm based on computing the alignment matrices row-wise. The experimental results demonstrate a considerable reduction in running time as the number of working GPU nodes increases. The proposed model achieved a performance of about 12 giga cell updates per second when tested against the SWISS-PROT protein knowledgebase running on four nodes.
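The row-wise formulation is easy to sketch: only the previous row of the scoring matrix is kept in memory, which is the property that makes distribution across GPU workers tractable. This serial Python version with a linear gap penalty is a simplified stand-in for the paper's CUDA kernels.

```python
def smith_waterman_rowwise(a, b, match=2, mismatch=-1, gap=-2):
    """Local-alignment score computed row by row, keeping one row in memory."""
    prev = [0] * (len(b) + 1)       # previous scoring-matrix row
    best = 0
    for ca in a:
        curr = [0]                  # first column is always 0
        for j, cb in enumerate(b, start=1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            score = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            curr.append(score)
            best = max(best, score)
        prev = curr
    return best

print(smith_waterman_rowwise("GATTACA", "GCATGCU"))
```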
Thin Cloud Detection Method by Linear Combination Model of Cloud Image
NASA Astrophysics Data System (ADS)
Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.
2018-04-01
Existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify pixels as cloud or non-cloud; when clouds are thin and small, however, these methods are inaccurate. In this paper, a linear combination model of cloud images is proposed. Using this model, the underlying surface information can be removed from the remote sensing image, so the cloud detection result becomes more accurate. The automatic cloud detection program first uses the linear combination model to separate the cloud information from the surface information in images containing transparent cloud, and then uses different image features to recognize the cloud regions. For computational efficiency, an AdaBoost classifier is introduced to combine the different features into a single cloud classifier; because AdaBoost selects the most effective features from many candidate features, the calculation time is greatly reduced. Finally, we compared the proposed method with a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier; the experimental data show that the proposed cloud detection program achieves high accuracy and fast calculation speed.
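A hedged sketch of the feature-combination step using scikit-learn's AdaBoostClassifier; the three-column feature matrix standing in for the paper's image features is synthetic.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic per-window features (e.g., brightness, texture, gradient stats).
X = rng.normal(size=(n, 3))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Boosting reweights the weak learners so that only the most discriminative features contribute strongly, which is the property the paper relies on to cut calculation time.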
NASA Astrophysics Data System (ADS)
Fabián Calderón Marín, Carlos; González González, Joaquín Jorge; Laguardia, Rodolfo Alfonso
2017-09-01
The combination of external-beam radiotherapy with systemic radiotherapy (CIERT) could be a reliable alternative for patients with multiple lesions or those for whom treatment planning may be difficult because of organ-at-risk (OAR) constraints. Radiobiological models should be capable of predicting the biological response to irradiation while accounting for the differences in the temporal pattern of dose delivery between the two modalities. Two CIERT scenarios were studied: sequential combination, in which one modality is executed after the other, and concurrent combination, in which both modalities run simultaneously. Expressions are provided for calculating the dose-response quantities Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). General results on radiobiological modeling using the linear-quadratic (LQ) model are also discussed, and inter-subject variation of radiosensitivity and the volume irradiation effect in CIERT are studied. OARs should be kept under control during planning of concurrent CIERT treatment as the administered activity is increased. The formulation presented here may be used for biological evaluation of prescriptions and biological treatment planning of CIERT schemes in clinical situations.
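To illustrate the kind of quantity such expressions produce, the sketch below evaluates a Poisson TCP from linear-quadratic cell survival for a sequential CIERT course; all radiobiological parameter values are illustrative assumptions, and repair kinetics during the low-dose-rate systemic component are ignored.

```python
import numpy as np

def tcp_sequential(n_fx, d_fx, d_syst, alpha=0.3, beta=0.03, n_clonogens=1e7):
    """Poisson TCP for an external-beam course (n_fx fractions of d_fx Gy)
    followed by a systemic component delivering d_syst Gy at low dose rate
    (beta term taken as negligible there, a common simplification)."""
    sf_ebrt = np.exp(-n_fx * (alpha * d_fx + beta * d_fx**2))
    sf_syst = np.exp(-alpha * d_syst)
    surviving = n_clonogens * sf_ebrt * sf_syst
    return np.exp(-surviving)          # Poisson probability of zero survivors

print(f"TCP = {tcp_sequential(n_fx=25, d_fx=2.0, d_syst=20.0):.4f}")
```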
Using Robust Standard Errors to Combine Multiple Regression Estimates with Meta-Analysis
ERIC Educational Resources Information Center
Williams, Ryan T.
2012-01-01
Combining multiple regression estimates with meta-analysis has continued to be a difficult task. A variety of methods have been proposed and used to combine multiple regression slope estimates with meta-analysis, however, most of these methods have serious methodological and practical limitations. The purpose of this study was to explore the use…
Comparison of FDMA and CDMA for second generation land-mobile satellite communications
NASA Technical Reports Server (NTRS)
Yongacoglu, A.; Lyons, R. G.; Mazur, B. A.
1990-01-01
The capacities of Code Division Multiple Access (CDMA) and Frequency Division Multiple Access (FDMA) systems (both analog and digital) are compared on the basis of identical link availabilities and physical propagation models. Parameters are optimized for a bandwidth-limited, multibeam environment. For CDMA, the benefits of voice-activated carriers, antenna discrimination, polarization reuse, return-link power control, and multipath suppression are included in the analysis. For FDMA, the advantages of bandwidth-efficient modulation/coding combinations, voice-activated carriers, polarization reuse, beam placement, and frequency staggering are taken into account.
People's preference patterns for gains/losses in multiple time period situations.
Chang, Shin-Shin; Chang, Jung-Hua
2013-10-01
Little research to date has been devoted to investigating whether people treat time differently from money when facing multiple gains or losses. This study tested the hypothesis that because time is characterized by perishability, fixed supply, and infungibility, people with strong motivation to obtain a long period of uninterrupted discretionary time would strive to trim the time needed for non-discretionary activities or to combine several non-discretionary activities. As a result, people prefer integration over segregation of multiple time losses or gains, which is not consistent with the prediction based on hedonic editing theory or the renewable resource model. This proposition is supported by results from four experiments.
Efficient Regressions via Optimally Combining Quantile Information*
Zhao, Zhibiao; Xiao, Zhijie
2014-01-01
We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
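A hedged sketch of the core idea with statsmodels: estimate the same slope at several quantiles and combine the estimates, here with simple inverse-variance weights rather than the paper's optimal weighting.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)   # heavy-tailed errors
X = sm.add_constant(x)

taus = [0.25, 0.5, 0.75]
slopes, weights = [], []
for tau in taus:
    fit = sm.QuantReg(y, X).fit(q=tau)
    slopes.append(fit.params[1])
    weights.append(1.0 / fit.bse[1] ** 2)          # inverse-variance weight

combined = np.average(slopes, weights=weights)
print(f"combined slope estimate: {combined:.3f}")
```

With heavy-tailed errors the combined estimate is typically more stable than ordinary least squares, which is the efficiency gain the paper quantifies.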
Strong-lensing analysis of A2744 with MUSE and Hubble Frontier Fields images
NASA Astrophysics Data System (ADS)
Mahler, G.; Richard, J.; Clément, B.; Lagattuta, D.; Schmidt, K.; Patrício, V.; Soucail, G.; Bacon, R.; Pello, R.; Bouwens, R.; Maseda, M.; Martinez, J.; Carollo, M.; Inami, H.; Leclercq, F.; Wisotzki, L.
2018-01-01
We present an analysis of Multi Unit Spectroscopic Explorer (MUSE) observations obtained on the massive Frontier Fields (FFs) cluster A2744. This new data set covers the entire multiply imaged region around the cluster core. The combined catalogue consists of 514 spectroscopic redshifts (with 414 new identifications). We use this redshift information to perform a strong-lensing analysis revising the multiple images previously found in the deep FF images, and add three new MUSE-detected multiply imaged systems with no obvious Hubble Space Telescope counterpart. The combined strong-lensing constraints include a total of 60 systems producing 188 images altogether, out of which 29 systems and 83 images are spectroscopically confirmed, making A2744 one of the most well-constrained clusters to date. Thanks to the large number of spectroscopic redshifts, we model the influence of substructures at larger radii using a parametrization that includes two cluster-scale components in the cluster core and several group-scale components in the outskirts. The resulting model accurately reproduces all the spectroscopic multiple systems, reaching an rms of 0.67 arcsec in the image plane. The large number of MUSE spectroscopic redshifts gives us a robust model, which we estimate reduces the systematic uncertainty on the 2D mass distribution by up to ∼2.5 times the statistical uncertainty in the cluster core. In addition, from a combination of the parametrization and the set of constraints, we estimate the relative systematic uncertainty to be up to 9 per cent at 200 kpc.
Gao, Xueping; Liu, Yinzhu; Sun, Bowen
2018-06-05
The risk of water shortage caused by uncertainties such as frequent drought, varied precipitation, multiple water resources, and different water demands brings new challenges to water transfer projects. Uncertainty affects both transferred water and local surface water; therefore, the relationship between them should be thoroughly studied to prevent water shortage. For more effective water management, an uncertainty-based water shortage risk assessment model (UWSRAM) is developed to study the combined effect of multiple water resources and analyze the degree of shortage under uncertainty. The UWSRAM combines copula-based Monte Carlo stochastic simulation with a chance-constrained programming-stochastic multiobjective optimization model, using the Lunan water-receiving area in China as an example. Statistical copula functions are employed to estimate the joint probability of available transferred water and local surface water and to sample from the multivariate probability distribution; the samples are used as inputs for the optimization model. The approach reveals the distribution of water shortage, emphasizes the importance of improving and updating the management of transferred water and local surface water, and examines their combined influence on water shortage risk assessment. Applying the UWSRAM yields the possible available water and shortages, along with the corresponding allocation measures under different water availability levels and violation probabilities. The UWSRAM is valuable for understanding the overall multi-source water availability and degree of water shortage, adapting to the uncertainty surrounding water resources, establishing effective water resource planning policies for managers, and achieving sustainable development.
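A minimal sketch of the copula sampling step: a Gaussian copula joins two fitted marginals for transferred and local surface water, and Monte Carlo draws give a shortage probability against a fixed demand. The marginal families, correlation, and demand value are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
rho = 0.4                                  # dependence between the two supplies

# Gaussian copula: correlated normals -> uniforms -> fitted marginals.
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = stats.norm.cdf(z)
transfer = stats.gamma(a=4.0, scale=25.0).ppf(u[:, 0])    # transferred water
local = stats.lognorm(s=0.5, scale=80.0).ppf(u[:, 1])     # local surface water

demand = 220.0
shortage = np.maximum(demand - (transfer + local), 0.0)
print("P(shortage) =", np.mean(shortage > 0))
print("mean shortage given shortage =", shortage[shortage > 0].mean())
```

The resulting shortage distribution is exactly the kind of input the chance-constrained optimization stage then allocates against.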
Model-based tomographic reconstruction
Chambers, David H; Lehman, Sean K; Goodman, Dennis M
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
Combined visual and motor evoked potentials predict multiple sclerosis disability after 20 years.
Schlaeger, Regina; Schindler, Christian; Grize, Leticia; Dellas, Sophie; Radue, Ernst W; Kappos, Ludwig; Fuhr, Peter
2014-09-01
The development of predictors of multiple sclerosis (MS) disability is difficult due to the complex interplay of pathophysiological and adaptive processes. The purpose of this study was to investigate whether combined evoked potential (EP) measures allow prediction of MS disability after 20 years. We examined 28 patients with clinically definite MS according to Poser's criteria, with Expanded Disability Status Scale (EDSS) scores and combined visual and motor EPs at entry (T0) and at 6 (T1), 12 (T2), and 24 (T3) months, and a cranial magnetic resonance imaging (MRI) scan at T0 and T2. EDSS testing was repeated at year 14 (T4) and year 20 (T5). Spearman rank correlation was used. We performed a multivariable regression analysis to examine predictive relationships of the sum of z-transformed EP latencies (s-EP(T0)) and other baseline variables with EDSS(T5). We found that s-EP(T0) correlated with EDSS(T5) (rho = 0.72, p < 0.0001) and with ΔEDSS(T5−T0) (rho = 0.50, p = 0.006). Backward selection resulted in the prediction model E(EDSS(T5)) = 3.91 − 2.22 × therapy + 0.079 × age + 0.057 × s-EP(T0) (Model 1, R² = 0.58), with therapy as a binary variable (1 = any disease-modifying therapy between T3 and T5, 0 = no therapy). Neither EDSS(T0) nor the T2-lesion or gadolinium (Gd)-enhancing lesion quantities at T0 improved prediction of EDSS(T5). The area under the receiver operating characteristic (ROC) curve was 0.89 for Model 1. These results further support a role for combined EP measures as predictors of long-term disability in MS. © The Author(s) 2014.
Dynamical scattering in coherent hard x-ray nanobeam Bragg diffraction
NASA Astrophysics Data System (ADS)
Pateras, A.; Park, J.; Ahn, Y.; Tilka, J. A.; Holt, M. V.; Kim, H.; Mawst, L. J.; Evans, P. G.
2018-06-01
Unique intensity features arising from dynamical diffraction appear in coherent x-ray nanobeam diffraction patterns of crystals that are thicker than the x-ray extinction depth or that exhibit combinations of nanoscale and mesoscale features. We demonstrate that dynamical scattering effects can be accurately predicted using an optical model combined with the Darwin theory of dynamical x-ray diffraction. The model includes the highly divergent coherent x-ray nanobeams produced by Fresnel zone plate focusing optics and accounts for primary extinction, multiple scattering, and absorption. The simulation accurately reproduces the dynamical scattering features of experimental diffraction patterns acquired from a GaAs/AlGaAs epitaxial heterostructure on a GaAs (001) substrate.
NASA Astrophysics Data System (ADS)
Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen
2018-03-01
Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems remain: 1) how to extract high-level features of aircraft; 2) locating objects within such large images is difficult and time-consuming; and 3) satellite images come at multiple resolutions. In this paper, inspired by biological visual mechanisms, a fusion detection framework is proposed that fuses a top-down visual mechanism (a deep CNN model) with a bottom-up visual mechanism (GBVS) to detect aircraft. In addition, we use a multi-scale training method for the deep CNN model to address the multiple-resolution problem. Experimental results demonstrate that our method achieves better detection results than the other methods.
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
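The ECOC construction itself is easy to demonstrate with scikit-learn's OutputCodeClassifier, which assigns each class a binary codeword and decodes by nearest code; this generic sketch on synthetic stand-ins for expression profiles does not implement the article's weighted or probabilistic decoders.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OutputCodeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for gene-expression profiles: 5 tumour classes.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=5, random_state=0)

ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=2.0,     # codeword length = 2 * n_classes
                            random_state=0)
print("CV accuracy:", cross_val_score(ecoc, X, y, cv=5).mean())
```

Longer codewords add redundancy, which is what lets the decoder absorb "bit inversion" errors from individual binary classifiers, the analogy to information transmission that the second reviewed approach formalizes.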
2015-01-01
Background: Computer-aided drug design has a long history of being applied to discover new molecules to treat various cancers, but it has always been focused on single targets. The development of systems biology has let scientists reveal more hidden mechanisms of cancers, but attempts to apply systems biology to cancer therapies remain at preliminary stages. Our lab has successfully developed various systems biology models for several cancers. Based on these achievements, we present the first attempt to combine multiple-target therapy with systems biology.
Methods: In our previous study, we identified 28 significant proteins--i.e., common core network markers--of four types of cancers as house-keeping proteins of these cancers. In this study, we ranked these proteins by summing their carcinogenesis relevance values (CRVs) across the four cancers, and then performed docking and pharmacophore modeling to do virtual screening on the NCI database for anti-cancer drugs. We also performed pathway analysis on these proteins using Panther and MetaCore to reveal more mechanisms of these cancer house-keeping proteins.
Results: We designed several approaches to discover targets for multiple-target cocktail therapies. In the first, we identified the top 20 drugs for each of the 28 cancer house-keeping proteins and analyzed the docking poses to further understand the interaction mechanisms of these drugs. After screening for duplicates, we found that 13 of these drugs could target 11 proteins simultaneously. In the second approach, we chose the top 5 proteins with the highest summed CRVs and used them as the drug targets. We built a pharmacophore and applied it to do virtual screening against the Life-Chemical library for anti-cancer drugs. Based on these results, wet-lab bio-scientists could freely investigate combinations of these drugs for multiple-target therapy for cancers, in contrast to traditional single-target therapy.
Conclusions: Combining systems biology with computer-aided drug design could help us develop novel drug cocktails with multiple targets. We believe this will enhance the efficiency of therapeutic practice and lead to new directions for cancer therapy. PMID:26680552
Using iPad Tablets for Self-modeling with Preschoolers: Videos versus Photos
ERIC Educational Resources Information Center
McCoy, Dacia M.; Morrison, Julie Q.; Barnett, Dave W.; Kalra, Hilary D.; Donovan, Lauren K.
2017-01-01
As technology becomes more accessible and acceptable in the preschool setting, teachers need effective strategies of incorporating it to address challenging behaviors. A nonconcurrent delayed multiple baseline design in combination with an alternating treatment design was utilized to investigate the effects of using iPad tablets to display video…
Landscape silviculture for late-successional reserve management
S Hummel; R.J. Barbour
2007-01-01
The effects of different combinations of multiple, variable-intensity silvicultural treatments on fire and habitat management objectives were evaluated for a ±6,000 ha forest reserve using simulation models and optimization techniques. Our methods help identify areas within the reserve where opportunities exist to minimize conflict between the dual landscape objectives...
Integrating Science and Management to Assess Forest Ecosystem Vulnerability to Climate Change
Leslie A. Brandt; Patricia R. Butler; Stephen D. Handler; Maria K. Janowiak; P. Danielle Shannon; Christopher W. Swanston
2017-01-01
We developed the ecosystem vulnerability assessment approach (EVAA) to help inform potential adaptation actions in response to a changing climate. EVAA combines multiple quantitative models and expert elicitation from scientists and land managers. In each of eight assessment areas, a panel of local experts determined potential vulnerability of forest ecosystems to...
Safeguards Technology Development Program 1st Quarter FY 2018 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prasad, Manoj K.
LLNL will evaluate the performance of a stilbene-based scintillation detector array for IAEA neutron multiplicity counting (NMC) applications. This effort will combine newly developed modeling methodologies and recently acquired high-efficiency stilbene detector units to quantitatively compare the prototype system performance with the conventional He-3 counters and liquid scintillator alternatives.
Robustness of Ability Estimation to Multidimensionality in CAST with Implications to Test Assembly
ERIC Educational Resources Information Center
Zhang, Yanwei; Nandakumar, Ratna
2006-01-01
Computer Adaptive Sequential Testing (CAST) is a test delivery model that combines features of the traditional conventional paper-and-pencil testing and item-based computerized adaptive testing (CAT). The basic structure of CAST is a panel composed of multiple testlets adaptively administered to examinees at different stages. Current applications…
Spatially-varied erosion modeling using WEPP for timber harvested and burned hillslopes
Peter R. Robichaud; T. M. Monroe
1997-01-01
Spatially-varied hydrologic surface conditions exist on steep hillslopes after timber harvest operation and site preparation burning treatments. Site preparation burning creates low- and high-severity burn surface conditions or disturbances. In this study, a hillslope was divided into multiple combinations of surface conditions to determine how their spatial...
Oral Reading Fluency Assessment: Issues of Construct, Criterion, and Consequential Validity
ERIC Educational Resources Information Center
Valencia, Sheila W.; Smith, Antony T.; Reece, Anne M.; Li, Min; Wixson, Karen K.; Newman, Heather
2010-01-01
This study investigated multiple models for assessing oral reading fluency, including 1-minute oral reading measures that produce scores reported as words correct per minute (wcpm). We compared a measure of wcpm with measures of the individual and combined indicators of oral reading fluency (rate, accuracy, prosody, and comprehension) to examine…
Changes of crop rotation in Iowa determined from the USDA-NASS cropland data layer product
USDA-ARS?s Scientific Manuscript database
Crop rotation is one of the important decisions made independently by numerous farm managers, and is a critical variable in models of crop growth and soil carbon. By combining multiple years (2001-2009) of the USDA National Agricultural Statistics Service (NASS) cropland data layer (CDL), it is pos...
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Ma, Ning; Lv, Chengwei
2016-08-01
Efficient water transfer and allocation are critical for disaster mitigation in drought emergencies. This is especially important when the different interests of the multiple decision makers and the fluctuating water resource supply and demand simultaneously cause space and time conflicts. To achieve more effective and efficient water transfers and allocations, this paper proposes a novel optimization method with an integrated bi-level structure and a dynamic strategy, in which the bi-level structure works to deal with space dimension conflicts in drought emergencies, and the dynamic strategy is used to deal with time dimension conflicts. Combining these two optimization methods, however, makes calculation complex, so an integrated interactive fuzzy program and a PSO-POA are combined to develop a hybrid-heuristic algorithm. The successful application of the proposed model in a real world case region demonstrates its practicality and efficiency. Dynamic cooperation between multiple reservoirs under the coordination of a global regulator reflects the model's efficiency and effectiveness in drought emergency water transfer and allocation, especially in a fluctuating environment. On this basis, some corresponding management recommendations are proposed to improve practical operations.
Vanderveldt, Ariana; Green, Leonard; Myerson, Joel
2014-01-01
The value of an outcome is affected both by the delay until its receipt (delay discounting) and by the likelihood of its receipt (probability discounting). Despite being well-described by the same hyperboloid function, delay and probability discounting involve fundamentally different processes, as revealed, for example, by the differential effects of reward amount. Previous research has focused on the discounting of delayed and probabilistic rewards separately, with little research examining more complex situations in which rewards are both delayed and probabilistic. In two experiments, participants made choices between smaller rewards that were both immediate and certain and larger rewards that were both delayed and probabilistic. Analyses revealed significant interactions between delay and probability factors inconsistent with an additive model. In contrast, a hyperboloid discounting model in which delay and probability were combined multiplicatively provided an excellent fit to the data. These results suggest that the hyperboloid is a good descriptor of decision making in complicated monetary choice situations like those people encounter in everyday life. PMID:24933696
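The multiplicative combination can be written V = A / [(1 + kD)^s (1 + hΘ)^s], where D is the delay and Θ = (1 − p)/p the odds against receipt; the sketch below fits that form to synthetic indifference points with scipy. Using a single shared exponent s is a simplifying assumption here, as are all parameter values.

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperboloid_mult(X, k, h, s):
    """Multiplicative combination of delay and probability discounting."""
    delay, odds_against = X
    return 1.0 / ((1 + k * delay) ** s * (1 + h * odds_against) ** s)

rng = np.random.default_rng(0)
delays = np.tile([0, 7, 30, 180, 365], 4).astype(float)   # days
probs = np.repeat([0.9, 0.7, 0.5, 0.3], 5)
odds = (1 - probs) / probs
true = hyperboloid_mult((delays, odds), k=0.01, h=1.5, s=0.8)
value = np.clip(true + rng.normal(0, 0.03, true.size), 0.01, 1.0)

(k, h, s), _ = curve_fit(hyperboloid_mult, (delays, odds), value,
                         p0=[0.05, 1.0, 1.0], bounds=(0, [5, 20, 3]))
print(f"k={k:.3f}, h={h:.3f}, s={s:.3f}")
```

An additive alternative would sum separate delay and probability discounts; the interaction terms the authors report are what rule that form out.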
Comparing and combining biomarkers as principle surrogates for time-to-event clinical endpoints.
Gabriel, Erin E; Sachs, Michael C; Gilbert, Peter B
2015-02-10
Principal surrogate endpoints are useful as targets for phase I and II trials. In many recent trials, multiple post-randomization biomarkers are measured. However, few statistical methods exist for comparison of or combination of biomarkers as principal surrogates, and none of these methods to our knowledge utilize time-to-event clinical endpoint information. We propose a Weibull model extension of the semi-parametric estimated maximum likelihood method that allows for the inclusion of multiple biomarkers in the same risk model as multivariate candidate principal surrogates. We propose several methods for comparing candidate principal surrogates and evaluating multivariate principal surrogates. These include the time-dependent and surrogate-dependent true and false positive fraction, the time-dependent and the integrated standardized total gain, and the cumulative distribution function of the risk difference. We illustrate the operating characteristics of our proposed methods in simulations and outline how these statistics can be used to evaluate and compare candidate principal surrogates. We use these methods to investigate candidate surrogates in the Diabetes Control and Complications Trial. Copyright © 2014 John Wiley & Sons, Ltd.
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
Update on Linear Mode Photon Counting with the HgCdTe Linear Mode Avalanche Photodiode
NASA Technical Reports Server (NTRS)
Beck, Jeffrey D.; Kinch, Mike; Sun, Xiaoli
2014-01-01
The behavior of the gain-voltage characteristic of the mid-wavelength infrared cutoff HgCdTe linear mode avalanche photodiode (e-APD) is discussed both experimentally and theoretically as a function of the width of the multiplication region. Data are shown that demonstrate a strong dependence of the gain at a given bias voltage on the width of the n- gain region. Geometrical and fundamental theoretical models are examined to explain this behavior. The geometrical model takes into account the gain-dependent optical fill factor of the cylindrical APD. The theoretical model is based on the ballistic ionization model being developed for the HgCdTe APD. It is concluded that the fundamental theoretical explanation is the dominant effect. A model is developed that combines both the geometrical and fundamental effects and also accounts for the effect of the varying multiplication width in the low-bias region of the gain-voltage curve. It is concluded that the lower-than-expected gain seen in the first 2 × 8 HgCdTe linear mode photon counting APD arrays, and the higher excess noise factor, were very likely due to the larger-than-typical multiplication region length in the photon counting APD pixel design. The implications of these effects on device photon counting performance are discussed.
NASA Technical Reports Server (NTRS)
Song, Y. T.
2002-01-01
It is found that two adaptive parametric functions can be introduced into the basic ocean equations to exploit the optimal or hybrid features of the commonly used z-level, terrain-following, isopycnal, and pressure coordinates in numerical ocean models. The two parametric functions are formulated by combining three techniques: the arbitrary vertical coordinate system of Kasahara (1974), the Jacobian pressure gradient formulation of Song (1998), and a newly developed metric factor that permits both compressible (non-Boussinesq) and incompressible (Boussinesq) approximations. Based on the new formulation, an adaptive modeling strategy is proposed and a staggered finite volume method is designed to ensure conservation of important physical properties and numerical accuracy. Implementation of the combined techniques in SCRUM (Song and Haidvogel, 1994) shows that the adaptive modeling strategy can be applied to any existing ocean model without incurring computational expense or altering the original numerical schemes. Such a generalized-coordinate model is expected to benefit diverse ocean modelers by allowing them to easily choose optimal vertical structures and share modeling resources on a common model platform. Several representative oceanographic problems with different scales and characteristics, such as coastal canyons, basin-scale circulation, and global ocean circulation, are used to demonstrate the model's capability for multiple applications. New results show that the model is capable of simultaneously resolving both Boussinesq and non-Boussinesq, and both small- and large-scale, processes well. This talk will focus on applications of multiple satellite sensing data in eddy-resolving simulations of the Asian marginal seas and the Kuroshio. Attention will be given to how TOPEX/Poseidon SSH, TRMM SST, and GRACE ocean bottom pressure can be correctly represented in a non-Boussinesq model.
NASA Astrophysics Data System (ADS)
Solov'eva, Yu. V.; Fakhrutdinova, Ya. D.; Starenchenko, V. A.
2015-01-01
The superlocalization of plastic deformation in L12 alloys has been studied numerically by combining a dislocation-kinetics model of the deformation-induced and heat-treatment-induced strengthening of an element of a deformable medium with a micromechanical model of plastic deformation in an elastoplastic medium. It is shown that superlocalization of plastic deformation is determined by the presence of stress concentrators and by nonmonotonic strengthening of the elements of the deformable medium. Repeated nonmonotonicity in the strengthening of an elementary volume of the medium can be responsible for the multiplicity of bands of microplastic localization of deformation.
Salient object detection method based on multiple semantic features
NASA Astrophysics Data System (ADS)
Wang, Chunyang; Yu, Chunyan; Song, Meiping; Wang, Yulei
2018-04-01
Existing salient object detection models can only detect the approximate location of a salient object, or may mistakenly highlight the background. To resolve this problem, a salient object detection method based on image semantic features is proposed. First, three novel saliency features are presented: an object edge density feature (EF), an object semantic feature based on the convex hull (CF), and an object lightness contrast feature (LF). Second, the multiple saliency features are trained with random detection windows. Third, a naive Bayesian model is used to combine these features for salient object detection. Results on public datasets show that our method performs well: the location of the salient object can be fixed, and the salient object can be accurately detected and marked by the specific window.
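A hedged sketch of the combination step: the three window-level features (EF, CF, LF) form a three-column matrix scored by a Gaussian naive Bayes model; the feature distributions here are synthetic.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n = 2000
# Columns: edge density (EF), convex-hull semantic score (CF), lightness (LF).
salient = rng.normal([0.7, 0.6, 0.8], 0.15, size=(n // 2, 3))
background = rng.normal([0.3, 0.2, 0.4], 0.15, size=(n // 2, 3))
X = np.vstack([salient, background])
y = np.array([1] * (n // 2) + [0] * (n // 2))

nb = GaussianNB().fit(X, y)
window = [[0.65, 0.55, 0.75]]              # a candidate detection window
print("P(salient | EF, CF, LF) =", nb.predict_proba(window)[0, 1])
```

The naive independence assumption keeps the combination cheap per window, which matters when many random detection windows must be scored per image.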
Combined mining: discovering informative knowledge in complex data.
Cao, Longbing; Zhang, Huaifeng; Zhao, Yanchang; Luo, Dan; Zhang, Chengqi
2011-06-01
Enterprise data mining applications often involve complex data such as multiple large heterogeneous data sources, user preferences, and business impact. In such situations, a single method or one-step mining is often limited in discovering informative knowledge. It would also be very time and space consuming, if not impossible, to join relevant large data sources for mining patterns consisting of multiple aspects of information. It is crucial to develop effective approaches for mining patterns combining necessary information from multiple relevant business lines, catering for real business settings and decision-making actions rather than just providing a single line of patterns. The recent years have seen increasing efforts on mining more informative patterns, e.g., integrating frequent pattern mining with classifications to generate frequent pattern-based classifiers. Rather than presenting a specific algorithm, this paper builds on our existing works and proposes combined mining as a general approach to mining for informative patterns combining components from either multiple data sets or multiple features or by multiple methods on demand. We summarize general frameworks, paradigms, and basic processes for multifeature combined mining, multisource combined mining, and multimethod combined mining. Novel types of combined patterns, such as incremental cluster patterns, can result from such frameworks, which cannot be directly produced by the existing methods. A set of real-world case studies has been conducted to test the frameworks, with some of them briefed in this paper. They identify combined patterns for informing government debt prevention and improving government service objectives, which show the flexibility and instantiation capability of combined mining in discovering informative knowledge in complex data.
Tomasek, Ladislav
2013-01-01
The aim of the present study was to evaluate the risk of lung cancer from combined exposure to radon and smoking. Methodologically, it is based on case-control studies nested within two Czech cohort studies of nearly 11,000 miners followed-up for mortality in 1952–2010 and nearly 12,000 inhabitants exposed to high levels of radon in homes, with mortality follow-up in 1960–2010. In addition to recorded radon exposure, these studies use information on smoking collected from the subjects or their relatives. A total of 1,029 and 370 cases with smoking information have been observed in the occupational and environmental (residential) studies, respectively. Three or four control subjects have been individually matched to cases according to sex, year of birth, and age. The combined effect from radon and smoking is analyzed in terms of geometric mixture models of which the additive and multiplicative models are special cases. The resulting models are relatively close to the additive interaction (mixing parameter 0.2 and 0.3 in the occupational and residential studies, respectively). The impact of the resulting model in the residential radon study is illustrated by estimates of lifetime risk in hypothetical populations of smokers and non-smokers. In comparison to the multiplicative risk model, the lifetime risk from the best geometric mixture model is considerably higher, particularly in the non-smoking population. PMID:23470882
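The geometric mixture can be stated compactly: with mixing parameter λ, the joint relative risk is RR = RR_mult^λ · RR_add^(1−λ), so λ = 1 recovers the multiplicative model and λ = 0 the additive one. A small sketch with invented relative-risk values:

```python
def geometric_mixture_rr(rr_radon, rr_smoke, lam):
    """Joint relative risk as a geometric mixture of the additive and
    multiplicative combinations of two exposures."""
    rr_add = rr_radon + rr_smoke - 1.0       # additive combination
    rr_mult = rr_radon * rr_smoke            # multiplicative combination
    return rr_mult ** lam * rr_add ** (1.0 - lam)

# Illustrative values: radon doubles risk, smoking multiplies it by 10.
for lam in (0.0, 0.2, 0.3, 1.0):
    print(f"lambda={lam:.1f}: joint RR = {geometric_mixture_rr(2, 10, lam):.1f}")
```

At the fitted mixing parameters (0.2 and 0.3), the joint risk sits much closer to the additive than the multiplicative prediction, which is why the estimated lifetime risk for non-smokers is considerably higher than under a multiplicative model.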
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
Swift, Brenna E.; Williams, Brent A.; Kosaka, Yoko; Wang, Xing-Hua; Medin, Jeffrey A.; Viswanathan, Sowmya; Martinez-Lopez, Joaquin; Keating, Armand
2012-01-01
Background: Novel therapies capable of targeting drug-resistant clonogenic MM cells are required for more effective treatment of multiple myeloma. This study investigates the cytotoxicity of natural killer (NK) cell lines against bulk and clonogenic multiple myeloma and evaluates tumor burden after NK cell therapy in a bioluminescent xenograft mouse model.
Design and Methods: The cytotoxicity of NK cell lines was evaluated against bulk multiple myeloma cell lines using chromium release and flow cytometry cytotoxicity assays. Selected activating receptors on NK cells were blocked to determine their role in multiple myeloma recognition. Growth inhibition of clonogenic multiple myeloma cells was assessed in a methylcellulose clonogenic assay in combination with secondary replating to evaluate the self-renewal of residual progenitors after NK cell treatment. A bioluminescent mouse model was developed using the human U266 cell line transduced to express green fluorescent protein and luciferase (U266eGFPluc) to monitor disease progression in vivo and assess bone marrow engraftment after intravenous NK-92 cell therapy.
Results: Three multiple myeloma cell lines were sensitive to NK-92 and KHYG-1 cytotoxicity mediated by the NKp30, NKp46, NKG2D, and DNAM-1 activating receptors. NK-92 and KHYG-1 demonstrated 2- to 3-fold greater inhibition of clonogenic multiple myeloma growth compared with killing of the bulk tumor population. In addition, the residual colonies after treatment formed significantly fewer colonies than the control upon secondary replating, for a cumulative clonogenic inhibition of 89-99% at the 20:1 effector-to-target ratio. Multiple myeloma tumor burden was reduced by NK-92 in a xenograft mouse model, as measured by bioluminescence imaging and by reduction in bone marrow engraftment of U266eGFPluc cells by flow cytometry.
Conclusions: This study demonstrates that NK-92 and KHYG-1 are capable of killing clonogenic and bulk multiple myeloma cells. In addition, multiple myeloma tumor burden in a xenograft mouse model was reduced by intravenous NK-92 cell therapy. Since multiple myeloma colony frequency correlates with survival, our observations have important clinical implications and suggest that clinical studies of NK cell lines to treat MM are warranted. PMID:22271890
Wafer hotspot prevention using etch aware OPC correction
NASA Astrophysics Data System (ADS)
Hamouda, Ayman; Power, Dave; Salama, Mohamed; Chen, Ao
2016-03-01
As technology development advances into deep-sub-wavelength nodes, multiple patterning is becoming essential to achieving technology shrink requirements. Recent Optical Proximity Correction (OPC) technology has introduced simultaneous correction of multiple mask patterns to enable multiple-patterning awareness during OPC correction, which is essential for preventing inter-layer hot-spots during the final pattern transfer. In the state-of-the-art literature, multi-layer awareness is achieved using simultaneous resist-contour simulations to predict and correct hot-spots during mask generation. However, this approach assumes a uniform etch shrink response for all patterns, independent of their proximity, which is not sufficient to fully prevent inter-exposure hot-spots, for example, color-space violations post-etch or via coverage/enclosure failures post-etch. In this paper, we explain the need to include the etch component during multiple-patterning OPC. We also introduce a novel approach for etch-aware simultaneous multiple-patterning OPC, in which we calibrate and verify a lumped model that includes the combined resist and etch responses. Adding this extra simulation condition during OPC is suitable for full-chip processing from a computational-intensity point of view. Using this model during OPC to predict and correct inter-exposure hot-spots is similar to previously proposed multiple-patterning OPC, yet our approach more accurately corrects post-etch defects as well.
NASA Astrophysics Data System (ADS)
Mekanik, F.; Imteaz, M. A.; Gato-Trinidad, S.; Elmahdi, A.
2013-10-01
In this study, the application of Artificial Neural Networks (ANN) and multiple regression analysis (MR) to forecast long-term seasonal spring rainfall in Victoria, Australia was investigated using lagged El Nino Southern Oscillation (ENSO) and Indian Ocean Dipole (IOD) as potential predictors. The use of dual (combined lagged ENSO-IOD) input sets for calibrating and validating ANN and MR models is proposed to investigate the simultaneous effect of past values of these two major climate modes on long-term spring rainfall prediction. The MR models that did not violate the limits of statistical significance and multicollinearity were selected for future spring rainfall forecasts. The ANN was developed in the form of a multilayer perceptron using the Levenberg-Marquardt algorithm. Both MR and ANN modelling were assessed statistically using mean square error (MSE), mean absolute error (MAE), Pearson correlation (r) and the Willmott index of agreement (d). The developed MR and ANN models were tested on out-of-sample test sets; the MR models showed very poor generalisation ability for east Victoria, with correlation coefficients of -0.99 to -0.90, compared to ANN with correlation coefficients of 0.42-0.93; ANN models also showed better generalisation ability for central and west Victoria, with correlation coefficients of 0.68-0.85 and 0.58-0.97 respectively. The ability of the multiple regression models to forecast out-of-sample sets is comparable to that of ANN for Daylesford in central Victoria and Kaniva in west Victoria (r = 0.92 and 0.67 respectively). The errors of the testing sets for ANN models are generally lower compared to the multiple regression models. The statistical analysis suggests the potential of ANN over MR models for rainfall forecasting using large-scale climate modes.
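For readers unfamiliar with the four verification statistics named above, the following minimal Python sketch computes MSE, MAE, Pearson r and the Willmott index of agreement d; the rainfall values and function names are invented for illustration and are not from the study.

```python
import numpy as np

def willmott_d(obs, pred):
    """Willmott index of agreement: 1 = perfect agreement, 0 = none."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

def skill_scores(obs, pred):
    """MSE, MAE, Pearson r and Willmott d for one forecast series."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    return {"MSE": np.mean(err ** 2),
            "MAE": np.mean(np.abs(err)),
            "r": np.corrcoef(obs, pred)[0, 1],
            "d": willmott_d(obs, pred)}

# invented spring-rainfall observations vs forecasts (mm)
obs = [55.0, 80.2, 61.3, 95.7, 42.1]
pred = [60.1, 75.5, 58.0, 101.2, 47.3]
print(skill_scores(obs, pred))
```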
Assessing NARCCAP climate model effects using spatial confidence regions
French, Joshua P.; McGinnis, Seth; Schwartzman, Armin
2017-01-01
We assess similarities and differences between model effects for the North American Regional Climate Change Assessment Program (NARCCAP) climate models using varying classes of linear regression models. Specifically, we consider how the average temperature effect differs for the various global and regional climate model combinations, including assessment of possible interaction between the effects of global and regional climate models. We use both pointwise and simultaneous inference procedures to identify regions where global and regional climate model effects differ. We also show that results from pointwise inference can be misleading, and that accounting for multiple comparisons is important for making proper inference. PMID:28936474
Anugraha, Gandhirajan; Jeyaprita, Parasurama Jawaharlal; Madhumathi, Jayaprakasam; Sheeba, Tamilvanan; Kaliraj, Perumal
2013-12-01
Although the multiple-vaccine strategy for lymphatic filariasis has provided tremendous hope, the choice of antigens used in combination has determined its success in previous studies. Multiple antigens comprising key vaccine candidates from different life cycle stages would provide a promising strategy if the antigenic combination is chosen by careful screening. In order to analyze one such combination, we have used a chimeric construct carrying the well-studied B. malayi antigens thioredoxin (BmTRX) and venom allergen homologue (BmVAH) as a fusion protein (TV) and evaluated its immune responses in a mouse model. The efficacy of the fusion protein vaccine was explored in comparison with the single-antigen vaccines and their cocktail. In mice, TV induced a significantly higher antibody titer of 128,000 compared to the cocktail vaccine TRX+VAH (50,000) and the single-antigen vaccines TRX (16,000) and VAH (50,000). Furthermore, TV elicited a higher level of cellular proliferative response together with elevated levels of IFN-γ, IL-4 and IL-5, indicating a balanced Th1/Th2 response. The isotype antibody profile showed significantly high levels of IgG1 and IgG2b, confirming the balanced response elicited by TV. Immunization with the TV antigen induced higher levels of both humoral and cellular immune responses than either the cocktail or each antigen given alone. The results suggest that TV is highly immunogenic in mice and hence the combination needs to be evaluated for its prophylactic potential.
Genomic prediction based on data from three layer lines using non-linear regression models.
Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L
2014-11-06
Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best an accuracy similar to that of linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
Discovering Synergistic Drug Combination from a Computational Perspective.
Ding, Pingjian; Luo, Jiawei; Liang, Cheng; Xiao, Qiu; Cao, Buwen; Li, Guanghui
2018-03-30
Synergistic drug combinations play an important role in the treatment of complex diseases. The identification of effective drug combinations is vital to further reduce side effects and improve therapeutic efficiency. In past years, in vitro screening has been the main route to discovering synergistic drug combinations; however, it is limited by time and resource consumption. Therefore, with the rapid development of computational models and the explosive growth of large-scale phenotypic data, computational methods for discovering synergistic drug combinations are an efficient and promising tool that contributes to precision medicine. How the computational model is constructed is key, as different computational strategies yield different performance. In this review, recent advancements in computational methods for predicting effective drug combinations are surveyed from multiple aspects. First, the various datasets utilized to discover synergistic drug combinations are summarized. Second, we discuss feature-based approaches and partition these methods into two classes: feature-based methods built on similarity measures, and feature-based methods built on machine learning. Third, we discuss network-based approaches for uncovering synergistic drug combinations. Finally, we analyze current computational methods for predicting effective drug combinations and outline future prospects. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
An integrated modelling framework for neural circuits with multiple neuromodulators.
Joshi, Alok; Youssofzadeh, Vahab; Vemana, Vinith; McGinnity, T M; Prasad, Girijesh; Wong-Lin, KongFatt
2017-01-01
Neuromodulators are endogenous neurochemicals that regulate biophysical and biochemical processes, which control brain function and behaviour, and are often the targets of neuropharmacological drugs. Neuromodulator effects are generally complex, partly owing to the involvement of broad innervation, co-release of neuromodulators, complex intra- and extrasynaptic mechanisms, the existence of multiple receptor subtypes and high interconnectivity within the brain. In this work, we propose an efficient yet sufficiently realistic computational neural modelling framework to study some of these complex behaviours. Specifically, we propose a novel dynamical neural circuit model that integrates the effective neuromodulator-induced currents based on various experimental data (e.g. electrophysiology, neuropharmacology and voltammetry). The model can incorporate multiple interacting brain regions, including neuromodulator sources, simulates efficiently, and is easily extendable to large-scale brain models, e.g. for neuroimaging purposes. As an example, we model a network of mutually interacting neural populations in the lateral hypothalamus, dorsal raphe nucleus and locus coeruleus, which are major sources of the neuromodulators orexin/hypocretin, serotonin and norepinephrine/noradrenaline, respectively, and which play significant roles in regulating many physiological functions. We demonstrate that such a model can provide predictions of systemic drug effects of popular antidepressants (e.g. reuptake inhibitors), neuromodulator antagonists or their combinations. Finally, we developed user-friendly graphical user interface software for model simulation and visualization for both fundamental sciences and pharmacological studies. © 2017 The Authors.
Technology Performance Level (TPL) Scoring Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weber, Jochem; Roberts, Jesse D.; Costello, Ronan
2016-09-01
Three different ways of combining scores are used in the revised formulation: arithmetic mean, geometric mean and multiplication with normalisation. Arithmetic mean is used when combining scores that measure similar attributes, e.g. for combining costs. The arithmetic mean has the property that it is similar to a logical OR; e.g. when combining costs it does not matter what the individual costs are, only what the combined cost is. Geometric mean and multiplication are used when combining scores that measure disparate attributes. Multiplication is similar to a logical AND and is used to combine ‘must haves.’ As a result, this method is more punitive than the geometric mean; to get a good score in the combined result it is necessary to have a good score in ALL of the inputs, e.g. the different types of survivability are ‘must haves.’ On balance, the revised TPL is probably less punitive than the previous spreadsheet, since multiplication is used sparingly as a method of combining scores. This is in line with the feedback of the Wave Energy Prize judges.
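As a rough illustration of the three combination rules described above, here is a short Python sketch. Scores are assumed on a 0-10 scale, and the normalisation convention for the multiplicative rule is our assumption, not taken from the OSTI record.

```python
import numpy as np

def arithmetic_mean(scores):
    """Logical-OR-like: only the combined level matters (used for costs)."""
    return float(np.mean(scores))

def geometric_mean(scores):
    """Combines disparate attributes; punishes low inputs moderately."""
    return float(np.prod(scores) ** (1.0 / len(scores)))

def multiplicative(scores, s_max=10.0):
    """Logical-AND-like 'must haves': any low input drags the result down.
    Result is normalised back to the 0..s_max scale (assumed convention)."""
    scores = np.asarray(scores, float)
    return float(s_max * np.prod(scores / s_max))

trial = [9.0, 8.0, 3.0]  # one weak 'must have' among three attributes
print(arithmetic_mean(trial), geometric_mean(trial), multiplicative(trial))
# prints 6.67, 6.0, 2.16: multiplication is the most punitive rule,
# the geometric mean next, and the arithmetic mean the least
```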
Screening Models of Aquifer Heterogeneity Using the Flow Dimension
NASA Astrophysics Data System (ADS)
Walker, D. D.; Cello, P. A.; Roberts, R. M.; Valocchi, A. J.
2007-12-01
Despite advances in test interpretation and modeling, typical groundwater modeling studies only indirectly use the parameters and information inferred from hydraulic tests. In particular, the Generalized Radial Flow approach to test interpretation infers the flow dimension, a parameter describing the geometry of the flow field during a hydraulic test. Noninteger values of the flow dimension are often inferred for tests in highly heterogeneous aquifers, yet subsequent modeling studies typically ignore the flow dimension. Monte Carlo analyses of detailed numerical models of aquifer tests examine the flow dimension for several stochastic models of heterogeneous transmissivity, T(x). These include multivariate lognormal, fractional Brownian motion, a site percolation network, and discrete linear features with lengths distributed as a power law. The behavior of the simulated flow dimensions is compared to the flow dimensions observed for multiple aquifer tests in a fractured dolomite aquifer in the Great Lakes region of North America. The combination of multiple hydraulic tests, observed fracture patterns, and the Monte Carlo results is used to screen models of heterogeneity and their parameters for subsequent groundwater flow modeling.
Locally adaptive MR intensity models and MRF-based segmentation of multiple sclerosis lesions
NASA Astrophysics Data System (ADS)
Galimzianova, Alfiia; Lesjak, Žiga; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga
2015-03-01
Neuroimaging biomarkers are an important paraclinical tool used to characterize a number of neurological diseases; however, their extraction requires accurate and reliable segmentation of normal and pathological brain structures. For MR images of healthy brains, intensity models of normal-appearing brain tissue (NABT) in combination with Markov random field (MRF) models are known to give reliable and smooth NABT segmentation. However, the presence of pathology, MR intensity bias and natural tissue-dependent intensity variability together pose difficult challenges for reliable estimation of a NABT intensity model from MR images. In this paper, we propose a novel method for segmentation of normal and pathological structures in brain MR images of multiple sclerosis (MS) patients that is based on a locally-adaptive NABT model, a robust method for the estimation of model parameters, and a MRF-based segmentation framework. Experiments on multi-sequence brain MR images of 27 MS patients show that, compared to a whole-brain model and to the widely used Expectation-Maximization Segmentation (EMS) method, the locally-adaptive NABT model increases the accuracy of MS lesion segmentation.
A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.
Yu, Jun; Wang, Zeng-Fu
2015-05-01
A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. An online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence on the construction of the online appearance model. A tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.
A Hands-on Physical Analog Demonstration of Real-Time Volcano Deformation Monitoring with GNSS/GPS
NASA Astrophysics Data System (ADS)
Jones, J. R.; Schobelock, J.; Nguyen, T. T.; Rajaonarison, T. A.; Malloy, S.; Njinju, E. A.; Guerra, L.; Stamps, D. S.; Glesener, G. B.
2017-12-01
Teaching about volcano deformation and how scientists study these processes using GNSS/GPS can be challenging, since volcanoes and GNSS/GPS equipment are not readily accessible to most teachers. Educators and curriculum materials specialists have developed and shared a number of activities and demonstrations to help students visualize volcanic processes and the ways scientists use GNSS/GPS in their research. Using resources provided by MEDL (the Modeling and Educational Demonstrations Laboratory) in the Department of Geosciences at Virginia Tech, we combined materials and techniques from these previous works to produce a hands-on physical analog model from which students can learn about GNSS/GPS studies of volcano deformation. The model functions as both a qualitative and quantitative learning tool with good analogical affordances. In our presentation, we describe multiple ways of teaching with the model, the kinds of materials that can be used to build it, and ways the model could be enhanced with the addition of Vernier sensors for data collection.
Multivariate meta-analysis using individual participant data.
Riley, R D; Price, M J; Jackson, D; Wardle, M; Gueyffier, F; Wang, J; Staessen, J A; White, I R
2015-06-01
When combining results across related studies, a multivariate meta-analysis allows the joint synthesis of correlated effect estimates from multiple outcomes. Joint synthesis can improve efficiency over separate univariate syntheses, may reduce selective outcome reporting biases, and enables joint inferences across the outcomes. A common issue is that within-study correlations needed to fit the multivariate model are unknown from published reports. However, provision of individual participant data (IPD) allows them to be calculated directly. Here, we illustrate how to use IPD to estimate within-study correlations, using a joint linear regression for multiple continuous outcomes and bootstrapping methods for binary, survival and mixed outcomes. In a meta-analysis of 10 hypertension trials, we then show how these methods enable multivariate meta-analysis to address novel clinical questions about continuous, survival and binary outcomes; treatment-covariate interactions; adjusted risk/prognostic factor effects; longitudinal data; prognostic and multiparameter models; and multiple treatment comparisons. Both frequentist and Bayesian approaches are applied, with example software code provided to derive within-study correlations and to fit the models. © 2014 The Authors. Research Synthesis Methods published by John Wiley & Sons, Ltd.
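As a toy illustration of the bootstrap route to within-study correlations described above, the sketch below resamples invented IPD from a single two-outcome trial; all numbers, outcome names and the mean-difference effect measure are assumptions for illustration, not values from the hypertension meta-analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# invented IPD for one trial: treatment arm and two correlated outcomes
n = 200
treat = rng.integers(0, 2, n)
err = rng.multivariate_normal([0.0, 0.0], [[225.0, 90.0], [90.0, 100.0]], n)
sbp = -8.0 * treat + err[:, 0]   # systolic blood pressure change
dbp = -4.0 * treat + err[:, 1]   # diastolic blood pressure change

def effects(idx):
    """Mean-difference treatment effect on each outcome for one resample."""
    t, c = idx[treat[idx] == 1], idx[treat[idx] == 0]
    return (sbp[t].mean() - sbp[c].mean(),
            dbp[t].mean() - dbp[c].mean())

# bootstrap the pair of effect estimates and correlate them across resamples
boot = np.array([effects(rng.integers(0, n, n)) for _ in range(2000)])
print("within-study correlation:", round(np.corrcoef(boot.T)[0, 1], 2))
```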
Monitoring multiple components in vinegar fermentation using Raman spectroscopy.
Uysal, Reyhan Selin; Soykut, Esra Acar; Boyaci, Ismail Hakki; Topcu, Ali
2013-12-15
In this study, the utility of Raman spectroscopy (RS) with chemometric methods for the quantification of multiple components in a fermentation process was investigated. Vinegar, the product of a two-stage fermentation, was used as a model, and glucose and fructose consumption, ethanol production and consumption, and acetic acid production were followed using RS and the partial least squares (PLS) method. Calibration of the PLS method was performed using model solutions. The prediction capability of the method was then investigated with both model and real samples, with HPLC used as a reference method. Comparison of RS-PLS against HPLC showed good correlations between predicted and actual sample values for glucose (R(2)=0.973), fructose (R(2)=0.988), ethanol (R(2)=0.996) and acetic acid (R(2)=0.983). In conclusion, a combination of RS with chemometric methods can be applied to monitor multiple components of the fermentation process from start to finish with a single measurement in a short time. Copyright © 2013 Elsevier Ltd. All rights reserved.
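A minimal Python/scikit-learn sketch of the RS-PLS workflow follows, using synthetic "spectra" in place of real Raman measurements; the analyte names, band shapes and sample counts are invented stand-ins, not the study's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# toy "spectra": 60 samples x 500 wavenumbers, built from two latent analytes
conc = rng.uniform(0, 10, (60, 2))                 # e.g. glucose, ethanol (g/L)
bands = rng.normal(0, 1, (2, 500))                 # pure-component signatures
X = conc @ bands + rng.normal(0, 0.05, (60, 500))  # mixture spectra + noise

pls = PLSRegression(n_components=2)
pls.fit(X[:40], conc[:40])                         # calibrate on model solutions
pred = pls.predict(X[40:])                         # predict held-out samples
print("R2 per analyte:", r2_score(conc[40:], pred, multioutput="raw_values"))
```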
Exploiting physical constraints for multi-spectral exo-planet detection
NASA Astrophysics Data System (ADS)
Thiébaut, Éric; Devaney, Nicholas; Langlois, Maud; Hanley, Kenneth
2016-07-01
We derive a physical model of the on-axis PSF for a high contrast imaging system such as GPI or SPHERE. This model is based on a multi-spectral Taylor series expansion of the diffraction pattern and predicts that the speckles should be a combination of spatial modes with deterministic chromatic magnification and weighting. We propose to remove most of the residuals by fitting this model on a set of images at multiple wavelengths and times. On simulated data, we demonstrate that our approach achieves very good speckle suppression without additional heuristic parameters. The residual speckles [1, 2] set the most serious limitation on the detection of exo-planets in high contrast coronagraphic images provided by instruments such as SPHERE [3] at the VLT, GPI [4, 5] at Gemini, or SCExAO [6] at Subaru. A number of post-processing methods have been proposed to remove as much as possible of the residual speckles while preserving the signal from the planets. These methods exploit the fact that the speckles and the planetary signal have different temporal and spectral behaviors. Some methods like LOCI [7] are based on angular differential imaging (ADI) [8], spectral differential imaging (SDI) [9, 10], or a combination of ADI and SDI [11]. Instead of working on image differences, we propose to tackle exo-planet detection as an inverse problem in which a model of the residual speckles is fit to the set of multi-spectral images and, possibly, multiple exposures. In order to reduce the number of degrees of freedom, we impose specific constraints on the spatio-spectral distribution of stellar speckles. These constraints are deduced from a multi-spectral Taylor series expansion of the diffraction pattern for an on-axis source, which implies that the speckles are a combination of spatial modes with deterministic chromatic magnification and weighting. Using simulated data, the efficiency of speckle removal by fitting the proposed multi-spectral model is compared to the result of using an approximation based on the singular value decomposition of the rescaled images. We show how the difficult problem of fitting a bilinear model can be solved in practice. The results are promising for further developments, including application to real data and joint planet detection in multi-variate data (multi-spectral and multiple-exposure images).
Earth Science Computational Architecture for Multi-disciplinary Investigations
NASA Astrophysics Data System (ADS)
Parker, J. W.; Blom, R.; Gurrola, E.; Katz, D.; Lyzenga, G.; Norton, C.
2005-12-01
Understanding the processes underlying Earth's deformation and mass transport requires a non-traditional, integrated, interdisciplinary approach dependent on multiple space- and ground-based data sets, modeling, and computational tools. Currently, the details of geophysical data acquisition, analysis, and modeling largely limit research to discipline domain experts. Interdisciplinary research requires a new computational architecture that is optimized to perform complex data processing of multiple solid Earth science data types in a user-friendly environment. A web-based computational framework is being developed and integrated with applications for automatic interferometric radar processing, models for high-resolution deformation and gravity, forward models of viscoelastic mass loading over short wavelengths and complex time histories, forward-inverse codes for characterizing surface loading-response over time scales of days to tens of thousands of years, and inversion of combined space magnetic and gravity fields to constrain deep crustal and mantle properties. This framework combines an adaptation of the QuakeSim distributed services methodology with the Pyre framework for multiphysics development. The system uses a three-tier architecture, with a middle-tier server that manages user projects, available resources, and security, ensuring scalability to very large networks of collaborators. Users log into a web page and have a personal project area, persistently maintained between connections, for each application. Upon selection of an application and host from a list of available entities, inputs may be uploaded or constructed from web forms and available data archives, including gravity, GPS, and imaging radar data. The user is notified of job completion and directed to results posted via URLs. Interdisciplinary work is supported through easy availability of all applications via common browsers, application tutorials and reference guides, and worked examples with visual responses. At the platform level, multiphysics application development and workflow are available in the enriched environment of the Pyre framework. Advantages of combining separate expert domains include: multiple application components efficiently interact through Python shared libraries, investigators may nimbly swap models and try new parameter values, and a rich array of common tools is inherent in the Pyre system. The first four specific investigations to use this framework are: Gulf Coast subsidence, understanding the partitioning between compaction, subsidence and growth faulting; gravity and deformation of a layered spherical earth model due to large earthquakes; the rift setting of Lake Vostok, Antarctica; and global ice mass changes.
Multiscale sagebrush rangeland habitat modeling in southwest Wyoming
Homer, Collin G.; Aldridge, Cameron L.; Meyer, Debra K.; Coan, Michael J.; Bowen, Zachary H.
2009-01-01
Sagebrush-steppe ecosystems in North America have experienced dramatic elimination and degradation since European settlement. As a result, sagebrush-steppe dependent species have experienced drastic range contractions and population declines. Coordinated ecosystem-wide research, integrated with monitoring and management activities, would improve the ability to maintain existing sagebrush habitats. However, current data only identify resource availability locally, with rigorous spatial tools and models that accurately model and map sagebrush habitats over large areas still unavailable. Here we report on an effort to produce a rigorous large-area sagebrush-habitat classification and inventory with statistically validated products and estimates of precision in the State of Wyoming. This research employs a combination of significant new tools, including (1) modeling sagebrush rangeland as a series of independent continuous field components that can be combined and customized by any user at multiple spatial scales; (2) collecting ground-measured plot data on 2.4-meter imagery in the same season the satellite imagery is acquired; (3) effective modeling of ground-measured data on 2.4-meter imagery to maximize subsequent extrapolation; (4) acquiring multiple seasons (spring, summer, and fall) of an additional two spatial scales of imagery (30 meter and 56 meter) for optimal large-area modeling; (5) using regression tree classification technology that optimizes data mining of multiple image dates, ratios, and bands with ancillary data to extrapolate ground training data to coarser resolution sensors; and (6) employing rigorous accuracy assessment of model predictions to enable users to understand the inherent uncertainties. First-phase results modeled eight rangeland components (four primary targets and four secondary targets) as continuous field predictions. The primary targets included percent bare ground, percent herbaceousness, percent shrub, and percent litter. The four secondary targets included percent sagebrush (Artemisia spp.), percent big sagebrush (Artemisia tridentata), percent Wyoming sagebrush (Artemisia tridentata wyomingensis), and sagebrush height (centimeters). Results were validated by an independent accuracy assessment with root mean square error (RMSE) values ranging from 6.38 percent for bare ground to 2.99 percent for sagebrush at the QuickBird scale and RMSE values ranging from 12.07 percent for bare ground to 6.34 percent for sagebrush at the full Landsat scale. Subsequent project phases are now in progress, with plans to deliver products that improve accuracies of existing components, model new components, complete models over larger areas, track changes over time (from 1988 to 2007), and ultimately model wildlife population trends against these changes. We believe these results offer significant improvement in sagebrush rangeland quantification at multiple scales and offer users products that have been rigorously validated.
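The regression-tree extrapolation step (item 5 in the list above) can be sketched in Python as follows; the predictors, response and RMSE check are synthetic stand-ins for the imagery and plot data, and the specific estimator is our choice for illustration, not necessarily the study's tooling.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)

# toy training set: image-derived predictors (bands, ratios, ancillary data)
# versus ground-measured percent shrub cover at plot locations
X = rng.uniform(0, 1, (300, 6))                     # 6 invented predictors
y = 40 * X[:, 0] - 15 * X[:, 1] + rng.normal(0, 3, 300)
y = np.clip(y, 0, 100)                              # percent cover is bounded

tree = DecisionTreeRegressor(max_depth=6).fit(X[:200], y[:200])
pred = tree.predict(X[200:])                        # extrapolate to new pixels
rmse = mean_squared_error(y[200:], pred) ** 0.5     # accuracy as in the abstract
print(f"RMSE: {rmse:.2f} percent cover")
```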
Suzuki, Hideaki; Tabata, Takahisa; Koizumi, Hiroki; Hohchi, Nobusuke; Takeuchi, Shoko; Kitamura, Takuro; Fujino, Yoshihisa; Ohbuchi, Toyoaki
2014-12-01
This study aimed to create a multiple regression model for predicting hearing outcomes of idiopathic sudden sensorineural hearing loss (ISSNHL). The participants were 205 consecutive patients (205 ears) with ISSNHL (hearing level ≥ 40 dB, interval between onset and treatment ≤ 30 days). They received systemic steroid administration combined with intratympanic steroid injection. Data were examined by simple and multiple regression analyses. Three hearing indices (percentage hearing improvement, hearing gain, and posttreatment hearing level [HLpost]) and 7 prognostic factors (age, days from onset to treatment, initial hearing level, initial hearing level at low frequencies, initial hearing level at high frequencies, presence of vertigo, and contralateral hearing level) were included in the multiple regression analysis as dependent and explanatory variables, respectively. In the simple regression analysis, the percentage hearing improvement, hearing gain, and HLpost showed significant correlation with 2, 5, and 6 of the 7 prognostic factors, respectively. The multiple correlation coefficients were 0.396, 0.503, and 0.714 for the percentage hearing improvement, hearing gain, and HLpost, respectively. Predicted values of HLpost calculated from the multiple regression equation were reliable with 70% probability for a 40-dB-wide prediction interval. Prediction of HLpost by the multiple regression model may be useful for estimating the hearing prognosis of ISSNHL. © The Author(s) 2014.
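A minimal statsmodels sketch of fitting such a multiple regression and extracting a prediction interval follows; the patients are simulated and only two of the seven prognostic factors are used, purely to show where the multiple correlation coefficient and the 70% prediction interval come from.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# toy analogue: posttreatment hearing level from age and initial hearing level
n = 205
age = rng.uniform(20, 80, n)
initial = rng.uniform(40, 110, n)
hl_post = 5 + 0.2 * age + 0.5 * initial + rng.normal(0, 12, n)

X = sm.add_constant(np.column_stack([age, initial]))
fit = sm.OLS(hl_post, X).fit()
print("multiple correlation coefficient:", np.sqrt(fit.rsquared))

# 70% prediction intervals (alpha = 0.30) for two hypothetical new patients
new = sm.add_constant(np.array([[60.0, 70.0], [30.0, 90.0]]), has_constant="add")
pred = fit.get_prediction(new)
print(pred.summary_frame(alpha=0.30)[["mean", "obs_ci_lower", "obs_ci_upper"]])
```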
Fieuws, Steffen; Willems, Guy; Larsen-Tangmose, Sara; Lynnerup, Niels; Boldsen, Jesper; Thevissen, Patrick
2016-03-01
When an estimate of age is needed, multiple indicators are typically present, as found in skeletal or dental information. There exists a vast literature on approaches to estimating age from such multivariate data. Application of Bayes' rule has been proposed to overcome drawbacks of classical regression models, but becomes less trivial as soon as the number of indicators increases. Each of the age indicators can lead to a different point estimate ("the most plausible value for age") and a different prediction interval ("the range of possible values"). The major challenge in the combination of multiple indicators is not the calculation of a combined point estimate for age but the construction of an appropriate prediction interval. Ignoring the correlation between the age indicators results in intervals that are too small. Boldsen et al. (2002) presented an ad-hoc procedure to construct an approximate confidence interval without the need to model the multivariate correlation structure between the indicators. The aim of the present paper is to bring this pragmatic approach to attention and to evaluate its performance in a practical setting. This is all the more needed since recent publications ignore the need for interval estimation. To illustrate and evaluate the method, the Köhler et al. (1995) third molar scores are used to estimate age in a dataset of 3200 male subjects in the juvenile age range.
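To see why ignoring correlation shrinks the interval, consider this small numeric sketch (all values invented): two indicator-specific age estimates are combined by inverse-variance weighting, and the variance of the combination is computed with and without the correlation term.

```python
import numpy as np

# two age indicators give point estimates with standard errors,
# and the indicator errors are positively correlated (rho)
est = np.array([19.0, 21.0])      # years, per-indicator point estimates
se = np.array([2.0, 2.5])
rho = 0.7                         # correlation between indicator errors

w = 1 / se**2
w = w / w.sum()                   # inverse-variance weights
combined = w @ est                # combined point estimate (the easy part)

var_naive = (w**2 * se**2).sum()                              # assumes independence
var_corr = var_naive + 2 * w[0] * w[1] * rho * se[0] * se[1]  # adds covariance
for name, v in [("independence assumed", var_naive), ("correlation included", var_corr)]:
    half = 1.96 * np.sqrt(v)
    print(f"{name}: {combined:.1f} +/- {half:.1f} years")
# the interval built under independence is visibly too narrow
```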
Liu, Renzhi; Liu, Jing; Zhang, Zhijiao; Borthwick, Alistair; Zhang, Ke
2015-12-02
Over the past half century, a surprising number of major pollution incidents have occurred due to tailings dam failures. Most previous studies of such incidents comprised forensic analyses of environmental impacts after a tailings dam failure, with few considering the combined pollution risk before incidents occur at a watershed scale. We therefore propose Watershed-scale Tailings-pond Pollution Risk Analysis (WTPRA), designed for multiple mine tailings ponds and stemming from previous watershed-scale accidental pollution risk assessments. Transferred and combined risk is embedded using risk rankings of multiple "source-pathway-target" routes in the WTPRA. The previous approach is extended using multi-criteria analysis, dam failure models, and instantaneous water quality models, which are adapted for application to multiple tailings ponds. The study area covers the basin of Guanting Reservoir (the largest backup drinking water source for Beijing) in Zhangjiakou City, where many mine tailings ponds are located. The resultant map shows that risk is higher downstream of Guanting Reservoir and in its two tributary basins (i.e., Qingshui River and Longyang River). Conversely, risk is lower in the midstream and upstream reaches. The analysis also indicates that the most hazardous mine tailings ponds are located in Chongli and Xuanhua, and that Guanting Reservoir is the most vulnerable receptor. Sensitivity and uncertainty analyses are performed to validate the robustness of the WTPRA method. PMID:26633450
A subset of skin tumor modifier loci determines survival time of tumor-bearing mice
Nagase, Hiroki; Mao, Jian-Hua; Balmain, Allan
1999-01-01
Studies of mouse models of human cancer have established the existence of multiple tumor modifiers that influence parameters of cancer susceptibility such as tumor multiplicity, tumor size, or the probability of malignant progression. We have carried out an analysis of skin tumor susceptibility in interspecific Mus musculus/Mus spretus hybrid mice and have identified another seven loci showing either significant (six loci) or suggestive (one locus) linkage to tumor susceptibility or resistance. A specific search was carried out for skin tumor modifier loci associated with time of survival after development of a malignant tumor. A combination of resistance alleles at three markers [D6Mit15 (Skts12), D7Mit12 (Skts2), and D17Mit7 (Skts10)], all of which are close to or the same as loci associated with carcinoma incidence and/or papilloma multiplicity, is significantly associated with increased survival of mice with carcinomas, whereas the reverse combination of susceptibility alleles is significantly linked to early mortality caused by rapid carcinoma growth (χ² = 25.22; P = 5.1 × 10⁻⁸). These data indicate that host genetic factors may be used to predict carcinoma growth rate and/or survival of individual backcross mice exposed to the same carcinogenic stimulus and suggest that mouse models may provide an approach to the identification of genetic modifiers of cancer survival in humans. PMID:10611333
NASA Astrophysics Data System (ADS)
Wu, Y.; Luo, Z.; Zhou, H.; Xu, C.
2017-12-01
Regional gravity field recovery is of great importance for understanding ocean circulation and currents in oceanography and for investigating the structure of the lithosphere in geophysics. Under the framework of the remove-compute-restore methodology (RCR), a regional approach using spherical radial basis functions (SRBFs) is set up for gravity field determination using the GOCE (Gravity Field and Steady-State Ocean Circulation Explorer) gravity gradient tensor together with heterogeneous gravimetry and altimetry measurements. The added value that GOCE data bring to the regional model is validated and quantified. Numerical experiments in a western European region show that the effects introduced by GOCE data appear as long-wavelength patterns on the centimeter scale in terms of quasi-geoid heights, which may help to highlight and reduce the remaining long-wavelength errors and biases in ground-based data and improve the regional model. The accuracy of the gravimetric quasi-geoid computed with a combination of the three diagonal components is improved by 0.6 cm (0.5 cm) in the Netherlands (Belgium), compared to that derived from gravimetry and altimetry data alone, when GOCO05s is used as the reference model. Performances of the different diagonal components and their combinations are not identical; the solution using vertical gradients shows the highest quality when a single component is used. Incorporating multiple components further improves the model, and the combination of all three components shows the best fit to GPS/leveling data. Moreover, the contributions of the different components are heterogeneous in spatial coverage and magnitude, although similar structures occur in the spatial domain; the vertical component has the most significant effect when a single component is applied, combining multiple components magnifies these effects, and incorporating all three components has the most prominent effect. This work is supported by the State Scholarship Fund of the Chinese Scholarship Council (201306270014), the China Postdoctoral Science Foundation (No. 2016M602301), and the National Natural Science Foundation of China (No. 41374023).
Combined monitoring, decision and control model for the human operator in a command and control desk
NASA Technical Reports Server (NTRS)
Muralidharan, R.; Baron, S.
1978-01-01
A report is given on ongoing efforts to model the human operator in the context of the task during the enroute/return phases of the ground-based control of multiple flights of remotely piloted vehicles (RPVs). The approach employed here uses models that have their analytical bases in control theory and in statistical estimation and decision theory. In particular, it draws heavily on the models and concepts of the optimal control model (OCM) of the human operator. The OCM is being extended into a combined monitoring, decision, and control model (DEMON) of the human operator by infusing decision-theoretic notions that make it suitable for application to problems in which human control actions are infrequent and in which monitoring and decision-making are the operator's main activities. Some results obtained with a specialized version of DEMON for the RPV control problem are included.
SCEC UCVM - Unified California Velocity Model
NASA Astrophysics Data System (ADS)
Small, P.; Maechling, P. J.; Jordan, T. H.; Ely, G. P.; Taborda, R.
2011-12-01
The SCEC Unified California Velocity Model (UCVM) is a software framework for a state-wide California velocity model. UCVM provides researchers with two new capabilities: (1) the ability to query Vp, Vs, and density from any standard regional California velocity model through a uniform interface, and (2) the ability to combine multiple velocity models into a single state-wide model. These features are crucial in order to support large-scale ground motion simulations and to facilitate improvements in the underlying velocity models. UCVM provides integrated support for the following standard velocity models: SCEC CVM-H, SCEC CVM-S and the CVM-SI variant, USGS Bay Area (cencalvm), Lin-Thurber Statewide, and other smaller regional models. New models may be easily incorporated as they become available. Two query interfaces are provided: a Linux command line program, and a C application programming interface (API). The C API query interface is simple, fully independent of any specific model, and MPI-friendly. Input coordinates are geographic longitude/latitude and the vertical coordinate may be either depth or elevation. Output parameters include Vp, Vs, and density along with the identity of the model from which these material properties were obtained. In addition to access to the standard models, UCVM also includes a high resolution statewide digital elevation model, Vs30 map, and an optional near-surface geo-technical layer (GTL) based on Ely's Vs30-derived GTL. The elevation and Vs30 information is bundled along with the returned Vp,Vs velocities and density, so that all relevant information is retrieved with a single query. When the GTL is enabled, it is blended with the underlying crustal velocity models along a configurable transition depth range with an interpolation function. Multiple, possibly overlapping, regional velocity models may be combined together into a single state-wide model. This is accomplished by tiling the regional models on top of one another in three dimensions in a researcher-specified order. No reconciliation is performed within overlapping model regions, although a post-processing tool is provided to perform a simple numerical smoothing. Lastly, a 3D region from a combined model may be extracted and exported into a CVM-Etree. This etree may then be queried by UCVM much like a standard velocity model but with less overhead and generally better performance due to the efficiency of the etree data structure.
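The tiling idea described above (the first matching regional model in a researcher-specified priority order answers the query) can be sketched in Python; the class, bounding boxes and material values below are hypothetical and far simpler than UCVM's actual C interface.

```python
class BoxModel:
    """Hypothetical regional model covering a lon/lat box with fixed properties."""
    def __init__(self, name, lon, lat, vp, vs, rho):
        self.name, self.lon, self.lat = name, lon, lat
        self.props = (vp, vs, rho)

    def query(self, pt):
        (lo0, lo1), (la0, la1) = self.lon, self.lat
        if lo0 <= pt[0] <= lo1 and la0 <= pt[1] <= la1:
            return self.props + (self.name,)
        return None

def tiled_query(models, pt):
    """Return (Vp, Vs, density, source model) from the first model covering pt."""
    for m in models:                 # priority = list order, as in UCVM tiling
        hit = m.query(pt)
        if hit:
            return hit
    return None                      # a background model could be consulted here

cvm_h = BoxModel("CVM-H", (-121.0, -113.0), (30.0, 36.5), 6200.0, 3500.0, 2700.0)
bay = BoxModel("cencalvm", (-124.0, -120.0), (35.0, 39.5), 5800.0, 3300.0, 2650.0)
print(tiled_query([bay, cvm_h], (-122.3, 37.8)))  # Bay Area model wins by priority
```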
Forward modeling transient brightenings and microflares around an active region observed with Hi-C
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobelski, Adam R.; McKenzie, David E., E-mail: kobelski@solar.physics.montana.edu
Small-scale flare-like brightenings around active regions are among the smallest and most fundamental of energetic transient events in the corona, providing a testbed for models of heating and active region dynamics. In a previous study, we modeled a large collection of these microflares observed with Hinode/X-Ray Telescope (XRT) using EBTEL and found that they required multiple heating events, but could not distinguish between multiple heating events on a single strand, or multiple strands each experiencing a single heating event. We present here a similar study, but with extreme-ultraviolet data of Active Region 11520 from the High Resolution Coronal Imager (Hi-C) sounding rocket. Hi-C provides an order of magnitude improvement to the spatial resolution of XRT, and a cooler temperature sensitivity, which combine to provide significant improvements to our ability to detect and model microflare activity around active regions. We have found that at the spatial resolution of Hi-C (≈0.3″), the events occur much more frequently than expected (57 events detected, only 1 or 2 expected), and are most likely made from strands of the order of 100 km wide, each of which is impulsively heated with multiple heating events. These findings tend to support bursty reconnection as the cause of the energy release responsible for the brightenings.
Linked independent component analysis for multimodal data fusion.
Groves, Adrian R; Beckmann, Christian F; Smith, Steve M; Woolrich, Mark W
2011-02-01
In recent years, neuroimaging studies have increasingly been acquiring multiple modalities of data and searching for task- or disease-related changes in each modality separately. A major challenge in analysis is to find systematic approaches for fusing these differing data types together to automatically find patterns of related changes across multiple modalities, when they exist. Independent Component Analysis (ICA) is a popular unsupervised learning method that can be used to find the modes of variation in neuroimaging data across a group of subjects. When multimodal data is acquired for the subjects, ICA is typically performed separately on each modality, leading to incompatible decompositions across modalities. Using a modular Bayesian framework, we develop a novel "Linked ICA" model for simultaneously modelling and discovering common features across multiple modalities, which can potentially have completely different units, signal- and contrast-to-noise ratios, voxel counts, spatial smoothnesses and intensity distributions. Furthermore, this general model can be configured to allow tensor ICA or spatially-concatenated ICA decompositions, or a combination of both at the same time. Linked ICA automatically determines the optimal weighting of each modality, and also can detect single-modality structured components when present. This is a fully probabilistic approach, implemented using Variational Bayes. We evaluate the method on simulated multimodal data sets, as well as on a real data set of Alzheimer's patients and age-matched controls that combines two very different types of structural MRI data: morphological data (grey matter density) and diffusion data (fractional anisotropy, mean diffusivity, and tensor mode). Copyright © 2010 Elsevier Inc. All rights reserved.
Kabeshova, A; Annweiler, C; Fantino, B; Philip, T; Gromov, V A; Launay, C P; Beauchet, O
2014-06-01
Regression tree (RT) analyses are particularly well suited to exploring the risk of recurrent falling according to various combinations of fall risk factors, compared to logistic regression models. The aims of this study were (1) to determine which combinations of fall risk factors were associated with the occurrence of recurrent falls in older community-dwellers, and (2) to compare the efficacy of RT and multiple logistic regression models for the identification of recurrent falls. A total of 1,760 community-dwelling volunteers (mean age ± standard deviation, 71.0 ± 5.1 years; 49.4 % female) were recruited prospectively in this cross-sectional study. Age, gender, polypharmacy, use of psychoactive drugs, fear of falling (FOF), cognitive disorders and sad mood were recorded. In addition, the history of falls within the past year was recorded using a standardized questionnaire. Among the 1,760 participants, 19.7 % (n = 346) were recurrent fallers. The RT identified 14 node groups and 8 end nodes, with FOF as the first major split. Among participants with FOF, those who had sad mood and polypharmacy formed the end node with the greatest OR for recurrent falls (OR = 6.06, p < 0.001). Among participants without FOF, those who were male and not sad had the lowest OR for recurrent falls (OR = 0.25, p < 0.001). The RT correctly classified 1,356 of 1,414 non-recurrent fallers (specificity = 95.6 %) and 65 of 346 recurrent fallers (sensitivity = 18.8 %); the overall classification accuracy was 81.0 %. The multiple logistic regression correctly classified 1,372 of 1,414 non-recurrent fallers (specificity = 97.0 %) and 61 of 346 recurrent fallers (sensitivity = 17.6 %); the overall classification accuracy was 81.4 %. Our results show that RT may identify specific combinations of risk factors for recurrent falls, the combination most associated with recurrent falls involving FOF, sad mood and polypharmacy. FOF emerged as the risk factor most strongly associated with recurrent falls. In addition, RT and multiple logistic regression were not sensitive enough to identify the majority of recurrent fallers, but appeared efficient in detecting individuals not at risk of recurrent falls.
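The comparison above can be mimicked with a short scikit-learn sketch contrasting the two classifiers on simulated fall data; the prevalence, predictors and coefficients are invented, and the evaluation is in-sample, purely to show how the sensitivity and specificity figures are computed.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(5)

# toy cohort: fear of falling, sad mood, polypharmacy, psychoactive drugs
n = 1760
X = rng.integers(0, 2, (n, 4))
logit = -2.2 + 1.1 * X[:, 0] + 0.8 * X[:, 1] + 0.6 * X[:, 2]  # invented effects
y = rng.random(n) < 1 / (1 + np.exp(-logit))                  # recurrent faller?

for name, clf in [("RT", DecisionTreeClassifier(max_depth=3)),
                  ("LR", LogisticRegression())]:
    pred = clf.fit(X, y).predict(X)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    print(f"{name}: sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f}")
```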
Graphical function mapping as a new way to explore cause-and-effect chains
Evans, Mary Anne
2016-01-01
Graphical function mapping provides a simple method for improving communication within interdisciplinary research teams and between scientists and nonscientists. This article introduces graphical function mapping using two examples and discusses its usefulness. Function mapping projects the outcome of one function into another to show the combined effect. Using this mathematical property in a simpler, even cartoon-like, graphical way allows the rapid combination of multiple information sources (models, empirical data, expert judgment, and guesses) in an intuitive visual to promote further discussion, scenario development, and clear communication.
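A minimal Python sketch of the idea: the output of one response curve feeds the next, and the composed curve shows the combined effect along the chain. The example chain (nutrient load to algal cover to habitat) and all curve shapes are invented for illustration.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                   # render to file, no display needed
import matplotlib.pyplot as plt

# invented cause-and-effect chain: nutrient load -> algal cover -> habitat
load = np.linspace(0, 10, 100)          # driver (could come from a model)
algae = 100 * load / (2 + load)         # saturating response (empirical fit)
habitat = 100 * np.exp(-0.03 * algae)   # declining suitability (expert guess)

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
panels = [(load, algae, "nutrient load", "algal cover"),
          (algae, habitat, "algal cover", "habitat"),
          (load, habitat, "nutrient load", "habitat (combined)")]
for ax, (x, y, xlab, ylab) in zip(axes, panels):
    ax.plot(x, y)
    ax.set_xlabel(xlab)
    ax.set_ylabel(ylab)
fig.tight_layout()
fig.savefig("function_mapping.png")     # third panel is the mapped end-to-end effect
```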
Predicting the Toxicity of Adjuvant Breast Cancer Drug Combination Therapy
2012-09-01
diarrhea and interstitial lung disease/pneumonitis. From largest to smallest, our multiple dose (1,250 mg q24 h) model-predicted ratios of lapatinib... [...] a single agent for various tumor types (n = 2045), nausea (39%), diarrhea (39%) and vomiting (22%) were observed; other gastrointestinal events
2008-02-01
clinician to distinguish between the effects of treatment and the effects of disease. Several different prediction models for multiple organ failure...treatment protocols and allow a clinician to distinguish the effect of treatment from effect of disease. In this study, our model predicted in...TNF produces a decrease in protein C activation by down regulating the expression of endothelial cell protein C receptor and thrombomodulin, both of
Balistrieri, Laurie S.; Nimick, David A.; Mebane, Christopher A.
2012-01-01
Evaluating water quality and the health of aquatic organisms is challenging in systems with systematic diel (24 hour) or less predictable runoff-induced changes in water composition. To advance our understanding of how to evaluate environmental health in these dynamic systems, field studies of diel cycling were conducted in two streams (Silver Bow Creek and High Ore Creek) affected by historical mining activities in southwestern Montana. A combination of sampling and modeling tools were used to assess the toxicity of metals in these systems. Diffusive Gradients in Thin Films (DGT) samplers were deployed at multiple time intervals during diel sampling to confirm that DGT integrates time-varying concentrations of dissolved metals. Thermodynamic speciation calculations using site specific water compositions, including time-integrated dissolved metal concentrations determined from DGT, and a competitive, multiple-metal biotic ligand model incorporated into the Windemere Humic Aqueous Model Version 6.0 (WHAM VI) were used to determine the chemical speciation of dissolved metals and biotic ligands. The model results were combined with previously collected toxicity data on cutthroat trout to derive a relationship that predicts the relative survivability of these fish at a given site. This integrative approach may prove useful for assessing water quality and toxicity of metals to aquatic organisms in dynamic systems and evaluating whether potential changes in environmental health of aquatic systems are due to anthropogenic activities or natural variability.
Guaitoli, Giambattista; Raimondi, Francesco; Gilsbach, Bernd K.; Gómez-Llorente, Yacob; Deyaert, Egon; Renzi, Fabiana; Li, Xianting; Schaffner, Adam; Jagtap, Pravin Kumar Ankush; Boldt, Karsten; von Zweydorf, Felix; Gotthardt, Katja; Lorimer, Donald D.; Yue, Zhenyu; Burgin, Alex; Janjic, Nebojsa; Sattler, Michael; Versées, Wim; Ueffing, Marius; Ubarretxena-Belandia, Iban; Kortholt, Arjan; Gloeckner, Christian Johannes
2016-01-01
Leucine-rich repeat kinase 2 (LRRK2) is a large, multidomain protein containing two catalytic domains: a Ras of complex proteins (Roc) G-domain and a kinase domain. Mutations associated with familial and sporadic Parkinson’s disease (PD) have been identified in both catalytic domains, as well as in several of its multiple putative regulatory domains. Several of these mutations have been linked to increased kinase activity. Despite the role of LRRK2 in the pathogenesis of PD, little is known about its overall architecture and how PD-linked mutations alter its function and enzymatic activities. Here, we have modeled the 3D structure of dimeric, full-length LRRK2 by combining domain-based homology models with multiple experimental constraints provided by chemical cross-linking combined with mass spectrometry, negative-stain EM, and small-angle X-ray scattering. Our model reveals dimeric LRRK2 has a compact overall architecture with a tight, multidomain organization. Close contacts between the N-terminal ankyrin and C-terminal WD40 domains, and their proximity—together with the LRR domain—to the kinase domain suggest an intramolecular mechanism for LRRK2 kinase activity regulation. Overall, our studies provide, to our knowledge, the first structural framework for understanding the role of the different domains of full-length LRRK2 in the pathogenesis of PD. PMID:27357661
NASA Astrophysics Data System (ADS)
Xu, Lei; Chen, Nengcheng; Zhang, Xiang
2018-02-01
Drought is an extreme natural disaster that can lead to huge socioeconomic losses. Drought prediction months ahead is helpful for early drought warning and preparation. In this study, we developed a statistical model, two weighted dynamic models and a statistical-dynamic (hybrid) model for 1-6 month lead drought prediction in China. Specifically, the statistical component weights climate signals using support vector regression (SVR); the dynamic components are the ensemble mean (EM) and Bayesian model averaging (BMA) of the North American Multi-Model Ensemble (NMME) climate models; and the hybrid model combines the statistical and dynamic components by assigning weights based on their historical performances. The results indicate that the statistical and hybrid models give better rainfall predictions than the NMME-EM and NMME-BMA models, which have good predictability only in southern China. In the 2011 China winter-spring drought event, the statistical model predicted the spatial extent and severity of drought well nationwide, although the severity was underestimated in the mid-lower reaches of the Yangtze River (MLRYR) region. The NMME-EM and NMME-BMA models largely overestimated rainfall in northern and western China in the 2011 drought. In the 2013 China summer drought, the NMME-EM model forecasted the drought extent and severity in eastern China well, while the statistical and hybrid models falsely detected a negative precipitation anomaly (NPA) in some areas. The value of model ensembles, whether of multiple statistical approaches, multiple dynamic models or multiple hybrid models, is highlighted for drought prediction. These conclusions may be helpful for drought prediction and early drought warning in China.
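The performance-based weighting of statistical and dynamic components can be sketched as below; the inverse-RMSE weighting rule and all hindcast numbers are our assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def hindcast_skill(pred, obs):
    """Inverse-RMSE skill used to weight each component (an assumed rule)."""
    return 1.0 / np.sqrt(np.mean((pred - obs) ** 2))

# toy hindcasts: statistical (SVR-like) and dynamic (NMME-EM-like) rainfall
obs = np.array([80.0, 55.0, 95.0, 60.0, 70.0])
stat = np.array([75.0, 60.0, 90.0, 66.0, 72.0])
dyn = np.array([95.0, 40.0, 105.0, 52.0, 80.0])

w = np.array([hindcast_skill(stat, obs), hindcast_skill(dyn, obs)])
w /= w.sum()                            # normalize historical-performance weights
hybrid = w[0] * stat + w[1] * dyn       # performance-weighted hybrid forecast
print("weights:", w.round(2), "hybrid forecast:", hybrid.round(1))
```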
Optical-model abrasion cross sections for high-energy heavy ions
NASA Technical Reports Server (NTRS)
Townsend, L. W.
1981-01-01
Within the context of eikonal scattering theory, a generalized optical model potential approximation to the nucleus-nucleus multiple scattering series is used in an abrasion-ablation collision model to predict abrasion cross sections for relativistic projectile heavy ions. Unlike the optical limit of Glauber theory, which cannot be used for very light nuclei, the abrasion formalism is valid for any projectile-target combination at any incident kinetic energy for which eikonal scattering theory can be utilized. Results are compared with experimental data and with predictions from Glauber theory.
Modeling Quasi-Static and Fatigue-Driven Delamination Migration
NASA Technical Reports Server (NTRS)
De Carvalho, N. V.; Ratcliffe, J. G.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Tay, T. E.
2014-01-01
An approach was proposed and assessed for the high-fidelity modeling of progressive damage and failure in composite materials. It combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. Delamination, matrix cracking, and migration were captured using failure and migration criteria based on fracture mechanics. Quasi-static and fatigue loading were modeled within the same overall framework. The proposed methodology was illustrated by simulating the delamination migration test, showing good agreement with the available experimental data.
NASA Astrophysics Data System (ADS)
Shneider, M. N.; Voronin, A. A.; Zheltikov, A. M.
2011-11-01
The Goldman-Albus treatment of the action-potential dynamics is combined with a phenomenological description of molecular hyperpolarizabilities into a closed-form model of the action-potential-sensitive second-harmonic response of myelinated nerve fibers with nodes of Ranvier. This response is shown to be sensitive to nerve demyelination, thus enabling an optical diagnosis of various demyelinating diseases, including multiple sclerosis. The model is applied to examine the nonlinear-optical response of a three-neuron reverberating circuit—the basic element of short-term memory.
Tschechne, Stephan; Neumann, Heiko
2014-01-01
Visual structures in the environment are segmented into image regions, and these are combined into a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed processing network must be capable of making accessible both highly articulated changes in shape boundary and the very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model thus proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228
Ham, Jungoh; Costa, Carlotta; Sano, Renata; Lochmann, Timothy L.; Sennott, Erin M.; Patel, Neha U.; Dastur, Anahita; Gomez-Caraballo, Maria; Krytska, Kateryna; Hata, Aaron N.; Floros, Konstantinos V.; Hughes, Mark T.; Jakubik, Charles T.; Heisey, Daniel A.R.; Ferrell, Justin T.; Bristol, Molly L.; March, Ryan J.; Yates, Craig; Hicks, Mark A.; Nakajima, Wataru; Gowda, Madhu; Windle, Brad E.; Dozmorov, Mikhail G.; Garnett, Mathew J.; McDermott, Ultan; Harada, Hisashi; Taylor, Shirley M.; Morgan, Iain M.; Benes, Cyril H.; Engelman, Jeffrey A.; Mossé, Yael P.; Faber, Anthony C.
2016-01-01
Summary Fewer than half of children with high-risk neuroblastoma survive. Many of these tumors harbor high-level amplification of MYCN, which correlates with poor disease outcome. Using data from our large drug screen we predicted, and subsequently demonstrated, that MYCN-amplified neuroblastomas are sensitive to the BCL-2 inhibitor ABT-199. This sensitivity occurs in part through low anti-apoptotic BCL-xL expression, high pro-apoptotic NOXA expression, and paradoxical, MYCN-driven upregulation of NOXA. Screening for enhancers of ABT-199 sensitivity in MYCN-amplified neuroblastomas, we demonstrate that the Aurora Kinase A inhibitor MLN8237 combines with ABT-199 to induce widespread apoptosis. In diverse models of MYCN-amplified neuroblastoma, including a patient-derived xenograft model, this combination uniformly induced tumor shrinkage, and in multiple instances led to complete tumor regression. PMID:26859456
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Kaiguang; Valle, Denis; Popescu, Sorin
2013-05-15
Model specification remains challenging in spectroscopy of plant biochemistry, as exemplified by the availability of various spectral indices or band combinations for estimating the same biochemical. This lack of consensus in model choice across applications argues for a paradigm shift in hyperspectral methods to address model uncertainty and misspecification. We demonstrated one such method using Bayesian model averaging (BMA), which performs variable/band selection and quantifies the relative merits of many candidate models to synthesize a weighted average model with improved predictive performance. The utility of BMA was examined using a portfolio of 27 foliage spectral–chemical datasets representing over 80 species across the globe to estimate multiple biochemical properties, including nitrogen, hydrogen, carbon, cellulose, lignin, chlorophyll (a or b), carotenoid, polar and nonpolar extractives, leaf mass per area, and equivalent water thickness. We also compared BMA with partial least squares (PLS) and stepwise multiple regression (SMR). Results showed that all the biochemicals except carotenoid were accurately estimated from hyperspectral data with R2 values > 0.80.
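A toy sketch of the BMA idea, approximating posterior model probabilities with BIC weights over small band subsets; this is scikit-learn based and not the authors' implementation, which handles far larger model spaces.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

def bma_predict(X, y, X_new, max_terms=2):
    """Approximate Bayesian model averaging over small band subsets.

    Each candidate OLS model is weighted by exp(-BIC/2), a standard
    approximation to its posterior model probability. Feasible only
    for a modest number of candidate bands."""
    n, p = X.shape
    bics, preds = [], []
    for k in range(1, max_terms + 1):
        for bands in combinations(range(p), k):
            m = LinearRegression().fit(X[:, bands], y)
            resid = y - m.predict(X[:, bands])
            sigma2 = np.mean(resid ** 2)
            bic = n * np.log(sigma2) + (k + 1) * np.log(n)
            bics.append(bic)
            preds.append(m.predict(X_new[:, bands]))
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))
    w /= w.sum()   # normalized model weights
    return np.sum(w[:, None] * np.array(preds), axis=0)
```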
Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ
2012-01-01
In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
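The Gamma extension has a convenient closed form: integrating exp(-b*d) over a Gamma(k, theta) density of diffusivities d (the Laplace transform of the Gamma density) gives S(b)/S0 = (1 + b*theta)^(-k). A fitting sketch with hypothetical shell data follows; values and initial guesses are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gamma_decay(b, s0, k, theta):
    """Signal decay for a Gamma(k, theta) distribution of diffusivities."""
    return s0 * (1.0 + b * theta) ** (-k)

b = np.array([0.0, 1000.0, 2000.0, 3000.0])   # b-values in s/mm^2 (hypothetical shells)
signal = np.array([1.0, 0.55, 0.35, 0.25])    # hypothetical normalized measurements
popt, _ = curve_fit(gamma_decay, b, signal, p0=[1.0, 2.0, 5e-4])
print("mean diffusivity k*theta =", popt[1] * popt[2])
```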
Robust Real-Time Music Transcription with a Compositional Hierarchical Model.
Pesek, Matevž; Leonardis, Aleš; Marolt, Matija
2017-01-01
The paper presents a new compositional hierarchical model for robust music transcription. Its main features are unsupervised learning of a hierarchical representation of input data; transparency, which enables insights into the learned representation; and robustness and speed, which make it suitable for real-world and real-time use. The model consists of multiple layers, each composed of a number of parts. The hierarchical nature of the model corresponds well to hierarchical structures in music. The parts in lower layers correspond to low-level concepts (e.g. tone partials), while the parts in higher layers combine lower-level representations into more complex concepts (tones, chords). The layers are learned in an unsupervised manner from music signals. Parts in each layer are compositions of parts from previous layers, with statistical co-occurrence as the driving force of the learning process. In the paper, we present the model's structure and compare it to other hierarchical approaches in the field of music information retrieval. We evaluate the model's performance on multiple fundamental frequency estimation. Finally, we elaborate on extensions of the model towards other music information retrieval tasks.
NASA Astrophysics Data System (ADS)
Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari
2015-03-01
Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method, and the models utilized in this work are ARX-type (autoregressive with exogenous input), multiple-input multiple-output, polynomial models that were identified from experimental data obtained with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and maximum temperatures need to be controlled, and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
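ARX models of the kind used here can be identified by ordinary least squares; below is a minimal single-input sketch (the paper's models are multiple-input multiple-output and feed a GPC, which is not shown; variable names are hypothetical).

```python
import numpy as np

def fit_arx(y, u, na=2, nb=2):
    """Least-squares fit of a SISO ARX model:
    y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb]."""
    start = max(na, nb)
    rows = []
    for t in range(start, len(y)):
        # regressor: most recent outputs first, then most recent inputs
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
    Phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    return theta  # [a1..a_na, b1..b_nb]
```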
Assessing Women's Preferences and Preference Modeling for Breast Reconstruction Decision-Making.
Sun, Clement S; Cantor, Scott B; Reece, Gregory P; Crosby, Melissa A; Fingeret, Michelle C; Markey, Mia K
2014-03-01
Women considering breast reconstruction must make challenging trade-offs amongst issues that often conflict. It may be useful to quantify possible outcomes using a single summary measure to aid a breast cancer patient in choosing a form of breast reconstruction. In this study, we used multiattribute utility theory to combine multiple objectives into a summary value using nine different preference models. We elicited the preferences of 36 women, aged 32 or older with no history of breast cancer, for the patient-reported outcome measures of breast satisfaction, psychosocial well-being, chest well-being, abdominal well-being, and sexual well-being as measured by the BREAST-Q, in addition to time lost to reconstruction and out-of-pocket cost. Participants ranked hypothetical breast reconstruction outcomes. We examined each multiattribute utility preference model and assessed how often each model agreed with participants' rankings. The median amount of time required to assess preferences was 34 minutes. Agreement among the nine preference models with the participants ranged from 75.9% to 78.9%. None of the preference models performed significantly worse than the best-performing risk-averse multiplicative model. We hypothesize an average theoretical agreement of 94.6% for this model if participant error is included. There was a statistically significant positive correlation between model agreement and a more unequal distribution of weight across the seven attributes. We recommend the risk-averse multiplicative model for modeling the preferences of patients considering different forms of breast reconstruction because it agreed most often with the participants in this study.
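Assuming the risk-averse multiplicative model follows the standard Keeney-Raiffa multiplicative utility form (the study may parameterize it differently), a sketch of the evaluation step looks like this; attribute utilities u_i and weights k_i are hypothetical values in [0, 1].

```python
import numpy as np
from scipy.optimize import brentq

def master_constant(k):
    """Solve 1 + K = prod(1 + K*k_i) for the scaling constant K."""
    f = lambda K: np.prod(1.0 + K * np.asarray(k)) - (1.0 + K)
    if np.isclose(sum(k), 1.0):
        return 0.0                       # degenerate: reduces to additive form
    if sum(k) > 1.0:
        return brentq(f, -1.0 + 1e-9, -1e-9)   # K in (-1, 0)
    return brentq(f, 1e-9, 1e9)                # K in (0, inf)

def multiplicative_utility(u, k):
    """Keeney-Raiffa multiplicative multiattribute utility."""
    K = master_constant(k)
    if K == 0.0:
        return float(np.dot(k, u))
    return (np.prod(1.0 + K * np.asarray(k) * np.asarray(u)) - 1.0) / K

# Hypothetical outcome with seven attributes
print(multiplicative_utility(u=[0.9, 0.7, 0.8, 0.6, 0.7, 0.5, 0.4],
                             k=[0.25, 0.20, 0.15, 0.10, 0.10, 0.10, 0.05]))
```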
The great descriptor melting pot: mixing descriptors for the common good of QSAR models.
Tseng, Yufeng J; Hopfinger, Anton J; Esposito, Emilio Xavier
2012-01-01
The usefulness and utility of QSAR modeling depend heavily on the ability to estimate the values of molecular descriptors relevant to the endpoints of interest, followed by an optimized selection of descriptors to form the best QSAR models from a representative set of those endpoints. The performance of a QSAR model is directly related to its molecular descriptors. QSAR modeling, specifically model construction and optimization, has benefited from its ability to borrow from other unrelated fields, yet the molecular descriptors that form QSAR models have remained basically unchanged in both form and preferred usage. Many types of endpoints require multiple classes of descriptors (descriptors that encode 1D through multi-dimensional, 4D and above, content) to most fully capture the molecular features and interactions that contribute to the endpoint. The advantages of QSAR models constructed from multiple, and different, descriptor classes have been demonstrated in the exploration of markedly different, principally biological, systems and endpoints. Multiple examples of such QSAR applications using different descriptor sets are described and examined. The take-home message is that a major part of the future of QSAR analysis, and its application to modeling biological potency, ADME-Tox properties, and general use in virtual screening applications, as well as its expanding use into new fields for building QSPR models, lies in developing strategies that combine and use 1D through nD molecular descriptors.
Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo
2011-04-01
The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction by complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; FRGC is a large publicly available data set. Experimental results on the FRGC version 2.0 data sets have shown that the proposed method achieves an average 81.49% verification rate on 2-D face images under various environmental variations such as illumination changes, expression changes, and time elapses.
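A log-likelihood-ratio score fusion can be sketched by modeling each classifier's genuine and impostor score distributions; the Gaussian densities below are a simplifying assumption for illustration, not necessarily the paper's exact estimator.

```python
import numpy as np
from scipy.stats import norm

def llr_fusion(scores, gen_params, imp_params):
    """Sum per-classifier log-likelihood ratios.

    scores: one match score per classifier for a single comparison.
    gen_params / imp_params: (mean, std) of genuine and impostor score
    distributions, fit on training data for each classifier."""
    llr = 0.0
    for s, (mg, sg), (mi, si) in zip(scores, gen_params, imp_params):
        llr += norm.logpdf(s, mg, sg) - norm.logpdf(s, mi, si)
    return llr  # accept if llr exceeds a threshold chosen on validation data

# Hypothetical fusion of three classifiers' scores
print(llr_fusion([0.82, 0.75, 0.90],
                 gen_params=[(0.8, 0.1), (0.7, 0.1), (0.85, 0.08)],
                 imp_params=[(0.4, 0.15), (0.35, 0.15), (0.5, 0.12)]))
```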
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines the sensitivity of models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
2012-01-01
Background We explore the benefits of applying a new proportional hazard model to analyze survival of breast cancer patients. As a parametric model, the hypertabastic survival model offers a closer fit to experimental data than Cox regression, and furthermore provides explicit survival and hazard functions which can be used as additional tools in the survival analysis. In addition, one of our main concerns is utilization of multiple gene expression variables. Our analysis treats the important issue of interaction of different gene signatures in the survival analysis. Methods The hypertabastic proportional hazards model was applied in survival analysis of breast cancer patients. This model was compared, using statistical measures of goodness of fit, with models based on the semi-parametric Cox proportional hazards model and the parametric log-logistic and Weibull models. The explicit functions for hazard and survival were then used to analyze the dynamic behavior of hazard and survival functions. Results The hypertabastic model provided the best fit among all the models considered. Use of multiple gene expression variables also provided a considerable improvement in the goodness of fit of the model, as compared to use of only one. By utilizing the explicit survival and hazard functions provided by the model, we were able to determine the magnitude of the maximum rate of increase in hazard, and the maximum rate of decrease in survival, as well as the times when these occurred. We explore the influence of each gene expression variable on these extrema. Furthermore, in the cases of continuous gene expression variables, represented by a measure of correlation, we were able to investigate the dynamics with respect to changes in gene expression. Conclusions We observed that use of three different gene signatures in the model provided a greater combined effect and allowed us to assess the relative importance of each in determination of outcome in this data set. These results point to the potential to combine gene signatures to a greater effect in cases where each gene signature represents some distinct aspect of the cancer biology. Furthermore we conclude that the hypertabastic survival models can be an effective survival analysis tool for breast cancer patients. PMID:23241496
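The hypertabastic distribution is not implemented in common Python libraries, but the model-comparison step (choosing among parametric survival fits by goodness of fit) can be sketched with the Weibull and log-logistic fitters from lifelines, comparing AIC; durations and censoring flags below are hypothetical, and a recent lifelines version exposing `AIC_` is assumed.

```python
import numpy as np
from lifelines import WeibullFitter, LogLogisticFitter

T = np.array([5.0, 8.0, 12.0, 20.0, 33.0, 40.0])  # hypothetical survival times (months)
E = np.array([1, 1, 0, 1, 0, 1])                  # 1 = event observed, 0 = censored

for fitter in (WeibullFitter(), LogLogisticFitter()):
    fitter.fit(T, event_observed=E)
    print(type(fitter).__name__, "AIC =", fitter.AIC_)  # lower AIC = better fit
```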
A Model for the Design of Puzzle-Based Games Including Virtual and Physical Objects
ERIC Educational Resources Information Center
Melero, Javier; Hernandez-Leo, Davinia
2014-01-01
Multiple evidences in the Technology-Enhanced Learning domain indicate that Game-Based Learning can lead to positive effects in students' performance and motivation. Educational games can be completely virtual or can combine the use of physical objects or spaces in the real world. However, the potential effectiveness of these approaches…
ERIC Educational Resources Information Center
Topbas, Seyhun; Unal, Ozlem
2010-01-01
A single-subject alternating treatment design in combination with a staggered multiple baseline model across subjects was implemented with two 6:0 year-old girls, monozygotic twins, who were referred to a university clinic for evaluation and treatment. The treatment programme was structured according to variants of "minimal pair contrast…
Sources of carbonaceous PM2.5 were quantified in downtown Cleveland, OH and Chippewa Lake, OH located ~40 miles southwest of Cleveland during the Cleveland Multiple Air Pollutant Study (CMAPS). PM2.5 filter samples were collected daily during July-August 200...
Spatio-Temporal Data Model for Integrating Evolving Nation-Level Datasets
NASA Astrophysics Data System (ADS)
Sorokine, A.; Stewart, R. N.
2017-10-01
The ability to easily combine data from diverse sources in a single analytical workflow is one of the greatest promises of Big Data technologies. However, such integration is often challenging, as datasets originate from different vendors, governments, and research communities, which results in multiple incompatibilities of data representation, format, and semantics. Semantic differences are the hardest to handle: different communities often use different attribute definitions and associate records with different sets of evolving geographic entities. Analysis of global socioeconomic variables across multiple datasets over prolonged time spans is often complicated by differences in how the boundaries and histories of countries or other geographic entities are represented. Here we propose an event-based data model for depicting and tracking the histories of evolving geographic units (countries, provinces, etc.) and their representations in disparate data. The model addresses the semantic challenge of preserving the identity of geographic entities over time by defining criteria for an entity's existence, a set of events that may affect its existence, and rules for mapping between different representations (datasets). The proposed model is used for maintaining an evolving compound database of global socioeconomic and environmental data harvested from multiple sources. A practical implementation of our model is demonstrated using a PostgreSQL object-relational database with temporal, geospatial, and NoSQL database extensions.
Diffusional correlations among multiple active sites in a single enzyme.
Echeverria, Carlos; Kapral, Raymond
2014-04-07
Simulations of the enzymatic dynamics of a model enzyme containing multiple substrate binding sites indicate the existence of diffusional correlations in the chemical reactivity of the active sites. A coarse-grain, particle-based, mesoscopic description of the system, comprising the enzyme, the substrate, the product and solvent, is constructed to study these effects. The reactive and non-reactive dynamics is followed using a hybrid scheme that combines molecular dynamics for the enzyme, substrate and product molecules with multiparticle collision dynamics for the solvent. It is found that the reactivity of an individual active site in the multiple-active-site enzyme is reduced substantially, and this effect is analyzed and attributed to diffusive competition for the substrate among the different active sites in the enzyme.
Understanding intratumor heterogeneity by combining genome analysis and mathematical modeling.
Niida, Atsushi; Nagayama, Satoshi; Miyano, Satoru; Mimori, Koshi
2018-04-01
Cancer is composed of multiple cell populations with different genomes. This phenomenon called intratumor heterogeneity (ITH) is supposed to be a fundamental cause of therapeutic failure. Therefore, its principle-level understanding is a clinically important issue. To achieve this goal, an interdisciplinary approach combining genome analysis and mathematical modeling is essential. For example, we have recently performed multiregion sequencing to unveil extensive ITH in colorectal cancer. Moreover, by employing mathematical modeling of cancer evolution, we demonstrated that it is possible that this ITH is generated by neutral evolution. In this review, we introduce recent advances in a research field related to ITH and also discuss strategies for exploiting novel findings on ITH in a clinical setting. © 2018 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.
SynergyFinder: a web application for analyzing drug combination dose-response matrix data.
Ianevski, Aleksandr; He, Liye; Aittokallio, Tero; Tang, Jing
2017-08-01
Rational design of drug combinations has become a promising strategy to tackle the drug sensitivity and resistance problem in cancer treatment. To systematically evaluate the pre-clinical significance of pairwise drug combinations, functional screening assays that probe combination effects in a dose-response matrix assay are commonly used. To facilitate the analysis of such drug combination experiments, we implemented a web application that uses key functions of the R package SynergyFinder, and provides not only the flexibility of using multiple synergy scoring models, but also a user-friendly interface for visualizing the drug combination landscapes in an interactive manner. The SynergyFinder web application is freely accessible at https://synergyfinder.fimm.fi; the R package and its source code are freely available at http://bioconductor.org/packages/release/bioc/html/synergyfinder.html. Contact: jing.tang@helsinki.fi. © The Author(s) 2017. Published by Oxford University Press.
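Of the synergy scoring models such tools support, Bliss independence is among the simplest; here is a minimal sketch for one dose pair (the fractional-inhibition inputs are hypothetical).

```python
def bliss_synergy(effect_a, effect_b, effect_combo):
    """Bliss independence synergy score for one dose pair.

    Effects are fractional inhibitions in [0, 1]. The expected combined
    effect under independence is a + b - a*b; positive scores indicate
    synergy, negative scores antagonism."""
    expected = effect_a + effect_b - effect_a * effect_b
    return effect_combo - expected

# Hypothetical corner of a dose-response matrix:
# 0.70 observed vs 0.58 expected -> score 0.12 (synergy)
print(bliss_synergy(0.30, 0.40, 0.70))
```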
Federated Tensor Factorization for Computational Phenotyping
Kim, Yejin; Sun, Jimeng; Yu, Hwanjo; Jiang, Xiaoqian
2017-01-01
Tensor factorization models offer an effective approach to convert massive electronic health records into meaningful clinical concepts (phenotypes) for data analysis. These models need a large amount of diverse samples to avoid population bias. An open challenge is how to derive phenotypes jointly across multiple hospitals, in which direct patient-level data sharing is not possible (e.g., due to institutional policies). In this paper, we developed a novel solution to enable federated tensor factorization for computational phenotyping without sharing patient-level data. We developed secure data harmonization and federated computation procedures based on alternating direction method of multipliers (ADMM). Using this method, the multiple hospitals iteratively update tensors and transfer secure summarized information to a central server, and the server aggregates the information to generate phenotypes. We demonstrated with real medical datasets that our method resembles the centralized training model (based on combined datasets) in terms of accuracy and phenotypes discovery while respecting privacy. PMID:29071165
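The paper's federated factorization builds on ADMM. As a much-simplified illustration of the consensus-ADMM mechanics it relies on, shown here for a federated least-squares problem rather than tensor factorization, each site shares only parameter estimates with the server, never row-level data; all names are hypothetical.

```python
import numpy as np

def federated_ls_admm(sites, rho=1.0, iters=50):
    """Consensus ADMM over sites holding private (X, y) pairs.

    Each site solves a local regularized least-squares subproblem; the
    server only sees local parameter vectors, which it averages into the
    consensus variable z."""
    p = sites[0][0].shape[1]
    z = np.zeros(p)
    W = [np.zeros(p) for _ in sites]   # local parameter estimates
    U = [np.zeros(p) for _ in sites]   # scaled dual variables
    for _ in range(iters):
        for k, (X, y) in enumerate(sites):
            A = X.T @ X + rho * np.eye(p)
            W[k] = np.linalg.solve(A, X.T @ y + rho * (z - U[k]))
        z = np.mean([w + u for w, u in zip(W, U)], axis=0)  # server aggregate
        for k in range(len(sites)):
            U[k] += W[k] - z
    return z
```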
Bakas, Spyridon; Zeng, Ke; Sotiras, Aristeidis; Rathore, Saima; Akbari, Hamed; Gaonkar, Bilwaj; Rozycki, Martin; Pati, Sarthak; Davatzikos, Christos
2016-01-01
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.
Applying the Extended Parallel Process Model to workplace safety messages.
Basil, Michael; Basil, Debra; Deshpande, Sameer; Lavack, Anne M
2013-01-01
The extended parallel process model (EPPM) proposes fear appeals are most effective when they combine threat and efficacy. Three studies conducted in the workplace safety context examine the use of various EPPM factors and their effects, especially multiplicative effects. Study 1 was a content analysis examining the use of EPPM factors in actual workplace safety messages. Study 2 experimentally tested these messages with 212 construction trainees. Study 3 replicated this experiment with 1,802 men across four English-speaking countries-Australia, Canada, the United Kingdom, and the United States. The results of these three studies (1) demonstrate the inconsistent use of EPPM components in real-world work safety communications, (2) support the necessity of self-efficacy for the effective use of threat, (3) show a multiplicative effect where communication effectiveness is maximized when all model components are present (severity, susceptibility, and efficacy), and (4) validate these findings with gory appeals across four English-speaking countries.
NASA Astrophysics Data System (ADS)
Chen, Hua-cai; Chen, Xing-dan; Lu, Yong-jun; Cao, Zhi-qiang
2006-01-01
Near infrared (NIR) reflectance spectroscopy was used to develop a fast determination method for total ginsenosides in ginseng (Panax ginseng) powder. The spectra were analyzed with the multiplicative signal correction (MSC) correlation method. The spectral regions best correlated with total ginsenosides content were 1660~1880 nm and 2230~2380 nm. NIR calibration models for ginsenosides were built with multiple linear regression (MLR), principal component regression (PCR), and partial least squares (PLS) regression, respectively. The results showed that the best calibration model was built with PLS combined with MSC over the optimal spectral region. The correlation coefficient and the root mean square error of calibration (RMSEC) of the best calibration model were 0.98 and 0.15%, respectively. The optimal spectral region for calibration was 1204~2014 nm. The results suggest that rapid NIR determination of total ginsenosides content in ginseng powder is feasible.
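A sketch of the MSC-plus-PLS calibration pipeline using scikit-learn; the MSC implementation regresses each spectrum on the mean spectrum, a standard formulation that may differ in detail from the authors' method.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def msc(spectra, reference=None):
    """Multiplicative signal correction: regress each spectrum on a
    reference (here the mean spectrum) and remove slope and offset."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)
        corrected[i] = (s - intercept) / slope
    return corrected

# X: (n_samples, n_wavelengths) absorbances over the chosen region;
# y: total ginsenosides content (%). Both hypothetical.
# X_msc = msc(X)
# model = PLSRegression(n_components=8).fit(X_msc, y)
```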
Specificity, cross-talk and adaptation in Interferon signaling
NASA Astrophysics Data System (ADS)
Zilman, Anton
The innate immune system is the first line of defense of higher organisms against pathogens. It coordinates the behavior of millions of cells of multiple types through numerous signaling molecules. This talk focuses on the signaling specificity of a major class of signaling molecules, Type I Interferons, which are also used therapeutically in the treatment of a number of diseases, such as hepatitis C, multiple sclerosis, and some cancers. Puzzlingly, different Interferons act through the same cell surface receptor but have different effects on the target cells. They also exhibit a strange pattern of temporal cross-talk resulting in a serious clinical problem: loss of response to Interferon therapy. We combined mathematical modeling with quantitative experiments to develop a quantitative model of specificity and adaptation in the Interferon signaling pathway. The model resolves several outstanding experimental puzzles and directly bears on the clinical use of Type I Interferons in the treatment of viral hepatitis and other diseases.
Miller, Brian W.; Frid, Leonardo; Chang, Tony; Piekielek, N. B.; Hansen, Andrew J.; Morisette, Jeffrey T.
2015-01-01
State-and-transition simulation models (STSMs) are known for their ability to explore the combined effects of multiple disturbances, ecological dynamics, and management actions on vegetation. However, integrating the additional impacts of climate change into STSMs remains a challenge. We address this challenge by combining an STSM with species distribution modeling (SDM). SDMs estimate the probability of occurrence of a given species based on observed presence and absence locations as well as environmental and climatic covariates. Thus, in order to account for changes in habitat suitability due to climate change, we used SDM to generate continuous surfaces of species occurrence probabilities. These data were imported into ST-Sim, an STSM platform, where they dictated the probability of each cell transitioning between alternate potential vegetation types at each time step. The STSM was parameterized to capture additional processes of vegetation growth and disturbance that are relevant to a keystone species in the Greater Yellowstone Ecosystem—whitebark pine (Pinus albicaulis). We compared historical model runs against historical observations of whitebark pine and a key disturbance agent (mountain pine beetle, Dendroctonus ponderosae), and then projected the simulation into the future. Using this combination of correlative and stochastic simulation models, we were able to reproduce historical observations and identify key data gaps. Results indicated that SDMs and STSMs are complementary tools, and combining them is an effective way to account for the anticipated impacts of climate change, biotic interactions, and disturbances, while also allowing for the exploration of management options.
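A toy sketch of the coupling idea: SDM occurrence probabilities scale per-cell transition probabilities inside a stochastic state-and-transition simulation. The state coding, rates, and disturbance process below are hypothetical placeholders, not the parameters of the ST-Sim model described here.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(states, p_suitable):
    """Advance a toy state-and-transition model one time step.

    states: 0 = other vegetation, 1 = whitebark pine (hypothetical coding).
    p_suitable: per-cell SDM occurrence probability; it scales the
    colonization transition, mirroring the paper's SDM-to-STSM coupling."""
    colonize = (states == 0) & (rng.random(states.shape) < 0.05 * p_suitable)
    beetle_kill = (states == 1) & (rng.random(states.shape) < 0.02)
    out = states.copy()
    out[colonize] = 1
    out[beetle_kill] = 0
    return out

grid = np.zeros((50, 50), dtype=int)
suitability = rng.random((50, 50))     # stand-in for SDM output
for _ in range(100):
    grid = step(grid, suitability)
```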
NASA Astrophysics Data System (ADS)
Abul Ehsan Bhuiyan, Md; Nikolopoulos, Efthymios I.; Anagnostou, Emmanouil N.; Quintana-Seguí, Pere; Barella-Ortiz, Anaïs
2018-02-01
This study investigates the use of a nonparametric, tree-based model, quantile regression forests (QRF), for combining multiple global precipitation datasets and characterizing the uncertainty of the combined product. We used the Iberian Peninsula as the study area, with a study period spanning 11 years (2000-2010). Inputs to the QRF model included three satellite precipitation products, CMORPH, PERSIANN, and 3B42 (V7); an atmospheric reanalysis precipitation and air temperature dataset; satellite-derived near-surface daily soil moisture data; and a terrain elevation dataset. We calibrated the QRF model for two seasons and two terrain elevation categories and used it to generate ensembles for these conditions. Evaluation of the combined product was based on a high-resolution, ground-reference precipitation dataset (SAFRAN) available at 5 km, 1 h resolution. Furthermore, to evaluate relative improvements and the overall impact of the combined product on hydrological response, we used the generated ensemble to force a distributed hydrological model (the SURFEX land surface model and the RAPID river routing scheme) and compared its streamflow simulation results with the corresponding simulations from the individual global precipitation and reference datasets. We concluded that the proposed technique can generate realizations that successfully encapsulate the reference precipitation and provide significant improvement in streamflow simulations, with reductions in systematic and random error on the order of 20-99% and 44-88%, respectively, when considering the ensemble mean.
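A simplified QRF sketch in the spirit of Meinshausen (2006): pool the training targets of the leaves each new sample reaches across all trees, then read off empirical quantiles. This equal-weight pooling is a common approximation to the exact leaf-weighted estimator, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def qrf_quantiles(rf, X_train, y_train, X_new, quantiles=(0.05, 0.5, 0.95)):
    """Quantile estimates from a fitted random forest by leaf pooling."""
    train_leaves = rf.apply(X_train)    # (n_train, n_trees) leaf indices
    new_leaves = rf.apply(X_new)        # (n_new, n_trees)
    out = np.empty((len(X_new), len(quantiles)))
    for i, leaves in enumerate(new_leaves):
        pooled = np.concatenate([y_train[train_leaves[:, t] == leaf]
                                 for t, leaf in enumerate(leaves)])
        out[i] = np.quantile(pooled, quantiles)
    return out

# Hypothetical usage: X holds predictor columns (satellite products,
# reanalysis, soil moisture, elevation), y the reference precipitation.
# rf = RandomForestRegressor(n_estimators=200, min_samples_leaf=20).fit(X, y)
# bands = qrf_quantiles(rf, X, y, X_new)
```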
New approach based on tetrahedral-mesh geometry for accurate 4D Monte Carlo patient-dose calculation
NASA Astrophysics Data System (ADS)
Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Kim, Seonghoon; Sohn, Jason W.
2015-02-01
In the present study, to achieve accurate 4D Monte Carlo dose calculation in radiation therapy, we devised a new approach that combines (1) modeling of the patient body using tetrahedral-mesh geometry based on the patient’s 4D CT data, (2) continuous movement/deformation of the tetrahedral patient model by interpolation of deformation vector fields acquired through deformable image registration, and (3) direct transportation of radiation particles during the movement and deformation of the tetrahedral patient model. The results of our feasibility study show that it is certainly possible to construct 4D patient models (= phantoms) with sufficient accuracy using the tetrahedral-mesh geometry and to directly transport radiation particles during continuous movement and deformation of the tetrahedral patient model. This new approach not only produces more accurate dose distribution in the patient but also replaces the current practice of using multiple 3D voxel phantoms and combining multiple dose distributions after Monte Carlo simulations. For routine clinical application of our new approach, the use of fast automatic segmentation algorithms is a must. In order to achieve, simultaneously, both dose accuracy and computation speed, the number of tetrahedrons for the lungs should be optimized. Although the current computation speed of our new 4D Monte Carlo simulation approach is slow (i.e. ~40 times slower than that of the conventional dose accumulation approach), this problem is resolvable by developing, in Geant4, a dedicated navigation class optimized for particle transportation in tetrahedral-mesh geometry.
Mapping Dependence Between Extreme Rainfall and Storm Surge
NASA Astrophysics Data System (ADS)
Wu, Wenyan; McInnes, Kathleen; O'Grady, Julian; Hoeke, Ron; Leonard, Michael; Westra, Seth
2018-04-01
Dependence between extreme storm surge and rainfall can have significant implications for flood risk in coastal and estuarine regions. To supplement limited observational records, we use reanalysis surge data from a hydrodynamic model as the basis for dependence mapping, providing information at a resolution of approximately 30 km along the Australian coastline. We evaluated this approach by comparing the dependence estimates from modeled surge to that calculated using historical surge records from 79 tide gauges around Australia. The results show reasonable agreement between the two sets of dependence values, with the exception of lower seasonal variation in the modeled dependence values compared to the observed data, especially at locations where there are multiple processes driving extreme storm surge. This is due to the combined impact of local bathymetry as well as the resolution of the hydrodynamic model and its meteorological inputs. Meteorological drivers were also investigated for different combinations of extreme rainfall and surge—namely rain-only, surge-only, and coincident extremes—finding that different synoptic patterns are responsible for each combination. The ability to supplement observational records with high-resolution modeled surge data enables a much more precise quantification of dependence along the coastline, strengthening the physical basis for assessments of flood risk in coastal regions.
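One common way to quantify such dependence (not necessarily the exact estimator used in this study) is the empirical tail-dependence coefficient between paired surge and rainfall series; the threshold choice below is illustrative.

```python
import numpy as np

def chi_u(x, y, u=0.95):
    """Empirical tail-dependence coefficient at quantile level u:
    P(Y above its u-quantile | X above its u-quantile)."""
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    above_x = x > qx
    return np.mean(y[above_x] > qy)

# Under independence this is about (1 - u); values well above that
# suggest dependence between extreme rainfall and storm surge.
# chi = chi_u(daily_rainfall, daily_surge, u=0.95)   # hypothetical arrays
```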
Efstathiou, Christos; Isukapalli, Sastry
2011-01-01
Allergic airway diseases represent a complex health problem which can be exacerbated by the synergistic action of pollen particles and air pollutants such as ozone. Understanding human exposures to aeroallergens requires accurate estimates of the spatial distribution of airborne pollen levels as well as of various air pollutants at different times. However, currently there are no established methods for estimating allergenic pollen emissions and concentrations over large geographic areas such as the United States. A mechanistic modeling system for describing pollen emissions and transport over extensive domains has been developed by adapting components of existing regional scale air quality models and vegetation databases. First, components of the Biogenic Emissions Inventory System (BEIS) were adapted to predict pollen emission patterns. Subsequently, the transport module of the Community Multiscale Air Quality (CMAQ) modeling system was modified to incorporate description of pollen transport. The combined model, CMAQ-pollen, allows for simultaneous prediction of multiple air pollutants and pollen levels in a single model simulation, and uses consistent assumptions related to the transport of multiple chemicals and pollen species. Application case studies for evaluating the combined modeling system included the simulation of birch and ragweed pollen levels for the year 2002, during their corresponding peak pollination periods (April for birch and September for ragweed). The model simulations were driven by previously evaluated meteorological model outputs and emissions inventories for the eastern United States for the simulation period. A semi-quantitative evaluation of CMAQ-pollen was performed using tree and ragweed pollen counts in Newark, NJ for the same time periods. The peak birch pollen concentrations were predicted to occur within two days of the peak measurements, while the temporal patterns closely followed the measured profiles of overall tree pollen. For the case of ragweed pollen, the model was able to capture the patterns observed during September 2002, but did not predict an early peak; this can be associated with a wider species pollination window and inadequate spatial information in current land cover databases. An additional sensitivity simulation was performed to comparatively evaluate the dispersion patterns predicted by CMAQ-pollen with those predicted by the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model, which is used extensively in aerobiological studies. The CMAQ estimated concentration plumes matched the equivalent pollen scenario modeled with HYSPLIT. The novel pollen modeling approach presented here allows simultaneous estimation of multiple airborne allergens and other air pollutants, and is being developed as a central component of an integrated population exposure modeling system, the Modeling Environment for Total Risk studies (MENTOR) for multiple, co-occurring contaminants that include aeroallergens and irritants. PMID:21516207
Feature extraction from multiple data sources using genetic programming
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Brumby, Steven P.; Pope, Paul A.; Eads, Damian R.; Esch-Mosher, Diana M.; Galassi, Mark C.; Harvey, Neal R.; McCulloch, Hersey D.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Bloch, Jeffrey J.; David, Nancy A.
2002-08-01
Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as those that arise when combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image-processing algorithms that extract a range of land cover features including towns, wildfire burn scars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
Meta-analysis identifies gene-by-environment interactions as demonstrated in a study of 4,965 mice.
Kang, Eun Yong; Han, Buhm; Furlotte, Nicholas; Joo, Jong Wha J; Shih, Diana; Davis, Richard C; Lusis, Aldons J; Eskin, Eleazar
2014-01-01
Identifying environmentally-specific genetic effects is a key challenge in understanding the structure of complex traits. Model organisms play a crucial role in the identification of such gene-by-environment interactions, as a result of the unique ability to observe genetically similar individuals across multiple distinct environments. Many model organism studies examine the same traits but under varying environmental conditions. For example, knock-out or diet-controlled studies are often used to examine cholesterol in mice. These studies, when examined in aggregate, provide an opportunity to identify genomic loci exhibiting environmentally-dependent effects. However, the straightforward application of traditional methodologies to aggregate separate studies suffers from several problems. First, environmental conditions are often variable and do not fit the standard univariate model for interactions. Additionally, applying a multivariate model results in increased degrees of freedom and low statistical power. In this paper, we jointly analyze multiple studies with varying environmental conditions using a meta-analytic approach based on a random effects model to identify loci involved in gene-by-environment interactions. Our approach is motivated by the observation that methods for discovering gene-by-environment interactions are closely related to random effects models for meta-analysis. We show that interactions can be interpreted as heterogeneity and can be detected without utilizing the traditional uni- or multi-variate approaches for discovery of gene-by-environment interactions. We apply our new method to combine 17 mouse studies containing in aggregate 4,965 distinct animals. We identify 26 significant loci involved in high-density lipoprotein (HDL) cholesterol, many of which are consistent with previous findings. Several of these loci show significant evidence of involvement in gene-by-environment interactions. An additional advantage of our meta-analysis approach is that our combined study has significantly higher power and improved resolution compared to any single study, thus explaining the large number of loci discovered in the combined study.
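The method builds on random-effects meta-analysis, where between-study heterogeneity is read as evidence of gene-by-environment interaction. A minimal sketch of the classical DerSimonian-Laird quantities for per-study effect estimates (not the authors' exact test, which extends this framework):

```python
import numpy as np

def random_effects_meta(beta, se):
    """DerSimonian-Laird random-effects meta-analysis.

    beta: per-study effect estimates for one locus; se: their standard
    errors. Cochran's Q measures the heterogeneity that is interpreted
    here as gene-by-environment interaction."""
    w = 1.0 / se**2
    beta_fe = np.sum(w * beta) / np.sum(w)          # fixed-effects estimate
    Q = np.sum(w * (beta - beta_fe) ** 2)           # heterogeneity statistic
    df = len(beta) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)
    beta_re = np.sum(w_re * beta) / np.sum(w_re)    # random-effects estimate
    return beta_re, Q, tau2

# Hypothetical effects of one SNP across five studies
print(random_effects_meta(np.array([0.2, 0.5, -0.1, 0.4, 0.3]),
                          np.array([0.1, 0.15, 0.1, 0.2, 0.1])))
```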
Constraint Based Modeling Going Multicellular.
Martins Conde, Patricia do Rosario; Sauter, Thomas; Pfau, Thomas
2016-01-01
Constraint based modeling has seen applications in many microorganisms. For example, there are now established methods to determine potential genetic modifications and external interventions to increase the efficiency of microbial strains in chemical production pipelines. In addition, multiple models of multicellular organisms have been created, including plants and humans. While the initial focus was on modeling individual cell types of a multicellular organism, this focus has recently started to shift. Models of microbial communities, as well as multi-tissue models of higher organisms, have been constructed. These models can include different parts of a plant, like root or stem, or different tissue types in the same organ. Such models can elucidate details of the interplay between symbiotic organisms, as well as the concerted efforts of multiple tissues, and can be applied to analyse the effects of drugs or mutations on a more systemic level. In this review we give an overview of the recent development of multi-tissue models using constraint based techniques and the methods employed when investigating these models. We further highlight advances in combining constraint based models with dynamic and regulatory information and give an overview of these types of hybrid or multi-level approaches.
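At their core, constraint based analyses such as flux balance analysis (FBA) reduce to linear programming: maximize an objective flux subject to steady-state mass balance and capacity bounds. A self-contained toy example with a hypothetical three-reaction network:

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximize biomass flux v3 subject to
# steady state S @ v = 0 and flux capacity bounds (hypothetical network:
# uptake -> conversion -> biomass).
S = np.array([[1, -1,  0],    # metabolite A: produced by v1, consumed by v2
              [0,  1, -1]])   # metabolite B: produced by v2, consumed by v3
bounds = [(0, 10), (0, 8), (0, None)]          # per-reaction capacities
res = linprog(c=[0, 0, -1],                    # linprog minimizes, so use -v3
              A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", -res.fun)       # 8.0, limited by the v2 step
```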
Improving Measures via Examining the Behavior of Distractors in Multiple-Choice Tests
Sideridis, Georgios; Tsaousis, Ioannis; Al Harbi, Khaleel
2017-01-01
The purpose of the present article was to illustrate, using an example from a national assessment, the value of analyzing the behavior of distractors in measures that use the multiple-choice format. A secondary purpose was to illustrate four remedial actions that can potentially improve the measurement of the construct(s) under study. Participants were 2,248 individuals who took a national examination of chemistry. The behavior of the distractors was analyzed by modeling it within the Rasch model. Potentially informative distractors were (a) further modeled using the partial credit model, (b) split into separate items and retested for model fit and parsimony, (c) combined to form a “super” item or testlet, and (d) reexamined after deleting low-ability individuals who likely guessed on those informative, albeit erroneous, distractors. Results indicated that all but the item-splitting strategy were associated with better model fit compared with the original model. The best fitting model, however, involved modeling and crediting informative distractors via the partial credit model or eliminating the responses of low-ability individuals who likely guessed on informative distractors. The implications, advantages, and disadvantages of modeling informative distractors for measurement purposes are discussed. PMID:29795904
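Crediting informative distractors via the partial credit model amounts to scoring items polytomously; a minimal sketch of PCM category probabilities follows (notation and values are hypothetical, not the study's item parameters).

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Partial credit model category probabilities for one item.

    theta: person ability; deltas: step difficulties, one per step
    (n_categories - 1 of them). Category x has log-numerator
    sum_{j<=x} (theta - delta_j), with category 0 fixed at 0."""
    steps = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas))])
    expnum = np.exp(steps - steps.max())   # subtract max for stability
    return expnum / expnum.sum()

# Probabilities of scoring 0, 1, or 2 on a 3-category item
print(pcm_probs(theta=0.5, deltas=[-0.8, 0.3]))
```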
Dong, J Q; Zhang, X Y; Wang, S Z; Jiang, X F; Zhang, K; Ma, G W; Wu, M Q; Li, H; Zhang, H
2018-01-01
Plasma very low-density lipoprotein (VLDL) can be used to select for low body fat or abdominal fat (AF) in broilers, but its correlation with AF is limited. We investigated whether any other biochemical indicator can be used in combination with VLDL for a better selective effect. Nineteen plasma biochemical indicators were measured in male chickens from the Northeast Agricultural University broiler lines divergently selected for AF content (NEAUHLF) in the fed state at 46 and 48 d of age. The average concentration of each parameter over the 2 d was used for statistical analysis. Levels of these 19 plasma biochemical parameters were compared between the lean and fat lines. The phenotypic correlations between these plasma biochemical indicators and AF traits were analyzed. Then, multiple linear regression models were constructed to select the best model for selecting against AF content, and the heritabilities of the plasma indicators contained in the best models were estimated. The results showed that 11 plasma biochemical indicators (triglycerides, total bile acid, total protein, globulin, albumin/globulin, aspartate transaminase, alanine transaminase, gamma-glutamyl transpeptidase, uric acid, creatinine, and VLDL) differed significantly between the lean and fat lines (P < 0.01), and correlated significantly with AF traits (P < 0.05). The best multiple linear regression models, based on albumin/globulin, VLDL, triglycerides, globulin, total bile acid, and uric acid, had a higher R2 (0.73) than the model based only on VLDL (0.21). The plasma parameters included in the best models had moderate heritability estimates (0.21 ≤ h2 ≤ 0.43). These results indicate that these multiple linear regression models can be used to select for lean broiler chickens. © 2017 Poultry Science Association Inc.
Dynamic characterization and modeling of potting materials for electronics assemblies
NASA Astrophysics Data System (ADS)
Joshi, Vasant S.; Lee, Gilbert F.; Santiago, Jaime R.
2017-01-01
Prediction of the survivability of encapsulated electronic components subject to impact relies on accurate modeling, which in turn needs both static and dynamic characterization of the individual electronic components and the encapsulation material to generate reliable material parameters for a robust material model. The current focus is on potting materials to mitigate high-rate loading on impact. In this effort, difficulty arises in capturing one of the critical features characteristic of the loading environment in a high-velocity impact: multiple loading events coupled with multi-axial stress states. Hence, potting materials need to be characterized well to understand their damping capacity at different frequencies and strain rates. An encapsulation scheme to protect electronic boards consists of multiple layers of filled as well as unfilled polymeric materials like Sylgard 184 and Trigger bond Epoxy # 20-3001. The materials were characterized using a combination of split Hopkinson pressure bar (SHPB) and dynamic material analyzer (DMA) experiments. For a material that behaves in an ideal manner, a master curve can be fitted to the Williams-Landel-Ferry (WLF) model. To verify the applicability of the WLF model, a new temperature-time shift (TTS) macro was written to compare the idealized temperature shift factor with the experimental incremental shift factor. Deviations can be readily observed by comparing the experimental data with the model fit to determine whether the model parameters reflect the actual material behavior. Similarly, another macro, written to obtain Ogden model parameters from Hopkinson bar tests, can readily indicate deviations from experimental high-strain-rate data. Experimental results for different materials used for mitigating impact, and ways to combine data from the DMA and Hopkinson bar together with modeling refinements, are presented.
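To make the TTS comparison concrete: the WLF equation predicts the horizontal shift factor log10(aT) from temperature alone, so deviations between it and experimentally derived incremental shift factors flag non-ideal behavior. A minimal sketch follows (Python; the "universal" constants C1 and C2 and the experimental values are placeholders, not data from this study).

```python
import numpy as np

def wlf_shift(T, Tref, C1=17.44, C2=51.6):
    """Williams-Landel-Ferry horizontal shift factor log10(aT).
    C1, C2 are the conventional 'universal' constants; a real
    analysis would fit them to the material at hand."""
    dT = np.asarray(T) - Tref
    return -C1 * dT / (C2 + dT)

# Hypothetical incremental shift factors from DMA temperature sweeps
T = np.array([-20.0, 0.0, 25.0, 50.0])          # deg C
log_aT_exp = np.array([3.1, 1.4, 0.0, -1.2])     # invented values
log_aT_wlf = wlf_shift(T, Tref=25.0)
print(log_aT_exp - log_aT_wlf)  # deviations flag non-ideal behavior
```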
Uno, Narumi; Abe, Satoshi; Oshimura, Mitsuo; Kazuki, Yasuhiro
2018-02-01
Chromosome transfer technology, including chromosome modification, enables the introduction of Mb-sized or multiple genes to desired cells or animals. This technology has allowed innovative developments to be made for models of human disease and humanized animals, including Down syndrome model mice and humanized transchromosomic (Tc) immunoglobulin mice. Genome editing techniques are developing rapidly, and permit modifications such as gene knockout and knockin to be performed in various cell lines and animals. This review summarizes chromosome transfer-related technologies and the combined technologies of chromosome transfer and genome editing mainly for the production of cell/animal models of human disease and humanized animal models. Specifically, these include: (1) chromosome modification with genome editing in Chinese hamster ovary cells and mouse A9 cells for efficient transfer to desired cell types; (2) single-nucleotide polymorphism modification in humanized Tc mice with genome editing; and (3) generation of a disease model of Down syndrome-associated hematopoiesis abnormalities by the transfer of human chromosome 21 to normal human embryonic stem cells and the induction of mutation(s) in the endogenous gene(s) with genome editing. These combinations of chromosome transfer and genome editing open up new avenues for drug development and therapy as well as for basic research.
Estimating demographic parameters using a combination of known-fate and open N-mixture models
Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.
2015-01-01
Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing estimation of detection probability and recruitment, along with joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than their unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.
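A compressed sketch of the joint-likelihood idea: the open N-mixture component marginalizes latent abundances with a forward recursion over a truncated support, while the known-fate component is a binomial likelihood that shares the same survival parameter. The single-site Python fragment below (all data values hypothetical, truncation deliberately crude) illustrates the structure only; the paper's model, implemented in BUGS and R, is richer.

```python
import numpy as np
from scipy.stats import binom, poisson

def joint_nll(params, counts, kf_marked, kf_survived, n_max=60):
    """Negative log-likelihood jointly using known-fate survival data
    and an open (Dail-Madsen-style) N-mixture model for pack counts."""
    lam, phi, gamma, p = params   # initial abundance, survival,
                                  # recruitment, detection probability
    if not (0 < phi < 1 and 0 < p < 1 and lam > 0 and gamma >= 0):
        return np.inf
    ns = np.arange(n_max + 1)
    # transition: survivors ~ Bin(N_prev, phi) plus recruits ~ Pois(gamma)
    trans = np.zeros((n_max + 1, n_max + 1))
    for m in ns:
        surv = binom.pmf(ns, m, phi)
        rec = poisson.pmf(ns, gamma)
        trans[m] = np.convolve(surv, rec)[: n_max + 1]  # truncated support
    alpha = poisson.pmf(ns, lam) * binom.pmf(counts[0], ns, p)
    for y in counts[1:]:                                # forward recursion
        alpha = (alpha @ trans) * binom.pmf(y, ns, p)
    nll_counts = -np.log(alpha.sum() + 1e-300)
    # known-fate component shares the survival parameter phi
    nll_kf = -np.sum(binom.logpmf(kf_survived, kf_marked, phi))
    return nll_counts + nll_kf

# Invented data: 4 years of pack counts plus marked-animal fates;
# fit (lam, phi, gamma, p) with scipy.optimize.minimize in practice.
counts = np.array([12, 15, 11, 14])
kf_marked = np.array([10, 10, 10, 10])
kf_survived = np.array([8, 7, 9, 8])
print(joint_nll((12.0, 0.8, 3.0, 0.7), counts, kf_marked, kf_survived))
```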
Method and apparatus for multiple-projection, dual-energy x-ray absorptiometry scanning
NASA Technical Reports Server (NTRS)
Feldmesser, Howard S. (Inventor); Magee, Thomas C. (Inventor); Charles, Jr., Harry K. (Inventor); Beck, Thomas J. (Inventor)
2007-01-01
Methods and apparatuses for advanced, multiple-projection, dual-energy X-ray absorptiometry scanning systems include combinations of a conical collimator; a high-resolution two-dimensional detector; a portable, power-capped, variable-exposure-time power supply; an exposure-time control element; calibration monitoring; a three-dimensional anti-scatter grid; and a gantry-gantry base assembly that permits up to seven projection angles for overlapping beams. Such systems are capable of high-precision bone structure measurements that can support three-dimensional bone modeling and derivations of bone strength, risk of injury, and efficacy of countermeasures, among other properties.
Life and light: exotic photosynthesis in binary and multiple-star systems.
O'Malley-James, J T; Raven, J A; Cockell, C S; Greaves, J S
2012-02-01
The potential for Earth-like planets within binary/multiple-star systems to host photosynthetic life was evaluated by modeling the levels of photosynthetically active radiation (PAR) such planets receive. Combinations of M and G stars in (i) close-binary systems; (ii) wide-binary systems, and (iii) three-star systems were investigated, and a range of stable radiation environments were found to be possible. These environmental conditions allow for the possibility of familiar, but also more exotic, forms of photosynthetic life, such as IR photosynthesizers and organisms that are specialized for specific spectral niches.
Multisource passive acoustic tracking: an application of random finite set data fusion
NASA Astrophysics Data System (ADS)
Ali, Andreas M.; Hudson, Ralph E.; Lorenzelli, Flavio; Yao, Kung
2010-04-01
Multisource passive acoustic tracking is useful in animal bio-behavioral studies by replacing or enhancing human involvement during and after field data collection. Multiple simultaneous vocalizations are a common occurrence in a forest or a jungle, where many species are encountered. Given a set of nodes capable of producing multiple direction-of-arrival (DOA) estimates, these data need to be combined into meaningful track estimates. Random finite sets provide a probabilistic model that is suitable for analysis and for the synthesis of optimal estimation algorithms. The proposed algorithm was verified using a simulation and a controlled test experiment.
Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study
NASA Astrophysics Data System (ADS)
O'Neill, B. C.
2015-12-01
Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However, many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies differences in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed. Options under consideration for improving the approach include the use of perturbed physics ensembles of CESM, employing results from multiple climate models, and combining the results from single impact models with statistical representations of uncertainty across multiple models. A key consideration is the relationship between the question being addressed and the uncertainty approach.
…and Subpial Pathology in Multiple Sclerosis by Combined PET and MRI
Principal Investigator: Dr. Caterina Mainero
2014-09-01
[Fragment of a report documentation page; only partial text was recovered.] …studies in multiple sclerosis (MS) suggested that cortical demyelinating lesions, which are hardly detected in vivo on conventional magnetic resonance… disease progression in many MS cases. Subject terms: multiple sclerosis; cortex; cortical sulci; neuroinflammation; microglia; cortical…
Gemini Planet Imager Spectroscopy of the HR 8799 Planets c and d
Ingraham, Patrick; Marley, Mark S.; Saumon, Didier; ...
2014-09-30
During the first-light run of the Gemini Planet Imager we obtained K-band spectra of exoplanets HR 8799 c and d. Analysis of the spectra indicates that planet d may be warmer than planet c. Comparisons to recent patchy cloud models and previously obtained observations over multiple wavelengths confirm that thick clouds combined with horizontal variation in the cloud cover generally reproduce the planets' spectral energy distributions. When combined with the 3 to 4 μm photometric data points, the observations provide strong constraints on the atmospheric methane content for both planets. Lastly, the data also provide further evidence that future modeling efforts must include cloud opacity, possibly including cloud holes, disequilibrium chemistry, and super-solar metallicity.
NASA Astrophysics Data System (ADS)
Vilhelmsen, Troels N.; Ferré, Ty P. A.
2016-04-01
Hydrological models are often developed to forecast future behavior in response to natural or human-induced changes in the stresses affecting hydrologic systems. Commonly, these models are conceptualized and calibrated based on existing data/information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e. to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at defining the expected data worth based on existing models. However, these studies are almost all limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. The methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) for reducing the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations and the optimal number of samples, as well as a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
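Under the linear (first-order) approximation described here, the worth of a candidate measurement campaign can be scored before any data are collected: the posterior predictive variance depends only on sensitivities and covariances. A sketch follows (Python; the Jacobians, prior covariance, and noise level are invented, and the exhaustive search over fixed-size subsets is a brute-force stand-in for the paper's procedure).

```python
import itertools
import numpy as np

def predictive_var(y, C, J, R):
    """Linear (Bayes-linear/FOSM) variance of a forecast y^T theta
    after assimilating observations with Jacobian J and noise cov R."""
    S = J @ C @ J.T + R                         # observation-space covariance
    C_post = C - C @ J.T @ np.linalg.solve(S, J @ C)
    return float(y @ C_post @ y)

def best_campaign(y, C, J_all, r, k):
    """Pick the k candidate observations (rows of J_all) that jointly
    minimize forecast variance: a simultaneous, not sequential, choice."""
    best = None
    for idx in itertools.combinations(range(J_all.shape[0]), k):
        var = predictive_var(y, C, J_all[list(idx)], r * np.eye(k))
        if best is None or var < best[1]:
            best = (idx, var)
    return best

rng = np.random.default_rng(0)
n_par, n_cand = 8, 12
C = np.eye(n_par)                          # prior parameter covariance
J_all = rng.normal(size=(n_cand, n_par))   # candidate obs sensitivities
y = rng.normal(size=n_par)                 # forecast sensitivity vector
print(best_campaign(y, C, J_all, r=0.5, k=3))
```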
Joint Effects of Ambient Air Pollutants on Pediatric Asthma ...
Background: Because ambient air pollution exposure occurs in the form of mixtures, consideration of joint effects of multiple pollutants may advance our understanding of air pollution health effects. Methods: We assessed the joint effect of selected ambient air pollutant combinations (groups of oxidant, secondary, traffic, power plant, and criteria pollutants constructed using combinations of criteria gases, fine particulate matter (PM2.5) and PM2.5 components) on warm season pediatric asthma emergency department (ED) visits in Atlanta during 1998-2004. Joint effects were assessed using multi-pollutant Poisson generalized linear models controlling for time trends, meteorology and daily non-asthma respiratory ED visit counts. Rate ratios (RR) were calculated for the combined effect of an interquartile-range increment in the concentration of each pollutant. Results: Increases in all of the selected pollutant combinations were associated with increases in pediatric asthma ED visits [e.g., joint effect rate ratio=1.13 (95% confidence interval 1.06-1.21) for criteria pollutants (including ozone, carbon monoxide, nitrogen dioxide, sulfur dioxide, and PM2.5)]. Joint effect estimates were smaller than estimates calculated based on summing results from single-pollutant models, due to control for confounding. Compared with models without interactions, joint effect estimates from models including first-order pollutant interactions were similar for oxidant a
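In a log-linear Poisson model, the joint-effect calculation reduces to summing scaled coefficients on the log scale. The fragment below (Python/statsmodels, fully synthetic data, and omitting the time-trend and meteorology covariates the study controls for) shows the mechanics of estimating a joint rate ratio for a simultaneous IQR increment in several pollutants.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500  # hypothetical daily time series
df = pd.DataFrame({
    "o3": rng.gamma(4, 10, n),     # pollutant concentrations (invented)
    "no2": rng.gamma(3, 8, n),
    "pm25": rng.gamma(5, 4, n),
})
df["visits"] = rng.poisson(20 + 0.05 * df["o3"] + 0.08 * df["pm25"])

X = sm.add_constant(df[["o3", "no2", "pm25"]])
fit = sm.GLM(df["visits"], X, family=sm.families.Poisson()).fit()

# joint effect: rate ratio for a simultaneous IQR increase in all pollutants
cols = ["o3", "no2", "pm25"]
iqr = df[cols].quantile(0.75) - df[cols].quantile(0.25)
joint_rr = np.exp(np.sum(fit.params[cols] * iqr))
print(f"joint RR per IQR increment: {joint_rr:.3f}")
```

Because the coefficients come from one multi-pollutant model, each is adjusted for the others, which is why such joint estimates run smaller than summing single-pollutant results, as the abstract notes.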
Intraspecific density dependence and a guild of consumers coexisting on one resource.
McPeek, Mark A
2012-12-01
The importance of negative intraspecific density dependence to promoting species coexistence in a community is well accepted. However, such mechanisms are typically omitted from more explicit models of community dynamics. Here I analyze a variation of the Rosenzweig-MacArthur consumer-resource model that includes negative intraspecific density dependence for consumers to explore its effect on the coexistence of multiple consumers feeding on a single resource. This analysis demonstrates that a guild of multiple consumers can easily coexist on a single resource if each limits its own abundance to some degree, and stronger intraspecific density dependence permits a wider variety of consumers to coexist. The mechanism permitting multiple consumers to coexist works in a fashion similar to apparent competition or to each consumer having its own specialized predator. These results argue for a more explicit emphasis on how negative intraspecific density dependence is generated and how these mechanisms combine with species interactions to shape overall community structure.
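The model variation described is straightforward to reproduce numerically: add a self-limitation term to each consumer equation of the Rosenzweig-MacArthur system and integrate. A sketch with invented parameters follows (set g = 0 to recover the classical exclusion outcome).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rosenzweig-MacArthur resource plus two consumers, each with
# intraspecific density dependence (the -g*C**2 terms); values invented
r, K = 1.0, 10.0
a = np.array([1.0, 1.2]); h = np.array([0.5, 0.4])
e = np.array([0.6, 0.5]); m = np.array([0.3, 0.3])
g = np.array([0.05, 0.05])   # consumer self-limitation strength

def rhs(t, x):
    R, C = x[0], x[1:]
    feed = a * R / (1.0 + a * h * R)          # type-II functional response
    dR = r * R * (1 - R / K) - np.sum(feed * C)
    dC = e * feed * C - m * C - g * C**2      # self-limitation term
    return np.concatenate(([dR], dC))

sol = solve_ivp(rhs, (0, 500), [5.0, 1.0, 1.0], rtol=1e-8)
print(sol.y[:, -1])  # with g > 0 both consumers persist on one resource
```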
Synergistic Anti-arrhythmic Effects in Human Atria with Combined Use of Sodium Blockers and Acacetin
Ni, Haibo; Whittaker, Dominic G.; Wang, Wei; Giles, Wayne R.; Narayan, Sanjiv M.; Zhang, Henggui
2017-01-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. Developing effective and safe anti-AF drugs remains an unmet challenge. Simultaneous block of both the atrial-specific ultra-rapid delayed rectifier potassium (K+) current (IKur) and the Na+ current (INa) has been hypothesized to be anti-AF without inducing significant QT prolongation and ventricular side effects. However, the antiarrhythmic advantage of simultaneously blocking these two channels vs. individual block in the setting of AF-induced electrical remodeling remains to be documented. Furthermore, many IKur blockers, such as acacetin and AVE0118, partially inhibit other K+ currents in the atria. Whether this multi-K+-block produces greater anti-AF effects compared with selective IKur-block has not been fully understood. The aim of this study was to use computer models to (i) assess the impact of multi-K+-block as exhibited by many IKur blockers, and (ii) evaluate the antiarrhythmic effect of blocking IKur and INa, either alone or in combination, on atrial and ventricular electrical excitation and recovery in the setting of AF-induced electrical remodeling. Contemporary mathematical models of human atrial and ventricular cells were modified to incorporate dose-dependent actions of acacetin (a multichannel blocker primarily inhibiting IKur while less potently blocking Ito, IKr, and IKs). Rate- and atrial-selective inhibition of INa was also incorporated into the models. These single-myocyte models were then incorporated into multicellular two-dimensional (2D) and three-dimensional (3D) anatomical models of the human atria. As expected, application of an IKur blocker produced pronounced action potential duration (APD) prolongation in atrial myocytes. Furthermore, combined multiple K+-channel block that mimicked the effects of acacetin exhibited synergistic APD prolongations. Synergistic anti-AF effects following inhibition of INa and combined IKur/K+-channels were also observed. The attainable maximal AF-selectivity of INa inhibition was greatly augmented by blocking IKur or multiple K+-currents in the atrial myocytes. These enhanced antiarrhythmic effects of combined block of Na+ and K+ channels were also seen in 2D and 3D simulations; specifically, there was an enhanced efficacy in terminating re-entrant excitation waves, exerting improved antiarrhythmic effects in the human atria as compared to single-channel block. However, in the human ventricular myocytes and tissue, cellular repolarization and computed QT intervals were only modestly affected in the presence of the actions of acacetin and INa blockers (either alone or in combination). In conclusion, this study demonstrates synergistic antiarrhythmic benefits of combined block of IKur and INa, as well as those of INa and the combined multi-K+-current block of acacetin, without significant alterations of ventricular repolarization and QT intervals. This approach may be a valuable strategy for the treatment of AF. PMID:29218016
Zhu, Huayang; Ricote, Sandrine; Coors, W Grover; Kee, Robert J
2015-01-01
A model-based interpretation of measured equilibrium conductivity and conductivity relaxation is developed to establish thermodynamic, transport, and kinetics parameters for multiple charged defect conducting (MCDC) ceramic materials. The present study focuses on 10% yttrium-doped barium zirconate (BZY10). In principle, using the Nernst-Einstein relationship, equilibrium conductivity measurements are sufficient to establish thermodynamic and transport properties. However, in practice it is difficult to establish unique sets of properties using equilibrium conductivity alone. Combining equilibrium and conductivity-relaxation measurements serves to significantly improve the quantitative fidelity of the derived material properties. The models are developed using a Nernst-Planck-Poisson (NPP) formulation, which enables the quantitative representation of conductivity relaxations caused by very large changes in oxygen partial pressure.
Jacobs, J V; Horak, F B; Tran, V K; Nutt, J G
2006-01-01
Objectives: Clinicians often base the implementation of therapies on the presence of postural instability in subjects with Parkinson's disease (PD). These decisions are frequently based on the pull test from the Unified Parkinson's Disease Rating Scale (UPDRS). We sought to determine whether combining the pull test, the one‐leg stance test, the functional reach test, and UPDRS items 27–29 (arise from chair, posture, and gait) predicts balance confidence and falling better than any test alone. Methods: The study included 67 subjects with PD. Subjects performed the one‐leg stance test, the functional reach test, and the UPDRS motor exam. Subjects also responded to the Activities‐specific Balance Confidence (ABC) scale and reported how many times they fell during the previous year. Regression models determined the combination of tests that optimally predicted mean ABC scores or categorised fall frequency. Results: When all tests were included in a stepwise linear regression, only gait (UPDRS item 29), the pull test (UPDRS item 30), and the one‐leg stance test, in combination, represented significant predictor variables for mean ABC scores (r2 = 0.51). A multinomial logistic regression model including the one‐leg stance test and gait represented the model with the fewest significant predictor variables that correctly identified the most subjects as fallers or non‐fallers (85% of subjects were correctly identified). Conclusions: Multiple balance tests (including the one‐leg stance test, and the gait and pull test items of the UPDRS) that assess different types of postural stress provide an optimal assessment of postural stability in subjects with PD. PMID:16484639
Glaholt, Stephen P; Chen, Celia Y; Demidenko, Eugene; Bugge, Deenie M; Folt, Carol L; Shaw, Joseph R
2012-08-15
The study of stressor interactions by eco-toxicologists using nonlinear response variables is limited by the required amounts of a priori knowledge, the complexity of experimental designs, the use of linear models, and the lack of use of optimal designs of nonlinear models to characterize complex interactions. Therefore, we developed AID, an adaptive-iterative design that enables eco-toxicologists to examine complex multiple-stressor interactions more accurately and efficiently. AID incorporates the power of the general linear model and A-optimal criteria with an iterative process that: 1) minimizes the required amount of a priori knowledge, 2) simplifies the experimental design, and 3) quantifies both individual and interactive effects. Once a stable model is determined, the best-fit model is identified, and the direction and magnitude of stressor effects, individually and in all combinations (including complex interactions), are quantified. To validate AID, we selected five commonly co-occurring components of polluted aquatic systems, three metal stressors (Cd, Zn, As) and two water chemistry parameters (pH, hardness), to be tested using standard acute toxicity tests in which Daphnia mortality is the (nonlinear) response variable. We found that, after the initial input of experimental data (literature values, e.g. EC-values, may also be used) and only two iterations of AID, our dose-response model was stable. The model ln(Cd)*ln(Zn) was determined to be the best predictor of the Daphnia mortality response to the combined effects of Cd, Zn, As, pH, and hardness. This model was then used to accurately identify and quantify the strength of both greater-than-additive (e.g. As*Cd) and less-than-additive (e.g. Cd*Zn) interactions. Interestingly, our study found only binary interactions to be significant, not higher-order interactions. We conclude that AID is more efficient and effective at assessing multiple stressor interactions than current methods. Other applications, including life-history endpoints commonly used by regulators, could benefit from AID's efficiency in assessing water quality criteria. Copyright © 2012 Elsevier B.V. All rights reserved.
[A review of pension status quo in China and domestic and overseas pension models].
Si, J H; Li, L M
2016-10-10
With the aging of the population and the progressive decline of the traditional pension model, the problems of supporting the aged have caused serious social concern in China. Since the 1980s, different opinions about pension models have been suggested in many research papers. This paper summarizes the characteristics of the different pension models used in China and abroad in terms of the financial sources of old-age support, lifestyle, and integration with medical services, and suggests establishing a pension model with Chinese characteristics that provides multiple and personalized services on the basis of China's national situation and the successful experiences of other countries.
Varrassi, Giustino; Hanna, Magdi; Macheras, Giorgos; Montero, Antonio; Montes Perez, Antonio; Meissner, Winfried; Perrot, Serge; Scarpignato, Carmelo
2017-06-01
Untreated and under-treated pain represent one of the most pervasive health problems, which is worsening as the population ages and accrues risk for pain. Multiple treatment options are available, most of which have one mechanism of action, and cannot be prescribed at unlimited doses due to the ceiling of efficacy and/or safety concerns. Another limitation of single-agent analgesia is that, in general, pain is due to multiple causes. Combining drugs from different classes, with different and complementary mechanism(s) of action, provides a better opportunity for effective analgesia at reduced doses of individual agents. Therefore, there is a potential reduction of adverse events, often dose-related. Analgesic combinations are recommended by several organizations and are used in clinical practice. Provided the two agents are combined in a fixed-dose ratio, the resulting medication may offer advantages over extemporaneous combinations. Dexketoprofen/tramadol (25 mg/75 mg) is a new oral fixed-dose combination offering a comprehensive multimodal approach to moderate-to-severe acute pain that encompasses central analgesic action, peripheral analgesic effect and anti-inflammatory activity, together with a good tolerability profile. The analgesic efficacy of dexketoprofen/tramadol combination is complemented by a favorable pharmacokinetic and pharmacodynamic profile, characterized by rapid onset and long duration of action. This has been well documented in both somatic- and visceral-pain human models. This review discusses the available clinical evidence and the future possible applications of dexketoprofen/tramadol fixed-dose combination that may play an important role in the management of moderate-to-severe acute pain.
Marcus, Alonna; Wilder, David A
2009-01-01
Peer video modeling was compared to self video modeling to teach 3 children with autism to respond appropriately to (i.e., identify or label) novel letters. A combination multiple baseline and multielement design was used to compare the two procedures. Results showed that all 3 participants met the mastery criterion in the self-modeling condition, whereas only 1 of the participants met the mastery criterion in the peer-modeling condition. In addition, the participant who met the mastery criterion in both conditions reached the criterion more quickly in the self-modeling condition. Results are discussed in terms of their implications for teaching new skills to children with autism.
Time series sightability modeling of animal populations.
ArchMiller, Althea A; Dorazio, Robert M; St Clair, Katherine; Fieberg, John R
2018-01-01
Logistic regression models, or "sightability models", fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in estimates similar to the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits of model-based approaches that allow information to be shared across years.
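For readers unfamiliar with the two-stage logic: a sightability model fit to detection/non-detection trials supplies detection probabilities, and the mHT estimator inflates each observed group by 1/p-hat. A toy sketch follows (Python/statsmodels; all trial and survey numbers invented, and without the variance machinery of a real mHT analysis).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Stage 1: sightability model fit to detection/non-detection trials
# on marked groups (covariate and outcome values are hypothetical)
trials = pd.DataFrame({
    "detected":   [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "group_size": [5, 8, 2, 6, 3, 7, 9, 5, 4, 2],
})
X = sm.add_constant(trials[["group_size"]])
sight = sm.GLM(trials["detected"], X,
               family=sm.families.Binomial()).fit()

# Stage 2: modified Horvitz-Thompson estimate from a detection-only survey
survey = pd.DataFrame({"group_size": [3, 6, 8, 2, 5]})
p_hat = sight.predict(sm.add_constant(survey[["group_size"]]))
N_hat = np.sum(survey["group_size"] / p_hat)   # inflate each group by 1/p
print(f"mHT abundance estimate: {N_hat:.1f}")
```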
A Bayesian trans-dimensional approach for the fusion of multiple geophysical datasets
NASA Astrophysics Data System (ADS)
JafarGandomi, Arash; Binley, Andrew
2013-09-01
We propose a Bayesian fusion approach to integrate multiple geophysical datasets with different coverage and sensitivity. The fusion strategy is based on the capability of various geophysical methods to provide enough resolution to identify either subsurface material parameters or subsurface structure, or both. We focus on electrical resistivity as the target material parameter and electrical resistivity tomography (ERT), electromagnetic induction (EMI), and ground penetrating radar (GPR) as the set of geophysical methods. However, extending the approach to different sets of geophysical parameters and methods is straightforward. Different geophysical datasets are entered into a trans-dimensional Markov chain Monte Carlo (McMC) search-based joint inversion algorithm. The trans-dimensional property of the McMC algorithm allows dynamic parameterisation of the model space, which in turn helps to avoid bias of the post-inversion results towards a particular model. Given that we are attempting to develop an approach that has practical potential, we discretize the subsurface into an array of one-dimensional earth models. Accordingly, the ERT data, which are collected using a two-dimensional acquisition geometry, are recast as a set of equivalent vertical electric soundings. Different data are inverted either individually or jointly to estimate one-dimensional subsurface models at discrete locations. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical datasets. Information from multiple methods is brought together via a joint likelihood function and/or constraints on the prior information. A Bayesian maximum entropy approach is used for spatial fusion of the spatially dispersed estimated one-dimensional models and mapping of the target parameter. We illustrate the approach with a synthetic dataset and then apply it to a field dataset. We show that the proposed fusion strategy is successful not only in enhancing the subsurface information but also as a survey-design tool for identifying the appropriate combination of geophysical methods and for showing whether application of an individual method for further investigation of a specific site is beneficial.
Phenomenology of stochastic exponential growth
NASA Astrophysics Data System (ADS)
Pirjol, Dan; Jafarpour, Farshid; Iyer-Biswas, Srividya
2017-06-01
Stochastic exponential growth is observed in a variety of contexts, including molecular autocatalysis, nuclear fission, population growth, inflation of the universe, viral social media posts, and financial markets. Yet the literature on modeling the phenomenology of these stochastic dynamics has predominantly focused on one model, geometric Brownian motion (GBM), which can be described as the solution of a Langevin equation with linear drift and linear multiplicative noise. Using recent experimental results on the stochastic exponential growth of individual bacterial cell sizes, we motivate the need for a more general class of phenomenological models of stochastic exponential growth, which are consistent with the observation that the mean-rescaled distributions are approximately stationary at long times. We show that this behavior is not consistent with GBM; instead it is consistent with power-law multiplicative noise with positive fractional powers. Therefore, we consider this general class of phenomenological models for stochastic exponential growth, provide analytical solutions, and identify the important dimensionless combination of model parameters, which determines the shape of the mean-rescaled distribution. We also provide a prescription for robustly inferring model parameters from experimentally observed stochastic growth trajectories.
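The contrast between GBM and power-law multiplicative noise is easy to see by simulation. The Euler-Maruyama sketch below integrates dx = a*x*dt + b*x**alpha*dW with arbitrary parameter values: under GBM (alpha = 1) the mean-rescaled distribution keeps broadening, whereas a fractional power 0 < alpha < 1 yields an approximately stationary rescaled shape, as the abstract describes.

```python
import numpy as np

def grow(alpha, a=1.0, b=0.3, x0=1.0, T=4.0, dt=1e-3, n=20000, seed=2):
    """Euler-Maruyama for dx = a*x*dt + b*x**alpha*dW.
    alpha = 1 is GBM; 0 < alpha < 1 is power-law multiplicative noise."""
    rng = np.random.default_rng(seed)
    x = np.full(n, x0)
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), n)
        x += a * x * dt + b * np.maximum(x, 0.0) ** alpha * dW
        x = np.maximum(x, 1e-12)        # keep trajectories positive
    return x

for alpha in (1.0, 0.5):
    x = grow(alpha)
    z = x / x.mean()                    # mean-rescaled sizes
    print(f"alpha={alpha}: CV of rescaled distribution = {z.std():.3f}")
```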
Digging into the corona: A modeling framework trained with Sun-grazing comet observations
NASA Astrophysics Data System (ADS)
Jia, Y. D.; Pesnell, W. D.; Bryans, P.; Downs, C.; Liu, W.; Schwartz, S. J.
2017-12-01
Images of comets diving into the low corona have been captured a few times in the past decade. Structures visible at various wavelengths during these encounters indicate a strong variation in the ambient conditions of the corona. We combine three numerical models: a global coronal model, a particle transportation model, and a cometary plasma interaction model, into one framework to model the interaction of such Sun-grazing comets with plasma in the low corona. In our framework, cometary vapors are ionized via multiple channels and then captured by the coronal magnetic field. In seconds, these ions are further ionized to their highest charge state, which is revealed by certain coronal emission lines. Constrained by observations, we apply our framework to infer the local conditions of the ambient corona and their spatial and temporal variation over a broad range of scales. Once the framework is trained on multiple stages of the comet's journey through the low corona, we illustrate how it can leverage these unique observations to probe the structure of the solar corona and solar wind.
When one model is not enough: combining epistemic tools in systems biology.
Green, Sara
2013-06-01
In recent years, the philosophical focus of the modeling literature has shifted from descriptions of general properties of models to an interest in different model functions. It has been argued that the diversity of models and their correspondingly different epistemic goals are important for developing intelligible scientific theories (Leonelli, 2007; Levins, 2006). However, more knowledge is needed on how a combination of different epistemic means can generate and stabilize new entities in science. This paper will draw on Rheinberger's practice-oriented account of knowledge production. The conceptual repertoire of Rheinberger's historical epistemology offers important insights for an analysis of the modelling practice. I illustrate this with a case study on network modeling in systems biology where engineering approaches are applied to the study of biological systems. I shall argue that the use of multiple representational means is an essential part of the dynamic of knowledge generation. It is because of, rather than in spite of, the diversity of constraints of different models that the interlocking use of different epistemic means creates a potential for knowledge production. Copyright © 2013 Elsevier Ltd. All rights reserved.
Application of Harmony Search algorithm to the solution of groundwater management models
NASA Astrophysics Data System (ADS)
Tamer Ayvaz, M.
2009-06-01
This study proposes a groundwater resources management model in which the solution is obtained through a combined simulation-optimization model. A modular three-dimensional finite difference groundwater flow model, MODFLOW, is used as the simulation model. This model is then combined with a Harmony Search (HS) optimization algorithm, which is based on the musical process of searching for a perfect state of harmony. The performance of the proposed HS-based management model is tested on three separate groundwater management problems: (i) maximization of total pumping from an aquifer (steady-state); (ii) minimization of the total pumping cost to satisfy the given demand (steady-state); and (iii) minimization of the pumping cost to satisfy the given demand for multiple management periods (transient). The sensitivity of the HS algorithm is evaluated by performing a sensitivity analysis that aims to determine the impact of the related solution parameters on convergence behavior. The results show that HS yields nearly the same or better solutions than previous solution methods and may be used to solve management problems in groundwater modeling.
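The core of Harmony Search is a short loop: improvise a candidate from the harmony memory with probability hmcr, pitch-adjust it with probability par, otherwise draw at random, and replace the worst stored harmony whenever the candidate improves on it. A stripped-down sketch follows (Python; a toy quadratic stands in for the MODFLOW-coupled pumping objective, and the parameter values are conventional defaults, not the paper's).

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=5000, seed=3):
    """Minimal Harmony Search minimizer over box-bounded variables."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    d = len(lo)
    hm = rng.uniform(lo, hi, size=(hms, d))        # harmony memory
    cost = np.apply_along_axis(f, 1, hm)
    for _ in range(iters):
        new = rng.uniform(lo, hi)                  # random consideration
        use_mem = rng.random(d) < hmcr             # memory consideration
        picks = hm[rng.integers(hms, size=d), np.arange(d)]
        new[use_mem] = picks[use_mem]
        adjust = use_mem & (rng.random(d) < par)   # pitch adjustment
        new[adjust] += bw * (hi - lo)[adjust] * rng.uniform(-1, 1, adjust.sum())
        new = np.clip(new, lo, hi)
        worst = np.argmax(cost)
        c = f(new)
        if c < cost[worst]:                        # replace worst harmony
            hm[worst], cost[worst] = new, c
    best = np.argmin(cost)
    return hm[best], cost[best]

# toy stand-in for a pumping-cost objective (not MODFLOW-coupled)
print(harmony_search(lambda q: np.sum((q - 0.3) ** 2), [(0, 1)] * 4))
```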
Modeling human diseases: an education in interactions and interdisciplinary approaches.
Zon, Leonard
2016-06-01
Traditionally, most investigators in the biomedical arena exploit one model system in the course of their careers. Occasionally, an investigator will switch models. The selection of a suitable model system is a crucial step in research design. Factors to consider include the accuracy of the model as a reflection of the human disease under investigation, the numbers of animals needed and ease of husbandry, its physiology and developmental biology, and the ability to apply genetics and harness the model for drug discovery. In my lab, we have primarily used the zebrafish but combined it with other animal models and provided a framework for others to consider the application of developmental biology for therapeutic discovery. Our interdisciplinary approach has led to many insights into human diseases and to the advancement of candidate drugs to clinical trials. Here, I draw on my experiences to highlight the importance of combining multiple models, establishing infrastructure and genetic tools, forming collaborations, and interfacing with the medical community for successful translation of basic findings to the clinic. © 2016. Published by The Company of Biologists Ltd.
Virtual Plant Tissue: Building Blocks for Next-Generation Plant Growth Simulation
De Vos, Dirk; Dzhurakhalov, Abdiravuf; Stijven, Sean; Klosiewicz, Przemyslaw; Beemster, Gerrit T. S.; Broeckhove, Jan
2017-01-01
Motivation: Computational modeling of plant developmental processes is becoming increasingly important. Cellular resolution plant tissue simulators have been developed, yet they are typically describing physiological processes in an isolated way, strongly delimited in space and time. Results: With plant systems biology moving toward an integrative perspective on development we have built the Virtual Plant Tissue (VPTissue) package to couple functional modules or models in the same framework and across different frameworks. Multiple levels of model integration and coordination enable combining existing and new models from different sources, with diverse options in terms of input/output. Besides the core simulator the toolset also comprises a tissue editor for manipulating tissue geometry and cell, wall, and node attributes in an interactive manner. A parameter exploration tool is available to study parameter dependence of simulation results by distributing calculations over multiple systems. Availability: Virtual Plant Tissue is available as open source (EUPL license) on Bitbucket (https://bitbucket.org/vptissue/vptissue). The project has a website https://vptissue.bitbucket.io. PMID:28523006
Modeling Pan Evaporation for Kuwait by Multiple Linear Regression
Almedeij, Jaber
2012-01-01
Evaporation is an important parameter for many projects related to hydrology and water resources systems. This paper constitutes the first study conducted in Kuwait to obtain empirical relations for the estimation of daily and monthly pan evaporation as functions of the available meteorological data of temperature, relative humidity, and wind speed. The data used here for the modeling are daily measurements with substantially continuous coverage over a 17-year period between January 1993 and December 2009, which can be considered representative of the desert climate of the country's urban zone. The multiple linear regression technique is used with a variable-selection procedure for fitting the best model forms. The correlations of evaporation with temperature and relative humidity are also transformed in order to linearize the existing curvilinear patterns of the data, by using power and exponential functions, respectively. The suggested evaporation models with the best variable combinations were shown to produce results in reasonable agreement with observed values. PMID:23226984
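The transformation step amounts to regressing evaporation on a power function of temperature and an exponential function of relative humidity so that ordinary least squares applies. A self-contained sketch with synthetic data follows (the exponents, coefficients, and value ranges are invented, not the paper's fitted values).

```python
import numpy as np

# Hypothetical daily meteorology: temperature (C), relative humidity (%),
# wind speed (km/h), and a synthetic pan-evaporation response (mm/day)
rng = np.random.default_rng(4)
T = rng.uniform(10, 48, 300)
RH = rng.uniform(5, 95, 300)
U = rng.uniform(0, 30, 300)
E = 0.004 * T**1.8 + 2.5 * np.exp(-0.02 * RH) + 0.05 * U \
    + rng.normal(0, 0.3, 300)

# linearize curvilinear relations: power law in T, exponential in RH
X = np.column_stack([np.ones_like(T), T**1.8, np.exp(-0.02 * RH), U])
coef, *_ = np.linalg.lstsq(X, E, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((E - pred)**2) / np.sum((E - E.mean())**2)
print(coef, f"R^2 = {r2:.2f}")
```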
Nonlinear aeroservoelastic analysis of a controlled multiple-actuated-wing model with free-play
NASA Astrophysics Data System (ADS)
Huang, Rui; Hu, Haiyan; Zhao, Yonghui
2013-10-01
In this paper, the effects of structural nonlinearity due to free-play in both leading-edge and trailing-edge outboard control surfaces on the linear flutter control system are analyzed for an aeroelastic model of three-dimensional multiple-actuated-wing. The free-play nonlinearities in the control surfaces are modeled theoretically by using the fictitious mass approach. The nonlinear aeroelastic equations of the presented model can be divided into nine sub-linear modal-based aeroelastic equations according to the different combinations of deflections of the leading-edge and trailing-edge outboard control surfaces. The nonlinear aeroelastic responses can be computed based on these sub-linear aeroelastic systems. To demonstrate the effects of nonlinearity on the linear flutter control system, a single-input and single-output controller and a multi-input and multi-output controller are designed based on the unconstrained optimization techniques. The numerical results indicate that the free-play nonlinearity can lead to either limit cycle oscillations or divergent motions when the linear control system is implemented.
Li, Cheng-Wei; Chen, Bor-Sen
2016-01-01
Epigenetic and microRNA (miRNA) regulation are associated with carcinogenesis and the development of cancer. By using the available omics data, including those from next-generation sequencing (NGS), genome-wide methylation profiling, candidate integrated genetic and epigenetic network (IGEN) analysis, and drug response genome-wide microarray analysis, we constructed an IGEN system based on three coupling regression models that characterize protein-protein interaction networks (PPINs), gene regulatory networks (GRNs), miRNA regulatory networks (MRNs), and epigenetic regulatory networks (ERNs). By applying a system identification method and principal genome-wide network projection (PGNP) to the IGEN analysis, we identified the core network biomarkers to investigate bladder carcinogenic mechanisms and design multiple drug combinations for treating bladder cancer with minimal side-effects. The progression of DNA repair and cell proliferation in stage 1 bladder cancer ultimately results not only in the derepression of miR-200a and miR-200b but also in the regulation of the TNF pathway to metastasis-related genes or proteins, cell proliferation, and DNA repair in stage 4 bladder cancer. We designed a multiple drug combination comprising gefitinib, estradiol, yohimbine, and fulvestrant for treating stage 1 bladder cancer with minimal side-effects, and another multiple drug combination comprising gefitinib, estradiol, chlorpromazine, and LY294002 for treating stage 4 bladder cancer with minimal side-effects.
Multiplicative mixing of object identity and image attributes in single inferior temporal neurons.
Ratan Murty, N Apurva; Arun, S P
2018-04-03
Object recognition is challenging because the same object can produce vastly different images, mixing signals related to its identity with signals due to its image attributes, such as size, position, rotation, etc. Previous studies have shown that both signals are present in high-level visual areas, but precisely how they are combined has remained unclear. One possibility is that neurons might encode identity and attribute signals multiplicatively so that each can be efficiently decoded without interference from the other. Here, we show that, in high-level visual cortex, responses of single neurons can be explained better as a product rather than a sum of tuning for object identity and tuning for image attributes. This subtle effect in single neurons produced substantially better population decoding of object identity and image attributes in the neural population as a whole. This property was absent both in low-level vision models and in deep neural networks. It was also unique to invariances: when tested with two-part objects, neural responses were explained better as a sum than as a product of part tuning. Taken together, our results indicate that signals requiring separate decoding, such as object identity and image attributes, are combined multiplicatively in IT neurons, whereas signals that require integration (such as parts in an object) are combined additively. Copyright © 2018 the Author(s). Published by PNAS.
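The additive-versus-multiplicative comparison can be phrased in a few lines of linear algebra: the best multiplicative (separable) account of a response matrix is its rank-1 SVD approximation, while the additive account comes from row and column means. A sketch on a synthetic neuron follows (all tuning values invented; this is an illustration of the model comparison, not the authors' analysis pipeline).

```python
import numpy as np

def fit_models(R):
    """Compare additive (r_ij ~ a_i + b_j) and multiplicative
    (r_ij ~ u_i * v_j) accounts of an identity x attribute response
    matrix; the multiplicative fit is the best rank-1 SVD approximation."""
    mu = R.mean()
    add = mu + (R.mean(1, keepdims=True) - mu) + (R.mean(0, keepdims=True) - mu)
    U, s, Vt = np.linalg.svd(R)
    mult = s[0] * np.outer(U[:, 0], Vt[0])
    sse = lambda M: np.sum((R - M) ** 2)
    tot = np.sum((R - mu) ** 2)
    return 1 - sse(add) / tot, 1 - sse(mult) / tot

rng = np.random.default_rng(5)
identity = rng.uniform(0.5, 2.0, 8)     # tuning across 8 objects
attribute = rng.uniform(0.5, 2.0, 5)    # tuning across 5 sizes
R = np.outer(identity, attribute) + rng.normal(0, 0.05, (8, 5))
print("R2 additive, multiplicative:", fit_models(R))
```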
Finite-Length Line Source Superposition Model (FLLSSM)
NASA Astrophysics Data System (ADS)
1980-03-01
A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies were represented as finite-length line sources in a continuous medium. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined, and the computer code FLLSSM, which performs the required numerical integrations and superposition operations, is described.
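A sketch of the underlying physics: a continuous point source in an infinite conducting medium produces a temperature rise q/(4*pi*k*R) * erfc(R/(2*sqrt(alpha*t))); integrating point sources along a finite vertical segment gives a finite-length line source, and linearity lets canister arrays superpose. The fragment below uses invented repository numbers and a constant source strength; decaying waste heat, as in the actual code, would require an additional time convolution.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def fls_rise(xp, yp, zp, xc, yc, L, q, k, alpha, t):
    """Temperature rise at (xp, yp, zp) from a finite vertical line source
    of length L at (xc, yc): continuous point sources integrated along z.
    q is a constant linear source strength (W/m); values are illustrative."""
    rho2 = (xp - xc) ** 2 + (yp - yc) ** 2
    def kernel(zeta):
        R = np.sqrt(rho2 + (zp - zeta) ** 2)
        return erfc(R / (2.0 * np.sqrt(alpha * t))) / R
    val, _ = quad(kernel, 0.0, L)
    return q / (4.0 * np.pi * k) * val

# superpose a 3 x 3 canister array (hypothetical repository numbers, SI units)
k, alpha, L, q, t = 2.5, 1.1e-6, 3.0, 500.0, 3.15e8   # t ~ 10 years
spots = [(i * 10.0, j * 10.0) for i in range(3) for j in range(3)]
dT = sum(fls_rise(5.0, 5.0, 1.5, xc, yc, L, q, k, alpha, t)
         for xc, yc in spots)
print(f"superposed temperature rise: {dT:.2f} K")
```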
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faulds, James E.; Hinz, Nicholas H.; Coolbaugh, Mark F.
We have undertaken an integrated geologic, geochemical, and geophysical study of a broad 240-km-wide, 400-km-long transect stretching from west-central to eastern Nevada in the Great Basin region of the western USA. The main goal of this study is to produce a comprehensive geothermal potential map that incorporates up to 11 parameters and identifies geothermal play fairways that represent potential blind or hidden geothermal systems. Our new geothermal potential map incorporates: 1) heat flow; 2) geochemistry from springs and wells; 3) structural setting; 4) recency of faulting; 5) slip rates on Quaternary faults; 6) regional strain rate; 7) slip and dilation tendency on Quaternary faults; 8) seismologic data; 9) gravity data; 10) magnetotelluric data (where available); and 11) seismic reflection data (primarily from the Carson Sink and Steptoe basins). The transect is respectively anchored on its western and eastern ends by regional 3D modeling of the Carson Sink and Steptoe basins, which will provide more detailed geothermal potential maps of these two promising areas. To date, geological, geochemical, and geophysical data sets have been assembled into an ArcGIS platform and combined into a preliminary predictive geothermal play fairway model using various statistical techniques. The fairway model consists of the following components, each of which is represented in grid-cell format in ArcGIS and combined using specified weights and mathematical operators: 1) structural component of permeability; 2) regional-scale component of permeability; 3) combined permeability; and 4) heat source model. The preliminary model demonstrates that the multiple data sets can be successfully combined into a comprehensive favorability map. An initial evaluation using known geothermal systems as benchmarks to test interpretations indicates that the preliminary modeling has done a good job of assigning relative ranks of geothermal potential. However, a major challenge is defining logical relative rankings of each parameter and how best to combine the multiple data sets into the geothermal potential/permeability map. Ongoing feedback and data analysis are being used to revise the grouping and weighting of some parameters in order to develop a more robust, optimized final model. The final product will incorporate more parameters into a geothermal potential map than any previous effort in the region and may serve as a prototype for developing comprehensive geothermal potential maps for other regions.
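In GIS terms, a fairway model of this kind is a cell-by-cell algebra over co-registered evidence grids. The toy fragment below mimics the stated structure (weighted structural and regional permeability components combined with a heat-source grid) using random layers; the weights and the multiplicative combination rule are illustrative choices, not the study's actual values or operators.

```python
import numpy as np

rng = np.random.default_rng(6)
shape = (100, 100)
# evidence layers normalized to [0, 1]; real inputs would be gridded
# heat flow, structural-setting scores, fault slip/dilation tendency, etc.
structural = rng.random(shape)   # structural component of permeability
regional = rng.random(shape)     # regional-scale component (strain rate)
heat = rng.random(shape)         # heat source model

permeability = 0.7 * structural + 0.3 * regional   # weights illustrative
fairway = permeability * heat    # multiplicative AND of the two components
ij = np.unravel_index(fairway.argmax(), shape)
print(f"most favorable cell: {ij}, score {fairway.max():.2f}")
```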
Deep ensemble learning of sparse regression models for brain disease diagnosis.
Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2017-04-01
Recent studies on brain imaging analysis witnessed the core roles of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data but with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have been making great successes by outperforming the state-of-the-art performances in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with different values of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set; thereby they have different powers to predict the response values, i.e., clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which thus we call 'Deep Ensemble Sparse Regression Network.' To our best knowledge, this is the first work that combines sparse regression models with deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared with the previous studies on the ADNI cohort in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
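The two-stage structure is easy to prototype: several Lasso models, one per regularization value, produce target-level scores that become the input representation of a downstream classifier. In the sketch below (Python/scikit-learn, synthetic data), a logistic model stands in for the paper's convolutional network, so this shows the data flow only, not the reported performance.

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: 200 subjects x 500 imaging features
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 500))
y = (X[:, :5].sum(axis=1) + rng.normal(0, 0.5, 200) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# stage 1: multiple sparse regressors, one per regularization value;
# each selects a different feature subset and yields a target-level score
alphas = np.logspace(-2, 0, 10)
models = [Lasso(alpha=a).fit(X_tr, y_tr) for a in alphas]
Z_tr = np.column_stack([m.predict(X_tr) for m in models])
Z_te = np.column_stack([m.predict(X_te) for m in models])

# stage 2: the paper feeds these representations to a CNN; a logistic
# model is used here as a minimal stand-in for the ensemble combiner
clf = LogisticRegression().fit(Z_tr, y_tr)
print(f"test accuracy: {clf.score(Z_te, y_te):.2f}")
```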
A ternary age-mixing model to explain contaminant occurrence in a deep supply well
Jurgens, Bryant; Bexfield, Laura M.; Eberts, Sandra
2014-01-01
The age distribution of water from a public-supply well in a deep alluvial aquifer was estimated and used to help explain arsenic variability in the water. The age distribution was computed using a ternary mixing model that combines three lumped parameter models of advection-dispersion transport of environmental tracers, which represent relatively recent recharge (post-1950s) containing volatile organic compounds (VOCs), old intermediate-depth groundwater (about 6500 years) that was free of drinking-water contaminants, and very old, deep groundwater (more than 21,000 years) containing arsenic above the USEPA maximum contaminant level of 10 µg/L. The ternary mixing model was calibrated to tritium, chlorofluorocarbon-113, and carbon-14 (14C) concentrations that were measured in water samples collected on multiple occasions. Variability in atmospheric 14C over the past 50,000 years was accounted for in the interpretation of 14C as a tracer. Calibrated ternary models indicate the fraction of deep, very old groundwater entering the well varies substantially throughout the year and was highest following long periods of nonoperation or infrequent operation, which occurred during the winter season when water demand was low. The fraction of young water entering the well was about 11% during the summer when pumping peaked to meet water demand and about 3% to 6% during the winter months. This paper demonstrates how collection of multiple tracers can be used in combination with simplified models of fluid flow to estimate the age distribution and thus the fraction of contaminated groundwater reaching a supply well under different pumping conditions.
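Stripped of the lumped-parameter age machinery, the ternary calibration is a small constrained inversion: find non-negative mixing fractions that sum to one and reproduce the measured tracers. The sketch below uses invented end-member signatures chosen so that roughly 11% young water fits the observations, echoing the summer estimate above; the paper itself calibrates full age distributions, not static end members.

```python
import numpy as np
from scipy.optimize import lsq_linear

# End-member tracer signatures (columns: young, intermediate, old water;
# rows: tritium in TU, CFC-113 in pg/kg, carbon-14 in pmC); values invented
E = np.array([[6.0,   0.1,  0.0],
              [40.0,  0.5,  0.0],
              [105.0, 85.0, 15.0]])
obs = np.array([0.7, 4.5, 35.0])   # measured concentrations in the well

# weight the sum-to-one constraint heavily and solve with 0 <= f <= 1
A = np.vstack([E, 1e3 * np.ones(3)])
b = np.append(obs, 1e3)
res = lsq_linear(A, b, bounds=(0, 1))
f = res.x / res.x.sum()
print(dict(zip(["young", "intermediate", "old"], np.round(f, 3))))
```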
NASA Astrophysics Data System (ADS)
Fernández-Manso, O.; Fernández-Manso, A.; Quintano, C.
2014-09-01
Aboveground biomass (AGB) estimation from optical satellite data is usually based on regression models of original or synthetic bands. To overcome the poor relation between AGB and spectral bands due to mixed pixels when a medium-spatial-resolution sensor is considered, we propose basing the AGB estimation on fraction images from Linear Spectral Mixture Analysis (LSMA). Our study area is a managed Mediterranean pine woodland (Pinus pinaster Ait.) in central Spain. A total of 1033 circular field plots were used to estimate AGB from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) optical data. We applied Pearson correlation statistics and stepwise multiple regression to identify suitable predictors from the set of variables of original bands, fraction imagery, Normalized Difference Vegetation Index, and Tasselled Cap components. Four linear models and one nonlinear model were tested. A linear combination of ASTER band 2 (red, 0.630-0.690 μm), band 8 (shortwave infrared 5, 2.295-2.365 μm), and the green vegetation fraction (from LSMA) was the best AGB predictor (adjusted R2 = 0.632; the cross-validated root-mean-squared error of estimated AGB was 13.3 Mg ha-1, or 37.7%), outperforming other combinations of the above-cited independent variables. The results indicated that using ASTER fraction images in regression models improves AGB estimation in Mediterranean pine forests. The spatial distribution of the estimated AGB, based on a multiple linear regression model, may be used as baseline information for forest managers in future studies, such as quantifying the regional carbon budget, fuel accumulation, or monitoring of management practices.
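The LSMA step solves, per pixel, a small constrained least-squares problem for endmember fractions, and the regression step then uses the green-vegetation fraction alongside raw bands. A sketch follows; the endmember spectra, pixel values, and regression coefficients are all hypothetical, chosen only to mirror the form of the best model reported above (red band, SWIR band, GV fraction).

```python
import numpy as np
from scipy.optimize import nnls

# Endmember spectra (rows: 4 bands; columns: green vegetation, soil, shade);
# reflectance values are invented for illustration
E = np.array([[0.04, 0.25, 0.01],
              [0.45, 0.30, 0.02],
              [0.30, 0.40, 0.01],
              [0.20, 0.35, 0.01]])

def unmix(pixel):
    """Constrained linear unmixing: non-negative fractions,
    renormalized to sum to one (a common LSMA convention)."""
    f, _ = nnls(E, pixel)
    return f / f.sum()

pixel = np.array([0.18, 0.33, 0.29, 0.24])   # a mixed ASTER-like pixel
gv, soil, shade = unmix(pixel)

# AGB model of the same form as the best predictor: red band, SWIR band,
# and the GV fraction in a linear combination (coefficients hypothetical)
b0, b1, b2, b3 = 5.0, -60.0, -40.0, 55.0
agb = b0 + b1 * 0.05 + b2 * 0.12 + b3 * gv   # Mg/ha
print(f"GV fraction {gv:.2f}, AGB estimate {agb:.1f} Mg/ha")
```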
Deep ensemble learning of sparse regression models for brain disease diagnosis
Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang
2018-01-01
Recent studies on brain imaging analysis have witnessed the core role of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Among various machine learning techniques, sparse regression models have proved effective at handling high-dimensional data with a small number of training samples, which is common in medical problems. Meanwhile, deep learning methods have achieved great success by outperforming state-of-the-art methods in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each with a different value of the regularization control parameter. Our multiple sparse regression models thus potentially select different feature subsets from the original feature set and thereby have different power to predict the response values, i.e., the clinical label and clinical scores in our work. Regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which we call a 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work to combine sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared them with previous studies on the ADNI cohort in the literature. PMID:28167394
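The two-stage idea can be sketched briefly: train several sparse regression models across a grid of regularization strengths, then feed their outputs, as target-level representations, to a second-stage learner. In this sketch a plain logistic regression stands in for the paper's convolutional network, and the data are synthetic, not ADNI.

```python
# Sketch of an ensemble of sparse regression models feeding a second-stage
# classifier (stand-in for the paper's CNN).
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 500))        # high-dimensional features, few samples
w = np.zeros(500)
w[:10] = 1.0                           # only 10 informative features
y = (X @ w + rng.normal(0, 1, 200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: sparse models across a grid of regularization values, each
# potentially selecting a different feature subset.
alphas = [0.01, 0.05, 0.1, 0.2, 0.5]
stage1 = [Lasso(alpha=a).fit(X_tr, y_tr) for a in alphas]
Z_tr = np.column_stack([m.predict(X_tr) for m in stage1])
Z_te = np.column_stack([m.predict(X_te) for m in stage1])

# Stage 2: combine the target-level representations for the final decision.
clf = LogisticRegression().fit(Z_tr, y_tr)
print("test accuracy:", clf.score(Z_te, y_te))
```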
A Ternary Age-Mixing Model to Explain Contaminant Occurrence in a Deep Supply Well
Jurgens, Bryant C; Bexfield, Laura M; Eberts, Sandra M
2014-01-01
The age distribution of water from a public-supply well in a deep alluvial aquifer was estimated and used to help explain arsenic variability in the water. The age distribution was computed using a ternary mixing model that combines three lumped parameter models of advection-dispersion transport of environmental tracers, which represent relatively recent recharge (post-1950s) containing volatile organic compounds (VOCs), old, intermediate-depth groundwater (about 6500 years) that was free of drinking-water contaminants, and very old, deep groundwater (more than 21,000 years) containing arsenic above the USEPA maximum contaminant level of 10 µg/L. The ternary mixing model was calibrated to tritium, chlorofluorocarbon-113, and carbon-14 (14C) concentrations that were measured in water samples collected on multiple occasions. Variability in atmospheric 14C over the past 50,000 years was accounted for in the interpretation of 14C as a tracer. Calibrated ternary models indicate the fraction of deep, very old groundwater entering the well varies substantially throughout the year and was highest following long periods of nonoperation or infrequent operation, which occurred during the winter season when water demand was low. The fraction of young water entering the well was about 11% during the summer when pumping peaked to meet water demand and about 3% to 6% during the winter months. This paper demonstrates how collection of multiple tracers can be used in combination with simplified models of fluid flow to estimate the age distribution and thus fraction of contaminated groundwater reaching a supply well under different pumping conditions. PMID:24597520
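The calibration step can be illustrated with a short sketch: three mixing fractions, bounded in [0, 1] and constrained to sum to one, are fit so that the mixed tracer concentrations match the well measurements. All concentrations below are hypothetical placeholders; in the study the end-member values come from lumped advection-dispersion models and the observations from field samples.

```python
# Sketch of calibrating a ternary mixing model to multiple tracers.
import numpy as np
from scipy.optimize import minimize

# Modelled end-member concentrations for each tracer (rows: tritium [TU],
# CFC-113 [pg/kg], 14C [pmC]); columns: young, intermediate, old water.
C = np.array([
    [5.0, 0.0, 0.0],     # tritium: only post-1950s recharge carries tritium
    [60.0, 0.0, 0.0],    # CFC-113: likewise anthropogenic, young water only
    [100.0, 45.0, 8.0],  # 14C: decreases with age in all three components
])

c_obs = np.array([0.6, 7.0, 42.0])  # hypothetical well-water measurements

def misfit(f):
    """Sum of squared relative residuals between modelled mix and observations."""
    c_mix = C @ f
    return np.sum(((c_mix - c_obs) / c_obs) ** 2)

# Fractions are bounded in [0, 1] and must sum to 1.
res = minimize(
    misfit,
    x0=np.array([0.1, 0.4, 0.5]),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * 3,
    constraints={"type": "eq", "fun": lambda f: np.sum(f) - 1.0},
)
f_young, f_mid, f_old = res.x
print(f"young: {f_young:.2f}, intermediate: {f_mid:.2f}, old: {f_old:.2f}")
```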
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretical models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines analytical least-squares solutions with a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
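A toy sketch of the mixed linear-non-linear idea: surface velocities v(x) = (s/π) arctan(x/D) depend linearly on slip rate s and non-linearly on locking depth D, so a Metropolis sampler can walk over D while s is solved analytically by least squares at each step. The paper treats the linear parameters and data weights in a fuller Bayesian way; this is only a profile-likelihood illustration with synthetic data.

```python
# Toy mixed linear-non-linear inversion: Metropolis over D, analytical
# least squares for s at each proposed D.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-100, 100, 50)           # km from the fault
s_true, D_true, sigma = 30.0, 15.0, 0.5  # mm/yr, km, mm/yr
d = (s_true / np.pi) * np.arctan(x / D_true) + rng.normal(0, sigma, x.size)

def loglike(D):
    g = np.arctan(x / D) / np.pi         # one-column design "matrix"
    s_hat = (g @ d) / (g @ g)            # analytical least-squares slip rate
    r = d - s_hat * g
    return -0.5 * np.sum(r**2) / sigma**2, s_hat

# Metropolis random walk over the non-linear parameter D.
D = 10.0
ll, _ = loglike(D)
samples = []
for _ in range(5000):
    D_prop = D + rng.normal(0, 1.0)
    if D_prop > 0:
        ll_prop, _ = loglike(D_prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            D, ll = D_prop, ll_prop
    samples.append(D)
print("posterior mean D ≈", np.mean(samples[1000:]), "km")
```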
Code of Federal Regulations, 2011 CFR
2011-01-01
... DC area and for COLA areas with multiple survey areas? 591.216 Section 591.216 Administrative... combine survey data for the DC area and for COLA areas with multiple survey areas? (a) Washington, DC... equal weights to compute an overall average by item for the DC area. (b) COLA areas with multiple survey...
Code of Federal Regulations, 2014 CFR
2014-01-01
... DC area and for COLA areas with multiple survey areas? 591.216 Section 591.216 Administrative... combine survey data for the DC area and for COLA areas with multiple survey areas? (a) Washington, DC... equal weights to compute an overall average by item for the DC area. (b) COLA areas with multiple survey...
Code of Federal Regulations, 2010 CFR
2010-01-01
... DC area and for COLA areas with multiple survey areas? 591.216 Section 591.216 Administrative... combine survey data for the DC area and for COLA areas with multiple survey areas? (a) Washington, DC... equal weights to compute an overall average by item for the DC area. (b) COLA areas with multiple survey...
Code of Federal Regulations, 2013 CFR
2013-01-01
... DC area and for COLA areas with multiple survey areas? 591.216 Section 591.216 Administrative... combine survey data for the DC area and for COLA areas with multiple survey areas? (a) Washington, DC... equal weights to compute an overall average by item for the DC area. (b) COLA areas with multiple survey...
Code of Federal Regulations, 2012 CFR
2012-01-01
... DC area and for COLA areas with multiple survey areas? 591.216 Section 591.216 Administrative... combine survey data for the DC area and for COLA areas with multiple survey areas? (a) Washington, DC... equal weights to compute an overall average by item for the DC area. (b) COLA areas with multiple survey...
Design of pulse waveform for waveform division multiple access UWB wireless communication system.
Yin, Zhendong; Wang, Zhirui; Liu, Xiaohui; Wu, Zhilu
2014-01-01
A new multiple access scheme, Waveform Division Multiple Access (WDMA) based on orthogonal wavelet functions, is presented. After studying the correlation properties of different categories of single wavelet functions, the one with the best correlation property is chosen as the foundation for the combined waveform. In the communication system, each user is assigned a different combined orthogonal waveform. As demonstrated by simulation, the combined waveform is more suitable than a single wavelet function as a communication medium in a WDMA system. Owing to the excellent orthogonality, the bit error rate (BER) of multiple users with combined waveforms is very close to that of a single user in a synchronous system; that is, multiple access interference (MAI) is almost eliminated. Furthermore, even in an asynchronous system without multiuser detection after matched filters, the result remains satisfactory when the third combination mode described in the study is used.
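A minimal sketch of the WDMA idea: each user's combined waveform is a distinct orthonormal combination of wavelet-like basis pulses, so matched filtering at the receiver separates users with (ideally) zero multiple-access interference. Discrete Haar pulses stand in here for the wavelet functions studied in the paper; the combination matrix is a hypothetical choice.

```python
# Orthogonal combined waveforms from Haar-like basis pulses.
import numpy as np

N = 64  # samples per pulse period

def haar(scale, shift, n=N):
    """Discrete Haar pulse: +1 on the first half of its support, -1 on the second."""
    w = np.zeros(n)
    width = n // scale
    a = shift * width
    w[a:a + width // 2] = 1.0
    w[a + width // 2:a + width] = -1.0
    return w / np.linalg.norm(w)

# Orthonormal basis of Haar pulses at one scale, different shifts.
basis = np.array([haar(8, k) for k in range(8)])

# Each user combines the basis with a distinct row of an orthogonal matrix,
# so the combined waveforms remain mutually orthogonal.
mix = np.linalg.qr(np.random.default_rng(3).normal(size=(8, 8)))[0]
waveforms = mix @ basis

# Zero-lag cross-correlations: ~identity, i.e. no multiple-access interference.
print(np.round(waveforms @ waveforms.T, 6))
```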
Improving Flash Flood Prediction in Multiple Environments
NASA Astrophysics Data System (ADS)
Broxton, P. D.; Troch, P. A.; Schaffner, M.; Unkrich, C.; Goodrich, D.; Wagener, T.; Yatheendradas, S.
2009-12-01
Flash flooding is a major concern in many fast-responding headwater catchments. There are many efforts to model and predict these flood events, though it is not currently possible to adequately predict the nature of flash flood events with a single model; furthermore, many of these efforts do not even consider snow, which can, by itself or in combination with rainfall, cause destructive floods. The current research is aimed at broadening the applicability of flash flood modeling. Specifically, we will take a state-of-the-art flash flood model that is designed to work with warm-season precipitation in arid environments, the KINematic runoff and EROSion model (KINEROS2), and combine it with a continuous subsurface flow model and an energy balance snow model. This should improve its predictive capacity in humid environments where lateral subsurface flow significantly contributes to streamflow, and it will make possible the prediction of flooding events that involve rain-on-snow or rapid snowmelt. By modeling changes in the hydrologic state of a catchment before a flood begins, we can also better understand the factors, or combinations of factors, that are necessary to produce large floods. Broadening the applicability of an already state-of-the-art flash flood model such as KINEROS2 is logical because flash floods can occur in all types of environments, and it may lead to better predictions, which are necessary to preserve life and property.
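An illustrative sketch (not KINEROS2 or the authors' snow model) of why rain-on-snow matters: the water available to drive runoff is rainfall plus snowmelt, with melt here taken from a simple energy balance, M = max(0, Q_net) / (ρ_w L_f). All fluxes below are hypothetical.

```python
# Rain-on-snow water input from a simple energy-balance melt term.
import numpy as np

RHO_W = 1000.0  # kg/m^3, density of water
L_F = 3.34e5    # J/kg, latent heat of fusion

def melt_rate(q_net_wm2):
    """Melt rate (m/s water equivalent) from net energy flux (W/m^2)."""
    return np.maximum(0.0, q_net_wm2) / (RHO_W * L_F)

hours = np.arange(24)
rain = np.where((hours > 6) & (hours < 18), 2e-6, 0.0)  # m/s (~7 mm/h storm)
q_net = 200.0 * np.sin(np.pi * hours / 24)              # W/m^2, warm storm day

water_input = rain + melt_rate(q_net)  # m/s fed to the runoff model
print(f"daily total water input: {water_input.sum() * 3600 * 1000:.1f} mm")
```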