Using Structural Equation Modeling To Fit Models Incorporating Principal Components.
ERIC Educational Resources Information Center
Dolan, Conor; Bechger, Timo; Molenaar, Peter
1999-01-01
Considers models incorporating principal components from the perspective of structural equation modeling. These models include the following: (1) the principal-component analysis of patterned matrices; (2) multiple analysis of variance based on principal components; and (3) multigroup principal-components analysis. Discusses fitting these models…
CHALLENGES OF PROCESSING BIOLOGICAL DATA FOR INCORPORATION INTO A LAKE EUTROPHICATION MODEL
A eutrophication model is in development as part of the Lake Michigan Mass Balance Project (LMMBP). Successful development and calibration of this model required the processing and incorporation of extensive biological data. Data were drawn from multiple sources, including nutrie...
Siddique, Juned; Harel, Ofer; Crespi, Catherine M.; Hedeker, Donald
2014-01-01
The true missing data mechanism is never known in practice. We present a method for generating multiple imputations for binary variables that formally incorporates missing data mechanism uncertainty. Imputations are generated from a distribution of imputation models rather than a single model, with the distribution reflecting subjective notions of missing data mechanism uncertainty. Parameter estimates and standard errors are obtained using rules for nested multiple imputation. Using simulation, we investigate the impact of missing data mechanism uncertainty on post-imputation inferences and show that incorporating this uncertainty can increase the coverage of parameter estimates. We apply our method to a longitudinal smoking cessation trial where nonignorably missing data were a concern. Our method provides a simple approach for formalizing subjective notions regarding nonresponse and can be implemented using existing imputation software. PMID:24634315
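The imputation-distribution idea above can be made concrete in a few lines. The following is a minimal sketch, not the authors' implementation: it assumes a single covariate, fits the imputation model with a bare-bones Newton solver, and treats the MNAR log-odds offset `delta` and its normal prior as hypothetical stand-ins for the subjective mechanism uncertainty.

```python
# Sketch of nested multiple imputation for a binary variable under
# missing-not-at-random (MNAR) uncertainty: each "outer" imputation draws
# a log-odds offset delta from a prior over missing-data mechanisms, then
# "inner" imputations are generated given that delta.
import numpy as np

rng = np.random.default_rng(0)

def impute_binary_mnar(y, x, deltas, n_inner, rng):
    """y: binary outcome with np.nan for missing; x: covariate.
    deltas: draws from a prior over MNAR log-odds offsets."""
    obs = ~np.isnan(y)
    # Crude logistic fit on observed cases via Newton's method
    X = np.column_stack([np.ones(obs.sum()), x[obs]])
    beta = np.zeros(2)
    for _ in range(25):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y[obs] - p))
    imputations = []
    Xmis = np.column_stack([np.ones((~obs).sum()), x[~obs]])
    for d in deltas:                      # outer: mechanism uncertainty
        eta = Xmis @ beta + d             # shift imputation log-odds by delta
        p_mis = 1 / (1 + np.exp(-eta))
        for _ in range(n_inner):          # inner: ordinary MI draws
            y_imp = y.copy()
            y_imp[~obs] = rng.binomial(1, p_mis)
            imputations.append(y_imp)
    return imputations

x = rng.normal(size=200)
y = (rng.random(200) < 1 / (1 + np.exp(-x))).astype(float)
y[rng.random(200) < 0.3] = np.nan         # impose 30% missingness
imps = impute_binary_mnar(y, x, deltas=rng.normal(0, 0.5, size=5),
                          n_inner=3, rng=rng)
```

Per the abstract, estimates from the resulting doubly indexed imputations would then be combined with nested multiple imputation rules rather than ordinary Rubin's rules.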
Incorporation of multiple cloud layers for ultraviolet radiation modeling studies
NASA Technical Reports Server (NTRS)
Charache, Darryl H.; Abreu, Vincent J.; Kuhn, William R.; Skinner, Wilbert R.
1994-01-01
Cloud data sets compiled from surface observations were used to develop an algorithm for incorporating multiple cloud layers into a multiple-scattering radiative transfer model. Aerosol extinction and ozone data sets were also incorporated to estimate the seasonally averaged ultraviolet (UV) flux reaching the surface of the Earth in the Detroit, Michigan, region for the years 1979-1991, corresponding to Total Ozone Mapping Spectrometer (TOMS) version 6 ozone observations. The calculated UV spectrum was convolved with an erythema action spectrum to estimate the effective biological exposure for erythema. Calculations show that decreasing the total column density of ozone by 1% leads to an increase in erythemal exposure by approximately 1.1-1.3%, in good agreement with previous studies. A comparison of the UV radiation budget at the surface between a single cloud layer method and a multiple cloud layer method presented here is discussed, along with limitations of each technique. With improved parameterization of cloud properties, and as knowledge of biological effects of UV exposure increase, inclusion of multiple cloud layers may be important in accurately determining the biologically effective UV budget at the surface of the Earth.
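The quoted ozone sensitivity matches the standard radiation amplification factor (RAF) relation for erythemally weighted irradiance; assuming a power-law dependence on total column ozone Ω:

```latex
\[
\frac{E}{E_0} = \left(\frac{\Omega}{\Omega_0}\right)^{-\mathrm{RAF}}
\quad\Longrightarrow\quad
\frac{\Delta E}{E_0} \approx -\,\mathrm{RAF}\,\frac{\Delta\Omega}{\Omega_0},
\]
```

so an RAF of 1.1-1.3 reproduces the reported 1.1-1.3% increase in erythemal exposure per 1% decrease in column ozone.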
Goal-Directed Aiming: Two Components but Multiple Processes
ERIC Educational Resources Information Center
Elliott, Digby; Hansen, Steve; Grierson, Lawrence E. M.; Lyons, James; Bennett, Simon J.; Hayes, Spencer J.
2010-01-01
This article reviews the behavioral literature on the control of goal-directed aiming and presents a multiple-process model of limb control. The model builds on recent variants of Woodworth's (1899) two-component model of speed-accuracy relations in voluntary movement and incorporates ideas about dynamic online limb control based on prior…
ERIC Educational Resources Information Center
Abes, Elisa S.; Jones, Susan R.; McEwen, Marylu K.
2007-01-01
We reconceptualize Jones and McEwen's (2000) model of multiple dimensions of identity by incorporating meaning making, based on the results of Abes and Jones's (2004) study of lesbian college students. Narratives of three students who utilize different orders of Kegan's (1994) meaning making (formulaic, transitional, and foundational, as described…
Multiple outcomes are often measured on each experimental unit in toxicology experiments. These multiple observations typically imply the existence of correlation between endpoints, and a statistical analysis that incorporates it may result in improved inference. When both disc...
Varughese, Eunice A.; Brinkman, Nichole E; Anneken, Emily M; Cashdollar, Jennifer S; Fout, G. Shay; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.; Keely, Scott P
2017-01-01
incorporated into a Bayesian model to more accurately determine viral load in both source and treated water. Results of the Bayesian model indicated that viruses are present in source water and treated water. By using a Bayesian framework that incorporates inhibition, as well as many other parameters that affect viral detection, this study offers an approach for more accurately estimating the occurrence of viral pathogens in environmental waters.
Dynamic electrical impedance imaging with the interacting multiple model scheme.
Kim, Kyung Youn; Kim, Bong Seok; Kim, Min Chan; Kim, Sin; Isaacson, David; Newell, Jonathan C
2005-04-01
In this paper, an effective dynamical EIT imaging scheme is presented for on-line monitoring of the abruptly changing resistivity distribution inside the object, based on the interacting multiple model (IMM) algorithm. The inverse problem is treated as a stochastic nonlinear state estimation problem with the time-varying resistivity (state) being estimated on-line with the aid of the IMM algorithm. In the design of the IMM algorithm multiple models with different process noise covariance are incorporated to reduce the modeling uncertainty. Simulations and phantom experiments are provided to illustrate the proposed algorithm.
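For readers unfamiliar with the IMM cycle, the following scalar sketch shows the mixing, per-model Kalman update, and model-probability steps; the two process-noise values and all other numbers are illustrative, not taken from the paper's EIT reconstruction.

```python
# Hedged sketch of one IMM step for a scalar state with two
# process-noise hypotheses (low vs. high, for abrupt changes).
import numpy as np

Q = [1e-4, 1e-1]                     # low/high process noise models
R = 1e-2                             # measurement noise
P_trans = np.array([[0.95, 0.05],
                    [0.05, 0.95]])   # model transition probabilities
mu = np.array([0.5, 0.5])            # model probabilities
x = np.zeros(2); P = np.ones(2)      # per-model mean and variance

def imm_step(z, x, P, mu):
    # 1) Mixing: blend the per-model estimates
    c = P_trans.T @ mu
    w = (P_trans * mu[:, None]) / c          # w[i, j] = P(model i | model j)
    x0 = w.T @ x
    P0 = np.array([np.sum(w[:, j] * (P + (x - x0[j])**2)) for j in range(2)])
    # 2) Per-model Kalman update (random-walk dynamics)
    x_new, P_new, lik = np.empty(2), np.empty(2), np.empty(2)
    for j in range(2):
        Pp = P0[j] + Q[j]                    # predict
        S = Pp + R
        K = Pp / S
        x_new[j] = x0[j] + K * (z - x0[j])
        P_new[j] = (1 - K) * Pp
        lik[j] = np.exp(-0.5 * (z - x0[j])**2 / S) / np.sqrt(2 * np.pi * S)
    # 3) Model probability update and combined estimate
    mu_new = lik * c
    mu_new /= mu_new.sum()
    return x_new, P_new, mu_new, mu_new @ x_new

for z in [0.0, 0.01, 0.5, 0.52]:             # abrupt change at third sample
    x, P, mu, est = imm_step(z, x, P, mu)
```

The high-noise model's probability rises when the residuals jump, which is what lets the filter track abrupt resistivity changes without inflating noise during quiescent periods.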
NASA Technical Reports Server (NTRS)
Richey, Edward, III
1995-01-01
This research aims to develop the methods and understanding needed to incorporate time- and loading-variable-dependent environmental effects on fatigue crack propagation (FCP) into computerized fatigue life prediction codes such as NASA FLAGRO (NASGRO). In particular, the effect of loading frequency on FCP rates in alpha + beta titanium alloys exposed to an aqueous chloride solution is investigated. The approach couples empirical modeling of environmental FCP with corrosion fatigue experiments. Three different computer models have been developed and incorporated in the DOS executable program, UVAFAS. A multiple power law model is available, and can fit a set of fatigue data to a multiple power law equation. A model has also been developed which implements the Wei and Landes linear superposition model, as well as an interpolative model which can be utilized to interpolate trends in fatigue behavior based on changes in loading characteristics (stress ratio, frequency, and hold times).
Quantitative Predictive Models for Systemic Toxicity (SOT)
Models to identify systemic and specific target organ toxicity were developed to help transition the field of toxicology towards computational models. By leveraging multiple data sources to incorporate read-across and machine learning approaches, a quantitative model of systemic ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urrego-Blanco, Jorge R.; Hunke, Elizabeth C.; Urban, Nathan M.
Here, we implement a variance-based distance metric (Dn) to objectively assess the skill of sea ice models when multiple output variables or uncertainties in both model predictions and observations need to be considered. The metric compares observation and model data pairs on common spatial and temporal grids, improving upon highly aggregated metrics (e.g., total sea ice extent or volume) by capturing the spatial character of model skill. The Dn metric is a gamma-distributed statistic that is more general than the χ² statistic commonly used to assess model fit, which requires the assumption that the model is unbiased and can only incorporate observational error in the analysis. The Dn statistic does not assume that the model is unbiased, and allows the incorporation of multiple observational data sets for the same variable, and simultaneously for different variables, along with different types of variances that can characterize uncertainties in both observations and the model. This approach represents a step toward establishing a systematic framework for probabilistic validation of sea ice models. The methodology is also useful for model tuning by using the Dn metric as a cost function and incorporating model parametric uncertainty as part of a scheme to optimize model functionality. We apply this approach to evaluate different configurations of the standalone Los Alamos sea ice model (CICE), encompassing the parametric uncertainty in the model, and to find new sets of model configurations that produce better agreement than previous configurations between model and observational estimates of sea ice concentration and thickness.
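The abstract above describes, but does not define, the Dn metric. One variance-normalized form consistent with that description (reducing to a χ²-type statistic when model variance is dropped and the model is assumed unbiased) would be:

```latex
\[
D_n \;=\; \sum_{i=1}^{n} \frac{\left(m_i - o_i\right)^{2}}{\sigma_{m,i}^{2} + \sigma_{o,i}^{2}},
\]
```

where m_i and o_i are model and observed values on the common grid and the σ² terms carry model and observational variances. This is a plausible reconstruction for illustration, not necessarily the authors' exact definition.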
Optimal design of compact and connected nature reserves for multiple species.
Wang, Yicheng; Önal, Hayri
2016-04-01
When designing a conservation reserve system for multiple species, spatial attributes of the reserves must be taken into account at the species level. The existing optimal reserve design literature considers either one spatial attribute or, when multiple attributes are considered, restricts the analysis to one species. We built a linear integer programming model that incorporates compactness and connectivity of the landscape reserved for multiple species. The model identifies multiple reserves that each serve a subset of target species with a specified coverage probability threshold to ensure the species' long-term survival in the reserve, and each target species is covered (protected) with another probability threshold at the reserve system level. We modeled compactness by minimizing the total distance between selected sites and central sites, and we modeled connectivity of a selected site to its designated central site by selecting at least one of its adjacent sites that has a nearer distance to the central site. We considered both structural distances and functional distances that incorporated site quality between sites. We tested the model using randomly generated data on 2 species, one a ground species that required structural connectivity and the other an avian species that required functional connectivity. We applied the model to 10 bird species listed as endangered by the state of Illinois (U.S.A.). Spatial coherence and selection cost of the reserves differed substantially depending on the weights assigned to these 2 criteria. The model can be used to design a reserve system for multiple species, especially species whose habitats are far apart, in which case multiple disjunct but compact and connected reserves are advantageous. The model can be modified to increase or decrease the distance between reserves to reduce or promote population connectivity. © 2015 Society for Conservation Biology.
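A toy version of this formulation, with hypothetical habitat data, can be written as a small integer program (here with the open-source PuLP package); the simple coverage constraints stand in for the paper's probabilistic coverage thresholds, and the last constraint family encodes the adjacent-nearer-site connectivity rule described above.

```python
# Toy reserve-design ILP on a 4x4 grid with one fixed central site:
# minimize total distance of selected sites to the center (compactness),
# require a minimum number of suitable sites per species (coverage), and
# force every selected site to have a selected neighbor strictly nearer
# to the center (connectivity).
import itertools
import pulp

n = 4
sites = list(itertools.product(range(n), range(n)))
center = (1, 1)
dist = {s: abs(s[0]-center[0]) + abs(s[1]-center[1]) for s in sites}
# Hypothetical habitat maps for two species
habitat = {"sp1": {(0,0),(0,1),(1,0),(1,1)}, "sp2": {(2,2),(2,3),(3,2)}}
need = {"sp1": 2, "sp2": 2}          # minimum suitable sites per species

prob = pulp.LpProblem("reserve", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", sites, cat="Binary")
prob += pulp.lpSum(dist[s] * x[s] for s in sites)            # compactness
prob += x[center] == 1
for sp, sset in habitat.items():                             # coverage
    prob += pulp.lpSum(x[s] for s in sset) >= need[sp]
for s in sites:                                              # connectivity
    if s == center:
        continue
    nearer = [a for a in sites
              if abs(a[0]-s[0]) + abs(a[1]-s[1]) == 1 and dist[a] < dist[s]]
    prob += x[s] <= pulp.lpSum(x[a] for a in nearer)

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(sorted(s for s in sites if x[s].value() == 1))
```

The nearer-neighbor constraint guarantees that any selected site can trace a monotonically center-ward path of selected sites, which is the linear trick that keeps the reserve connected without explicit flow variables.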
Double-multiple streamtube model for studying vertical-axis wind turbines
NASA Astrophysics Data System (ADS)
Paraschivoiu, Ion
1988-08-01
This work describes the present state of the art of the double-multiple streamtube method for modeling the Darrieus-type vertical-axis wind turbine (VAWT). Comparisons of the analytical results with other predictions and available experimental data show good agreement. This method, which incorporates dynamic-stall and secondary effects, can be used for generating a suitable aerodynamic-load model for structural design analysis of the Darrieus rotor.
NASA Astrophysics Data System (ADS)
Harkrider, Curtis Jason
2000-08-01
The incorporation of gradient-index (GRIN) material into optical systems offers novel and practical solutions to lens design problems. However, widespread use of gradient-index optics has been limited by poor correlation between gradient-index designs and the refractive index profiles produced by ion exchange between glass and molten salt. Previously, a design-for-manufacture model was introduced that connected the design and fabrication processes through use of diffusion modeling linked with lens design software. This project extends the design-for-manufacture model into a time-varying boundary condition (TVBC) diffusion model. TVBC incorporates the time-dependent phenomenon of melt poisoning and introduces a new index profile control method, multiple-step diffusion. The ions displaced from the glass during the ion exchange fabrication process can reduce the total change in refractive index (Δn). Chemical equilibrium is used to model this melt poisoning process. Equilibrium experiments are performed in a titania silicate glass and chemically analyzed. The equilibrium model is fit to ion concentration data that is used to calculate ion exchange boundary conditions. The boundary conditions are changed purposely to control the refractive index profile in multiple-step TVBC diffusion. The glass sample is alternated between ion exchange with a molten salt bath and annealing. The time of each diffusion step can be used to exert control on the index profile. The TVBC computer model is experimentally verified and incorporated into the design-for-manufacture subroutine that runs in lens design software. The TVBC design-for-manufacture model is useful for fabrication-based tolerance analysis of gradient-index lenses and for the design of manufacturable GRIN lenses. Several optical elements are designed and fabricated using multiple-step diffusion, verifying the accuracy of the model. The strength of the multiple-step diffusion process lies in its versatility. An axicon, imaging lens, and curved radial lens, all with different index profile requirements, are designed out of a single glass composition.
SIMULATING SUB-DECADAL CHANNEL MORPHOLOGIC CHANGE IN EPHEMERAL STREAM NETWORKS
A distributed watershed model was modified to simulate cumulative channel morphologic
change from multiple runoff events in ephemeral stream networks. The model incorporates the general design of the event-based Kinematic Runoff and Erosion Model (KINEROS), which describes t...
Liu, Shuguang; Tan, Zhengxi; Chen, Mingshi; Liu, Jinxun; Wein, Anne; Li, Zhengpeng; Huang, Shengli; Oeding, Jennifer; Young, Claudia; Verma, Shashi B.; Suyker, Andrew E.; Faulkner, Stephen P.
2012-01-01
The General Ensemble Biogeochemical Modeling System (GEMS) was designed to incorporate uncertainty in two ways. First, to account for uncertainties in individual models, it uses multiple site-scale biogeochemical models to perform model simulations. Second, it adopts Monte Carlo ensemble simulations of each simulation unit (one site/pixel or group of sites/pixels with similar biophysical conditions) to incorporate uncertainties and variability (as measured by variances and covariance) of input variables into model simulations. In this chapter, we illustrate the applications of GEMS at the site and regional scales with an emphasis on incorporating agricultural practices. Challenges in modeling soil carbon dynamics and greenhouse gas emissions are also discussed.
Modeling is a useful tool for quantifying ecosystem services and understanding their temporal dynamics. Here we describe a hybrid regional modeling approach for sub-basins of the Calapooia watershed that incorporates both a precipitation-runoff model and an indexed regression mo...
Using Video-Based Modeling to Promote Acquisition of Fundamental Motor Skills
ERIC Educational Resources Information Center
Obrusnikova, Iva; Rattigan, Peter J.
2016-01-01
Video-based modeling is becoming increasingly popular for teaching fundamental motor skills to children in physical education. Two frequently used video-based instructional strategies that incorporate modeling are video prompting (VP) and video modeling (VM). Both strategies have been used across multiple disciplines and populations to teach a…
Generalized multiple kernel learning with data-dependent priors.
Mao, Qi; Tsang, Ivor W; Gao, Shenghua; Wang, Li
2015-06-01
Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views.
Hostetter, Nathan; Gardner, Beth; Evans, Allen F.; Cramer, Bradley M.; Payton, Quinn; Collis, Ken; Roby, Daniel D.
2017-01-01
We developed a state-space mark-recapture-recovery model that incorporates multiple recovery types and state uncertainty to estimate survival of an anadromous fish species. We apply the model to a dataset of out-migrating juvenile steelhead trout (Oncorhynchus mykiss) tagged with passive integrated transponders, recaptured during outmigration, and recovered on bird colonies in the Columbia River basin (2008-2014). Recoveries on bird colonies are often ignored in survival studies because the river reach of mortality is often unknown, which we model as a form of state uncertainty. Median outmigration survival from release to the lower river (river kilometer 729 to 75) ranged from 0.27 to 0.35, depending on year. Recovery probabilities were frequently >0.20 in the first river reach following tagging, indicating that one out of five fish that died in that reach was recovered on a bird colony. Integrating dead recovery data provided increased parameter precision, estimation of where birds consumed fish, and survival estimates across larger spatial scales. More generally, these modeling approaches provide a flexible framework to integrate multiple sources of tag recovery data into mark-recapture studies.
Incorporating the life course model into MCH nutrition leadership education and training programs.
Haughton, Betsy; Eppig, Kristen; Looney, Shannon M; Cunningham-Sabo, Leslie; Spear, Bonnie A; Spence, Marsha; Stang, Jamie S
2013-01-01
Life course perspective, social determinants of health, and health equity have been combined into one comprehensive model, the life course model (LCM), for strategic planning by US Health Resources and Services Administration's Maternal and Child Health Bureau. The purpose of this project was to describe a faculty development process; identify strategies for incorporation of the LCM into nutrition leadership education and training at the graduate and professional levels; and suggest broader implications for training, research, and practice. Nineteen representatives from 6 MCHB-funded nutrition leadership education and training programs and 10 federal partners participated in a one-day session that began with an overview of the models and concluded with guided small group discussions on how to incorporate them into maternal and child health (MCH) leadership training using obesity as an example. Written notes from group discussions were compiled and coded emergently. Content analysis determined the most salient themes about incorporating the models into training. Four major LCM-related themes emerged, three of which were about training: (1) incorporation by training grants through LCM-framed coursework and experiences for trainees, and similarly framed continuing education and skills development for professionals; (2) incorporation through collaboration with other training programs and state and community partners, and through advocacy; and (3) incorporation by others at the federal and local levels through policy, political, and prevention efforts. The fourth theme focused on anticipated challenges of incorporating the model in training. Multiple methods for incorporating the LCM into MCH training and practice are warranted. Challenges to incorporating include the need for research and related policy development.
Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago
2016-01-01
Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and indices data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
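The final integration step, simple model averaging, amounts to pooling draws across the conditioned model suite. A minimal sketch with hypothetical biomass estimates (not the Iberian hake values):

```python
# Equal-weight model averaging over a suite of assessment fits, propagating
# both between-model and within-model (estimation) variance.
import numpy as np

rng = np.random.default_rng(1)
n_models, n_draws = 6, 1000
# Hypothetical: each model's fit yields a mean and SE for stock biomass (t)
means = rng.normal(50_000, 5_000, size=n_models)
ses = rng.uniform(2_000, 4_000, size=n_models)

# Draw from each model's estimation distribution, then pool the draws
draws = rng.normal(means[:, None], ses[:, None], size=(n_models, n_draws))
pooled = draws.ravel()
print(f"averaged estimate: {pooled.mean():.0f}")
print(f"total SD (estimation + model): {pooled.std():.0f}")
# Check via the law of total variance:
# total var ~= mean within-model var + between-model var
print(np.mean(ses**2) + means.var())
```

The pooled spread exceeds any single model's standard error, which is exactly the underreported uncertainty the framework is designed to surface.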
Incorporating Hydroepidemiology into the Epidemia Malaria Early Warning System
NASA Astrophysics Data System (ADS)
Wimberly, M. C.; Merkord, C. L.; Henebry, G. M.; Senay, G. B.
2014-12-01
Early warning of the timing and locations of malaria epidemics can facilitate the targeting of resources for prevention and emergency response. In response to this need, we are developing the Epidemic Prognosis Incorporating Disease and Environmental Monitoring for Integrated Assessment (EPIDEMIA) computer system. EPIDEMIA incorporates software for capturing, processing, and integrating environmental and epidemiological data from multiple sources; data assimilation techniques that continually update models and forecasts; and a web-based interface that makes the resulting information available to public health decision makers. The system will enable forecasts that incorporate lagged responses to environmental risk factors as well as information about recent trends in malaria cases. Because the egg, larval, and pupal stages of mosquito development occur in aquatic habitats, information about the spatial and temporal distributions of stagnant water bodies is critical for modeling malaria risk. Potential sources of hydrological data include satellite-derived rainfall estimates, evapotranspiration (ET) calculated using a simplified surface energy balance model, and estimates of soil moisture and fractional water cover from passive microwave radiometry. We used partial least squares regression to analyze and visualize seasonal patterns of these variables in relation to malaria cases using data from 49 districts in the Amhara region of Ethiopia. Seasonal patterns of rainfall were strongly associated with the incidence and seasonality of malaria across the region, and model fit was improved by the addition of remotely-sensed ET and soil moisture variables. The results highlight the importance of remotely-sensed hydrological data for modeling malaria risk in this region and emphasize the value of an ensemble approach that utilizes multiple sources of information about precipitation and land surface wetness. These variables will be incorporated into the forecasting models at the core of the EPIDEMIA system, and future model development will involve a cycle of continuous forecasting, accuracy assessment, and model refinement.
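A hedged sketch of the statistical core, PLS regression of malaria incidence on rainfall, ET, and soil moisture, using synthetic data in place of the Amhara district records:

```python
# PLS regression of district malaria incidence on remotely sensed
# hydrological predictors; all values below are synthetic stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
n_obs = 49 * 12                                # districts x months (toy)
rain = rng.gamma(2.0, 40.0, n_obs)             # rainfall estimate (mm)
et = 0.4 * rain + rng.normal(0, 10, n_obs)     # evapotranspiration proxy
soilm = 0.3 * rain + rng.normal(0, 8, n_obs)   # soil moisture proxy
X = np.column_stack([rain, et, soilm])
y = 0.02 * rain + 0.01 * soilm + rng.normal(0, 1.5, n_obs)  # incidence

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print("R^2:", pls.score(X, y))
print("X loadings (rain, ET, soil moisture):\n", pls.x_loadings_)
```

PLS is a natural choice here because the hydrological predictors are strongly intercorrelated, which would destabilize ordinary least squares coefficients.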
Multiplicity Control in Structural Equation Modeling: Incorporating Parameter Dependencies
ERIC Educational Resources Information Center
Smith, Carrie E.; Cribbie, Robert A.
2013-01-01
When structural equation modeling (SEM) analyses are conducted, significance tests for all important model relationships (parameters including factor loadings, covariances, etc.) are typically conducted at a specified nominal Type I error rate ([alpha]). Despite the fact that many significance tests are often conducted in SEM, rarely is…
Multiple network-constrained regressions expand insights into influenza vaccination responses.
Avey, Stefan; Mohanty, Subhasis; Wilson, Jean; Zapata, Heidi; Joshi, Samit R; Siconolfi, Barbara; Tsang, Sui; Shaw, Albert C; Kleinstein, Steven H
2017-07-15
Systems immunology leverages recent technological advancements that enable broad profiling of the immune system to better understand the response to infection and vaccination, as well as the dysregulation that occurs in disease. An increasingly common approach to gain insights from these large-scale profiling experiments involves the application of statistical learning methods to predict disease states or the immune response to perturbations. However, the goal of many systems studies is not to maximize accuracy, but rather to gain biological insights. The predictors identified using current approaches can be biologically uninterpretable or present only one of many equally predictive models, leading to a narrow understanding of the underlying biology. Here we show that incorporating prior biological knowledge within a logistic modeling framework by using network-level constraints on transcriptional profiling data significantly improves interpretability. Moreover, incorporating different types of biological knowledge produces models that highlight distinct aspects of the underlying biology, while maintaining predictive accuracy. We propose a new framework, Logistic Multiple Network-constrained Regression (LogMiNeR), and apply it to understand the mechanisms underlying differential responses to influenza vaccination. Although standard logistic regression approaches were predictive, they were minimally interpretable. Incorporating prior knowledge using LogMiNeR led to models that were equally predictive yet highly interpretable. In this context, B cell-specific genes and mTOR signaling were associated with an effective vaccination response in young adults. Overall, our results demonstrate a new paradigm for analyzing high-dimensional immune profiling data in which multiple networks encoding prior knowledge are incorporated to improve model interpretability. The R source code described in this article is publicly available at https://bitbucket.org/kleinstein/logminer.
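LogMiNeR itself is available at the URL above; as a hedged illustration of the general idea (not the package's code), a logistic loss can be augmented with a graph-Laplacian penalty so that genes connected in a prior-knowledge network receive similar coefficients:

```python
# Network-constrained logistic regression: logistic loss plus a
# graph-Laplacian penalty lam * beta' L beta, fit by gradient descent.
import numpy as np

def fit_network_logistic(X, y, A, lam=1.0, lr=0.1, n_iter=2000):
    """A: symmetric 0/1 gene-gene adjacency from prior knowledge."""
    n, p = X.shape
    L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
    beta = np.zeros(p)
    for _ in range(n_iter):
        pr = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (pr - y) / n + 2 * lam * (L @ beta)
        beta -= lr * grad
    return beta

rng = np.random.default_rng(3)
p = 20
A = np.zeros((p, p)); A[0, 1] = A[1, 0] = A[1, 2] = A[2, 1] = 1
X = rng.normal(size=(200, p))
y = (rng.random(200) < 1 / (1 + np.exp(-(X[:, 0] + X[:, 1])))).astype(float)
beta = fit_network_logistic(X, y, A, lam=0.5)
print(beta[:4])   # connected genes 0-2 are pulled toward shared effects
```

The Laplacian term penalizes differences between coefficients of adjacent genes, which is one standard way to make the selected predictors cluster along known pathways rather than scatter arbitrarily.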
The Impact of Sample Size and Other Factors When Estimating Multilevel Logistic Models
ERIC Educational Resources Information Center
Schoeneberger, Jason A.
2016-01-01
The design of research studies utilizing binary multilevel models must necessarily incorporate knowledge of multiple factors, including estimation method, variance component size, or number of predictors, in addition to sample sizes. This Monte Carlo study examined the performance of random effect binary outcome multilevel models under varying…
Modeling caprock fracture, CO2 migration and time dependent fault healing: A numerical study.
NASA Astrophysics Data System (ADS)
MacFarlane, J.; Mukerji, T.; Vanorio, T.
2017-12-01
The Campi Flegrei caldera, located near Naples, Italy, is one of the highest risk volcanoes on Earth due to its recent unrest and urban setting. A unique history of surface uplift within the caldera is characterized by long duration uplift and subsidence cycles which are periodically interrupted by rapid, short period uplift events. Several models have been proposed to explain this history; in this study we will present a hydro-mechanical model that takes into account the caprock that seismic studies show to exist at 1-2 km depth. Specifically, we develop a finite element model of the caldera and use a modified version of fault-valve theory to represent fracture within the caprock. The model accounts for fault healing using a simplified, time-dependent fault sealing model. Multiple fracture events are incorporated by using previous solutions to test prescribed conditions and determine changes in rock properties, such as porosity and permeability. Although fault-valve theory has been used to model single fractures and recharge, this model is unique in its ability to model multiple fracture events. By incorporating multiple fracture events we can assess changes in both long and short-term reservoir behavior at Campi Flegrei. By varying the model inputs, we model the poro-elastic response to CO2 injection at depth and the resulting surface deformation. The goal is to enable geophysicists to better interpret surface observations and predict outcomes from observed changes in reservoir conditions.
Liu, Xiaolei; Huang, Meng; Fan, Bin; Buckler, Edward S.; Zhang, Zhiwu
2016-01-01
False positives in a Genome-Wide Association Study (GWAS) can be effectively controlled by a fixed effect and random effect Mixed Linear Model (MLM) that incorporates population structure and kinship among individuals to adjust association tests on markers; however, the adjustment also compromises true positives. The modified MLM method, Multiple Loci Linear Mixed Model (MLMM), incorporates multiple markers simultaneously as covariates in a stepwise MLM to partially remove the confounding between testing markers and kinship. To completely eliminate the confounding, we divided MLMM into two parts, a Fixed Effect Model (FEM) and a Random Effect Model (REM), and use them iteratively. FEM contains testing markers, one at a time, and multiple associated markers as covariates to control false positives. To avoid the model over-fitting problem in FEM, the associated markers are estimated in REM by using them to define kinship. The P values of testing markers and the associated markers are unified at each iteration. We named the new method Fixed and random model Circulating Probability Unification (FarmCPU). Both real and simulated data analyses demonstrated that FarmCPU improves statistical power compared to current methods. Additional benefits include an efficient computing time that is linear in both the number of individuals and the number of markers. A dataset with half a million individuals and half a million markers can now be analyzed within three days. PMID:26828793
Optimized production planning model for a multi-plant cultivation system under uncertainty
NASA Astrophysics Data System (ADS)
Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng
2015-02-01
An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.
INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS
Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...
Toward improved calibration of watershed models: multisite many objective measures of information
USDA-ARS?s Scientific Manuscript database
This paper presents a computational framework for incorporation of disparate information from observed hydrologic responses at multiple locations into the calibration of watershed models. The framework consists of four components: (i) an a-priori characterization of system behavior; (ii) a formal an...
A model for diagnosing and explaining multiple disorders.
Jamieson, P W
1991-08-01
The ability to diagnose multiple interacting disorders and explain them in a coherent causal framework has only partially been achieved in medical expert systems. This paper proposes a causal model for diagnosing and explaining multiple disorders whose key elements are: physician-directed hypothesis generation, object-oriented knowledge representation, and novel explanation heuristics. The heuristics modify and link the explanations to make the physician aware of diagnostic complexities. A computer program incorporating the model currently is in use for diagnosing peripheral nerve and muscle disorders. The program successfully diagnoses and explains interactions between diseases in terms of underlying pathophysiologic concepts. The model offers a new architecture for medical domains where reasoning from first principles is difficult but explanation of disease interactions is crucial for the system's operation.
NASA Astrophysics Data System (ADS)
Xu, Yiming; Smith, Scot E.; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P.
2017-01-01
Soil prediction models based on spectral indices from some multispectral images are too coarse to characterize the spatial pattern of soil properties in small, heterogeneous agricultural lands. Image pan-sharpening has seldom been utilized in Digital Soil Mapping research. This research aimed to analyze the effects of pan-sharpened (PAN) remote sensing spectral indices on soil prediction models in smallholder farm settings. This research fused the panchromatic band and multispectral (MS) bands of WorldView-2, GeoEye-1, and Landsat 8 images in a village in Southern India by Brovey, Gram-Schmidt, and Intensity-Hue-Saturation methods. Random Forest was utilized to develop soil total nitrogen (TN) and soil exchangeable potassium (Kex) prediction models by incorporating multiple spectral indices from the PAN and MS images. Overall, our results showed that PAN remote sensing spectral indices have spectral relationships with soil TN and Kex similar to those of MS remote sensing spectral indices. No single type of pan-sharpened spectral index consistently produced the strongest prediction capability for soil TN and Kex. The incorporation of pan-sharpened remote sensing spectral data not only increased the spatial resolution of the soil prediction maps, but also enhanced the prediction accuracy of the soil prediction models. Small farms with limited footprints, fragmented ownership, and diverse crop cycles should benefit greatly from pan-sharpened high-spatial-resolution imagery for soil property mapping. Our results show that multiple high- and medium-resolution images can be used to map soil properties, suggesting the possibility of an improvement in the maps' update frequency. Additionally, the results should benefit the large agricultural community through the reduction of routine soil sampling cost and improved prediction accuracy.
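The modeling step is standard Random Forest regression on spectral predictors; a minimal sketch with synthetic index values standing in for the pan-sharpened imagery:

```python
# Random Forest prediction of a soil property from spectral indices;
# the indices and the soil total-N relationship below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n = 300
ndvi = rng.uniform(0.1, 0.8, n)           # pan-sharpened NDVI
bright = rng.uniform(0.2, 0.6, n)         # brightness index
ratio = rng.uniform(0.5, 2.0, n)          # a band-ratio index
X = np.column_stack([ndvi, bright, ratio])
tn = 0.8 * ndvi - 0.3 * bright + rng.normal(0, 0.05, n)   # soil total N

rf = RandomForestRegressor(n_estimators=500, random_state=0)
print("CV R^2:", cross_val_score(rf, X, tn, cv=5).mean())
rf.fit(X, tn)
print("feature importances:", rf.feature_importances_)
```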
Effect of genetic polymorphisms on development of gout.
Urano, Wako; Taniguchi, Atsuo; Inoue, Eisuke; Sekita, Chieko; Ichikawa, Naomi; Koseki, Yumi; Kamatani, Naoyuki; Yamanaka, Hisashi
2013-08-01
To validate the association between genetic polymorphisms and gout in Japanese patients, and to investigate the cumulative effects of multiple genetic factors on the development of gout. Subjects were 153 Japanese male patients with gout and 532 male controls. The genotypes of 11 polymorphisms in the 10 genes that have been indicated to be associated with serum uric acid levels or gout were determined. The cumulative effects of the genetic polymorphisms were investigated using a weighted genotype risk score (wGRS) based on the number of risk alleles and the OR for gout. A model to discriminate between patients with gout and controls was constructed by incorporating the wGRS and clinical factors. The C statistic was used to evaluate the capability of the model to discriminate gout patients from controls. Seven polymorphisms were shown to be associated with gout. The mean wGRS was significantly higher in patients with gout (15.2 ± 2.01) compared to controls (13.4 ± 2.10; p < 0.0001). The C statistic for the model using genetic information alone was 0.72, while the C statistic was 0.81 for the full model that incorporated all genetic and clinical factors. Accumulation of multiple genetic factors is associated with the development of gout. A prediction model for gout that incorporates genetic and clinical factors may be useful for identifying individuals who are at risk of gout.
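A wGRS of the kind described is commonly computed as risk-allele counts weighted by log odds ratios and rescaled to the allele-count scale; a sketch with placeholder ORs (not the study's estimates):

```python
# Weighted genotype risk score: risk-allele counts weighted by ln(OR),
# rescaled so the score is comparable to a raw allele count (a common,
# though not the only, convention). ORs below are placeholders.
import numpy as np

ors = np.array([1.9, 1.6, 1.4, 1.3, 1.3, 1.2, 1.2])   # per-allele ORs
weights = np.log(ors)
genotypes = np.array([2, 1, 0, 2, 1, 1, 0])           # risk-allele counts

wgrs = weights @ genotypes * len(ors) / weights.sum()
print(f"wGRS = {wgrs:.2f}")
```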
Simulating Coupling Complexity in Space Plasmas: First Results from a new code
NASA Astrophysics Data System (ADS)
Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.
2005-12-01
The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will greatly advance our understanding of the physics of neutral and charged gases. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) to develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, and interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle; 2) to develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions; and 3) to develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.
Multiple hypotheses testing of fish incidence patterns in an urbanized ecosystem
Chizinski, C.J.; Higgins, C.L.; Shavlik, C.E.; Pope, K.L.
2006-01-01
Ecological and evolutionary theories have focused traditionally on natural processes with little attempt to incorporate anthropogenic influences, despite the fact that humans are such an integral part of virtually all ecosystems. A series of alternative models that incorporated anthropogenic factors and traditional ecological mechanisms of invasion to account for fish incidence patterns in urban lakes was tested. The models were based on fish biology, human intervention, and habitat characteristics. However, the only models to account for empirical patterns were those that included fish invasiveness, which incorporated species-specific information about overall tolerance and fecundity. This suggests that species-specific characteristics are more important in general distributional patterns than human-mediated dispersal. Better information on illegal stocking activities is needed to improve human-mediated models, and more insight into the basic life history of ubiquitous species is needed to truly understand the underlying mechanisms of biotic homogenization. © Springer 2005.
Do Knowledge-Component Models Need to Incorporate Representational Competencies?
ERIC Educational Resources Information Center
Rau, Martina Angela
2017-01-01
Traditional knowledge-component models describe students' content knowledge (e.g., their ability to carry out problem-solving procedures or their ability to reason about a concept). In many STEM domains, instruction uses multiple visual representations such as graphs, figures, and diagrams. The use of visual representations implies a…
Extensions of Rasch's Multiplicative Poisson Model.
ERIC Educational Resources Information Center
Jansen, Margo G. H.; van Duijn, Marijtje A. J.
1992-01-01
A model developed by G. Rasch that assumes scores on some attainment tests can be realizations of a Poisson process is explained and expanded by assuming a prior distribution, with fixed but unknown parameters, for the subject parameters. How additional between-subject and within-subject factors can be incorporated is discussed. (SLD)
An Evaluation of Curriculum Materials Based Upon the Socio-Scientific Reasoning Model.
ERIC Educational Resources Information Center
Henkin, Gayle; And Others
To address the need to develop a scientifically literate citizenry, the socio-scientific reasoning model was created to guide curriculum development. Goals of this developmental approach include increasing: (1) students' skills in dealing with problems containing multiple interacting variables; (2) students' decision-making skills incorporating a…
Ma, Songyun; Scheider, Ingo; Bargmann, Swantje
2016-09-01
An anisotropic constitutive model is proposed in the framework of finite deformation to capture several damage mechanisms occurring in the microstructure of dental enamel, a hierarchical bio-composite. It provides the basis for a homogenization approach for an efficient multiscale (in this case: multiple hierarchy levels) investigation of the deformation and damage behavior. The influence of tension-compression asymmetry and fiber-matrix interaction on the nonlinear deformation behavior of dental enamel is studied by 3D micromechanical simulations under different loading conditions and fiber lengths. The complex deformation behavior and the characteristics and interaction of three damage mechanisms in the damage process of enamel are well captured. The proposed constitutive model incorporating anisotropic damage is applied to the first hierarchical level of dental enamel and validated by experimental results. The effect of the fiber orientation on the damage behavior and compressive strength is studied by comparing micro-pillar experiments of dental enamel at the first hierarchical level in multiple directions of fiber orientation. A very good agreement between computational and experimental results is found for the damage evolution process of dental enamel. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Thompson, Matthew; Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole
2016-01-01
Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs) where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way that multiple stakeholders can use to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests.
Multiple model self-tuning control for a class of nonlinear systems
NASA Astrophysics Data System (ADS)
Huang, Miao; Wang, Xin; Wang, Zhenlei
2015-10-01
This study develops a novel nonlinear multiple model self-tuning control method for a class of nonlinear discrete-time systems. An increment system model and a modified robust adaptive law are proposed to expand the application range, thus eliminating the assumption that either the nonlinear term of the nonlinear system or its differential term is globally bounded. The nonlinear self-tuning control method can address the situation wherein the nonlinear system does not possess globally uniformly asymptotically stable zero dynamics by incorporating the pole-placement scheme. A novel nonlinear control structure based on this scheme is presented to improve control precision. Stability and convergence can be confirmed when the proposed multiple model self-tuning control method is applied. Furthermore, simulation results demonstrate the effectiveness of the proposed method.
Media, Mental Imagery, and Memory.
ERIC Educational Resources Information Center
Clark, Robert L.
1978-01-01
Thirty-two students at the University of Oregon were tested to determine the effects of media on mental imagery and memory. The model incorporates a dual coding hypothesis, and five single and multiple channel treatments were used. (Author/JEG)
Integration of Multiple Data Sources to Simulate the Dynamics of Land Systems
Deng, Xiangzheng; Su, Hongbo; Zhan, Jinyan
2008-01-01
In this paper we present and develop a new model, which we have called Dynamics of Land Systems (DLS). The DLS model is capable of integrating multiple data sources to simulate the dynamics of a land system. Three main modules are incorporated in DLS: a spatial regression module, to explore the relationship between land uses and influencing factors, a scenario analysis module of the land uses of a region during the simulation period and a spatial disaggregation module, to allocate land use changes from a regional level to disaggregated grid cells. A case study on Taips County in North China is incorporated in this paper to test the functionality of DLS. The simulation results under the baseline, economic priority and environmental scenarios help to understand the land system dynamics and project near future land-use trajectories of a region, in order to focus management decisions on land uses and land use planning. PMID:27879726
O'Neill, Liam; Dexter, Franklin
2005-11-01
We compare two techniques for increasing the transparency and face validity of Data Envelopment Analysis (DEA) results for managers at a single decision-making unit: multifactor efficiency (MFE) and non-radial super-efficiency (NRSE). Both methods incorporate the slack values from the super-efficient DEA model to provide a more robust performance measure than radial super-efficiency scores. MFE and NRSE are equivalent for unique optimal solutions and a single output. MFE incorporates the slack values from multiple output variables, whereas NRSE does not. MFE can be more transparent to managers since it involves no additional optimization steps beyond the DEA, whereas NRSE requires several. We compare results for operating room managers at an Iowa hospital evaluating its growth potential for multiple surgical specialties. In addition, we address the problem of upward bias of the slack values of the super-efficient DEA model.
A Practical Approach to Address Uncertainty in Stakeholder Deliberations.
Gregory, Robin; Keeney, Ralph L
2017-03-01
This article addresses the difficulties of incorporating uncertainty about consequence estimates as part of stakeholder deliberations involving multiple alternatives. Although every prediction of future consequences necessarily involves uncertainty, a large gap exists between common practices for addressing uncertainty in stakeholder deliberations and the procedures of prescriptive decision-aiding models advanced by risk and decision analysts. We review the treatment of uncertainty at four main phases of the deliberative process: with experts asked to describe possible consequences of competing alternatives, with stakeholders who function both as individuals and as members of coalitions, with the stakeholder committee composed of all stakeholders, and with decisionmakers. We develop and recommend a model that uses certainty equivalents as a theoretically robust and practical approach for helping diverse stakeholders to incorporate uncertainties when evaluating multiple-objective alternatives as part of public policy decisions. © 2017 Society for Risk Analysis.
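The abstract does not fix a utility form; under the common exponential-utility assumption with risk tolerance ρ, the certainty equivalent of an uncertain consequence X is:

```latex
\[
\mathrm{CE}(X) \;=\; -\rho\,\ln \mathbb{E}\!\left[e^{-X/\rho}\right].
\]
```

For example, a 50/50 gamble on 0 or 100 with ρ = 100 has CE = -100 ln((1 + e^{-1})/2) ≈ 38, below the expected value of 50, which is how stakeholder risk attitudes enter the comparison of multiple-objective alternatives.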
ERIC Educational Resources Information Center
Wichaidit, Patcharee Rompayom; Wichaidit, Sittichai
2016-01-01
Learning chemistry may be difficult for students for several reasons, such as the abstract nature of many chemistry concepts and the fact that students may view chemistry as irrelevant to their everyday lives. Teaching chemistry in familiar contexts and the use of multiple representations are seen as effective approaches for enhancing students'…
Contraction Options and Optimal Multiple-Stopping in Spectrally Negative Lévy Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamazaki, Kazutoshi, E-mail: kyamazak@kansai-u.ac.jp
This paper studies the optimal multiple-stopping problem arising in the context of the timing option to withdraw from a project in stages. The profits are driven by a general spectrally negative Lévy process. This allows the model to incorporate sudden declines of the project values, greatly generalizing the classical geometric Brownian motion model. We solve the one-stage case as well as the extension to the multiple-stage case. The optimal stopping times are of threshold type and the value function admits an expression in terms of the scale function. A series of numerical experiments are conducted to verify the optimality and to evaluate the efficiency of the algorithm.
INCORPORATING NONCHEMICAL STRESSORS INTO CUMULATIVE RISK ASSESSMENTS
The risk assessment paradigm has begun to shift from assessing single chemicals using "reasonable worst case" assumptions for individuals to considering multiple chemicals and community-based models. Inherent in community-based risk assessment is examination of all stressors a...
New developments in UTMOST : application to electronic stability control.
DOT National Transportation Integrated Search
2009-10-01
The Unified Tool for Mapping Opportunities for Safety Technology (UTMOST) is a model of crash data that incorporates the complex relationships among different vehicle and driver variables. It is designed to visualize the effect of multiple safety...
Vugrin, Eric D.; Rostron, Brian L.; Verzi, Stephen J.; ...
2015-03-27
Background Recent declines in US cigarette smoking prevalence have coincided with increases in use of other tobacco products. Multiple product tobacco models can help assess the population health impacts associated with use of a wide range of tobacco products. Methods and Findings We present a multi-state, dynamical systems population structure model that can be used to assess the effects of tobacco product use behaviors on population health. The model incorporates transition behaviors, such as initiation, cessation, switching, and dual use, related to the use of multiple products. The model tracks product use prevalence and mortality attributable to tobacco use formore » the overall population and by sex and age group. The model can also be used to estimate differences in these outcomes between scenarios by varying input parameter values. We demonstrate model capabilities by projecting future cigarette smoking prevalence and smoking-attributable mortality and then simulating the effects of introduction of a hypothetical new lower-risk tobacco product under a variety of assumptions about product use. Sensitivity analyses were conducted to examine the range of population impacts that could occur due to differences in input values for product use and risk. We demonstrate that potential benefits from cigarette smokers switching to the lower-risk product can be offset over time through increased initiation of this product. Model results show that population health benefits are particularly sensitive to product risks and initiation, switching, and dual use behaviors. Conclusion Our model incorporates the variety of tobacco use behaviors and risks that occur with multiple products. As such, it can evaluate the population health impacts associated with the introduction of new tobacco products or policies that may result in product switching or dual use. Further model development will include refinement of data inputs for non-cigarette tobacco products and inclusion of health outcomes such as morbidity and disability.« less
Vugrin, Eric D.; Rostron, Brian L.; Verzi, Stephen J.; Brodsky, Nancy S.; Brown, Theresa J.; Choiniere, Conrad J.; Coleman, Blair N.; Paredes, Antonio; Apelberg, Benjamin J.
2015-01-01
Background Recent declines in US cigarette smoking prevalence have coincided with increases in use of other tobacco products. Multiple product tobacco models can help assess the population health impacts associated with use of a wide range of tobacco products. Methods and Findings We present a multi-state, dynamical systems population structure model that can be used to assess the effects of tobacco product use behaviors on population health. The model incorporates transition behaviors, such as initiation, cessation, switching, and dual use, related to the use of multiple products. The model tracks product use prevalence and mortality attributable to tobacco use for the overall population and by sex and age group. The model can also be used to estimate differences in these outcomes between scenarios by varying input parameter values. We demonstrate model capabilities by projecting future cigarette smoking prevalence and smoking-attributable mortality and then simulating the effects of introduction of a hypothetical new lower-risk tobacco product under a variety of assumptions about product use. Sensitivity analyses were conducted to examine the range of population impacts that could occur due to differences in input values for product use and risk. We demonstrate that potential benefits from cigarette smokers switching to the lower-risk product can be offset over time through increased initiation of this product. Model results show that population health benefits are particularly sensitive to product risks and initiation, switching, and dual use behaviors. Conclusion Our model incorporates the variety of tobacco use behaviors and risks that occur with multiple products. As such, it can evaluate the population health impacts associated with the introduction of new tobacco products or policies that may result in product switching or dual use. Further model development will include refinement of data inputs for non-cigarette tobacco products and inclusion of health outcomes such as morbidity and disability. PMID:25815840
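A toy discrete-time illustration of the multi-state transition idea, with hypothetical states and transition probabilities rather than the authors' calibrated inputs:

```python
import numpy as np

# Toy two-product state model: never/current/former cigarette smokers plus a
# new product, advanced in annual steps by transition probabilities.
# All rates are hypothetical placeholders, not the authors' estimates.
states = ["never", "cig", "new", "former"]
P = np.array([
    [0.97, 0.02, 0.01, 0.00],   # never  -> initiation into either product
    [0.00, 0.90, 0.04, 0.06],   # cig    -> switching or cessation
    [0.00, 0.02, 0.92, 0.06],   # new    -> switching back or cessation
    [0.00, 0.02, 0.01, 0.97],   # former -> relapse
])
x = np.array([0.6, 0.3, 0.0, 0.1])      # initial prevalence by state
for year in range(20):
    x = x @ P                            # one-year transition
print(dict(zip(states, np.round(x, 3))))
```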
Analysis and prediction of Multiple-Site Damage (MSD) fatigue crack growth
NASA Technical Reports Server (NTRS)
Dawicke, D. S.; Newman, J. C., Jr.
1992-01-01
A technique was developed to calculate the stress intensity factor for multiple interacting cracks. The analysis was verified through comparison with accepted methods of calculating stress intensity factors. The technique was incorporated into a fatigue crack growth prediction model and used to predict the fatigue crack growth life for multiple-site damage (MSD). The analysis was verified through comparison with experiments conducted on uniaxially loaded flat panels with multiple cracks. Configurations with nearly equal and unequal crack distributions were examined. The fatigue crack growth predictions agreed within 20 percent of the experimental lives for all crack configurations considered.
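The prediction step can be illustrated with a bare-bones Paris-law life integration, where the multiple-crack interaction analysis would enter through the geometry factor beta; all constants below are generic placeholders, not the study's values.

```python
import numpy as np

def fatigue_life(a0, a_crit, delta_sigma, C=1e-11, m=3.0, beta=1.0, da=1e-5):
    """Cycles to grow a crack from a0 to a_crit (metres) under the Paris law
    da/dN = C*(dK)^m, with dK = beta*delta_sigma*sqrt(pi*a) in MPa*sqrt(m).
    C, m, beta are generic aluminium-like placeholders; beta would come
    from the interacting-crack stress intensity analysis."""
    a = np.arange(a0, a_crit, da)
    dK = beta * delta_sigma * np.sqrt(np.pi * a)   # stress-intensity range
    return np.sum(da / (C * dK**m))                # integrate dN = da/(C*dK^m)

print(f"{fatigue_life(0.001, 0.02, 100.0):.3e} cycles")
```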
Improved double-multiple streamtube model for the Darrieus-type vertical axis wind turbine
NASA Astrophysics Data System (ADS)
Berg, D. E.
Double-multiple streamtube codes model the curved-blade (Darrieus-type) vertical axis wind turbine (VAWT) as a tandem pair of actuator disks, one for the upwind half of the rotor and one for the downwind half, and use conservation of momentum principles to determine the forces acting on the turbine blades and the turbine performance. Sandia National Laboratories developed a double multiple streamtube model for the VAWT which incorporates the effects of the incident wind boundary layer, nonuniform velocity between the upwind and downwind sections of the rotor, dynamic stall effects, and local blade Reynolds number variations. The theory underlying this VAWT model is described, as well as the code capabilities. Code results are compared with experimental data from two VAWTs and with the results from another double multiple streamtube code and a vortex filament code. The effects of neglecting dynamic stall and horizontal wind velocity distribution are also illustrated.
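A minimal sketch of the tandem-actuator-disk momentum logic underlying such codes, with an illustrative thrust coefficient:

```python
import numpy as np

def axial_induction(ct):
    """Induction factor from momentum theory, CT = 4a(1-a), taking the
    physical root a <= 0.5 (windmill state)."""
    return 0.5 * (1.0 - np.sqrt(1.0 - ct))

# Double-disk idea behind double-multiple streamtube models: the downwind
# half of the rotor is a second actuator disk that sees the equilibrium
# wake velocity of the upwind disk, not the free stream.
a_up = axial_induction(0.6)           # upwind disk (CT value illustrative)
v_downwind_inlet = 1.0 - 2.0 * a_up   # velocity ratio entering downwind disk
print(a_up, v_downwind_inlet)
```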
ERIC Educational Resources Information Center
Cohen, Alexander B.; Tenenbaum, Gershon; English, R. William
2006-01-01
A multiple case study investigation is reported in which emotions and performance were assessed within the probabilistic individual zone of optimal functioning (IZOF) model (Kamata, Tenenbaum, & Hanin, 2002) to develop idiosyncratic emotion-performance profiles. These profiles were incorporated into a psychological skills training (PST)…
Kondo, Yumi; Zhao, Yinshan; Petkau, John
2017-05-30
Identification of treatment responders is a challenge in comparative studies where treatment efficacy is measured by multiple longitudinally collected continuous and count outcomes. Existing procedures often identify responders on the basis of only a single outcome. We propose a novel multiple longitudinal outcome mixture model that assumes that, conditionally on a cluster label, each longitudinal outcome is from a generalized linear mixed effect model. We utilize a Monte Carlo expectation-maximization algorithm to obtain the maximum likelihood estimates of our high-dimensional model and classify patients according to their estimated posterior probability of being a responder. We demonstrate the flexibility of our novel procedure on two multiple sclerosis clinical trial datasets with distinct data structures. Our simulation study shows that incorporating multiple outcomes improves the responder identification performance; this can occur even if some of the outcomes are ineffective. Our general procedure facilitates the identification of responders who are comprehensively defined by multiple outcomes from various distributions. Copyright © 2017 John Wiley & Sons, Ltd.
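As a simplified cross-sectional analogue of this idea, subjects can be classified by posterior cluster probability from a two-component Gaussian mixture over several outcome summaries (synthetic data; the authors' actual model is a longitudinal GLMM mixture fit by Monte Carlo EM):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated subject-level summaries of two outcomes (e.g., mean relapse
# count and mean lesion change); responders shifted downward (synthetic)
responders = rng.normal([-1.0, -0.8], 0.5, size=(60, 2))
nonresp    = rng.normal([ 0.0,  0.0], 0.5, size=(140, 2))
X = np.vstack([responders, nonresp])

gm = GaussianMixture(n_components=2, random_state=0).fit(X)
post = gm.predict_proba(X)        # posterior probability of each cluster
labels = post.argmax(axis=1)      # classify by maximum posterior
print(labels[:10])
```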
McLaughlin, Jacqueline E; McLaughlin, Gerald W; McLaughlin, Josetta S; White, Carla Y
2016-01-03
This study explored new models of diversity for health professions education that incorporate multiple attributes and examined differences in diversity based on urbanicity, geographic region, and institutional structure. Simpson's Diversity Index was used to develop race, gender, and interprofessional diversity indices for health professions schools in the United States (N = 318). Sullivan's extension was used to develop a composite diversity index that incorporated multiple individual attributes for each school. Pearson's r was used to investigate correlations between continuous variables. ANOVA and independent t-tests were used to compare groups based on urbanicity, geographic region, and Basic Carnegie Classification. Mean (SD) for race, gender, and interprofessional diversity indices were 0.36(0.17), 0.45(0.07), and 0.22(0.27) respectively. All correlations between the three indices were weak. The composite diversity index for this sample was 0.34(0.13). Significant differences in diversity were found between institutions based on urbanicity, Basic Carnegie Classification, and geographic region. Multidimensional models provide support for expanding measures of diversity to include multiple characteristics and attributes. The approach demonstrated in this study enables institutions to complement and extend traditional measures of diversity as a means of providing evidence for decision-making and progress towards institutional initiatives.
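Simpson's index for a single attribute is simple to compute; a sketch with hypothetical enrollment counts:

```python
import numpy as np

def simpson_diversity(counts):
    """Simpson's Diversity Index D = 1 - sum(p_i^2), where p_i is the
    proportion of group i; 0 = homogeneous, values near 1 = diverse."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p**2)

# Hypothetical school: enrollment counts by race/ethnicity category
print(round(simpson_diversity([120, 45, 30, 20, 10]), 2))
```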
Monitoring and Modeling Performance of Communications in Computational Grids
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Le, Thuy T.
2003-01-01
Computational grids may include many machines located at a number of sites. For efficient use of the grid we need the ability to estimate the time it takes to communicate data between the machines. For dynamic distributed grids it is unrealistic to know the exact parameters of the communication hardware and the current communication traffic, so we should rely on a model of network performance to estimate the message delivery time. Our approach to constructing such a model is based on observing message delivery times over various message sizes and time scales. We record these observations in a database and use them to build a model of the message delivery time. Our experiments show the presence of multiple bands in the logarithm of the message delivery times. These bands correspond to the multiple paths messages travel between the grid machines and are incorporated in our multiband model.
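One plausible way to recover such bands from the recorded delivery times is a mixture fit in log space (synthetic data; not necessarily the authors' estimation procedure):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic delivery times (seconds) drawn from two "paths": a fast LAN
# route and a slower WAN route, i.e., two bands in log space
times = np.concatenate([rng.lognormal(-4.0, 0.2, 500),
                        rng.lognormal(-1.5, 0.3, 200)])
log_t = np.log(times).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(log_t)
print(np.exp(gm.means_.ravel()))    # characteristic time of each band
```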
Laura Phillips-Mao; Susan M. Galatowitsch; Stephanie A. Snyder; Robert G. Haight
2016-01-01
Incorporating climate change into conservation decision-making at site and population scales is challenging due to uncertainties associated with localized climate change impacts and population responses to multiple interacting impacts and adaptation strategies. We explore the use of spatially explicit population models to facilitate scenario analysis, a conservation...
Samuel A. Cushman; Nicholas B. Elliot; David W. Macdonald; Andrew J. Loveridge
2015-01-01
Habitat loss and fragmentation are among the major drivers of population declines and extinction, particularly in large carnivores. Connectivity models provide practical tools for assessing fragmentation effects and developing mitigation or conservation responses. To be useful to conservation practitioners, connectivity models need to incorporate multiple scales and...
ERIC Educational Resources Information Center
Sharif, Rukhsar
2017-01-01
This conceptual paper serves to create a model of creativity and innovation at different organizational levels. It draws on John Holland's Theory of Vocational Choice (1973) as the basis for its structure by incorporating the six different personality types from his theory: conventional, enterprising, realistic, social, investigative, and…
ERIC Educational Resources Information Center
Ram, Nilam; Grimm, Kevin J.
2009-01-01
Growth mixture modeling (GMM) is a method for identifying multiple unobserved sub-populations, describing longitudinal change within each unobserved sub-population, and examining differences in change among unobserved sub-populations. We provide a practical primer that may be useful for researchers beginning to incorporate GMM analysis into their…
Segmentation of prostate boundaries from ultrasound images using statistical shape model.
Shen, Dinggang; Zhan, Yiqiang; Davatzikos, Christos
2003-04-01
This paper presents a statistical shape model for automatic prostate segmentation in transrectal ultrasound images. A Gabor filter bank is first used to characterize the prostate boundaries in ultrasound images at multiple scales and multiple orientations. The Gabor features are further reconstructed to be invariant to the rotation of the ultrasound probe and incorporated in the prostate model as image attributes for guiding the deformable segmentation. A hierarchical deformation strategy is then employed, in which the model adaptively focuses on the similarity of different Gabor features at different deformation stages using a multiresolution technique, i.e., coarse features first and fine features later. A number of successful experiments validate the algorithm.
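A sketch of a multi-scale, multi-orientation Gabor bank using scikit-image; taking the magnitude response is one common way to reduce phase sensitivity, though the paper's probe-rotation-invariant reconstruction is more involved:

```python
import numpy as np
from skimage.filters import gabor

rng = np.random.default_rng(3)
image = rng.random((128, 128))          # stand-in for an ultrasound frame

# Bank of Gabor responses over multiple scales (frequencies) and orientations
features = []
for frequency in (0.1, 0.2, 0.4):
    for theta in np.linspace(0.0, np.pi, 4, endpoint=False):
        real, imag = gabor(image, frequency=frequency, theta=theta)
        features.append(np.hypot(real, imag))   # magnitude response

stack = np.stack(features)              # (12, 128, 128): attributes per pixel
print(stack.shape)
```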
ERIC Educational Resources Information Center
Kordaki, Maria
2015-01-01
This study focuses on the role of multiple solution tasks (MST) incorporating multiple learning tools and representation systems (MTRS) in encouraging each student to develop multiple perspectives on the learning concepts under study and creativity of thought. Specifically, two types of MST were used, namely tasks that allowed and demanded…
NASA Astrophysics Data System (ADS)
Chaudhari, Rajan; Heim, Andrew J.; Li, Zhijun
2015-05-01
Evidenced by the three rounds of G-protein coupled receptor (GPCR) Dock competitions, improving homology modeling methods for helical transmembrane proteins, including the GPCRs, based on templates of low sequence identity remains an eminent challenge. Current approaches to this challenge adopt the philosophy of "modeling first, refinement next". In the present work, we developed an alternative modeling approach through the novel application of available multiple templates. First, conserved inter-residue interactions are derived from each additional template through conservation analysis of each template-target pairwise alignment. Then, these interactions are converted into distance restraints and incorporated into the homology modeling process. This approach was applied to modeling of the human β2 adrenergic receptor using bovine rhodopsin and the human protease-activated receptor 1 as templates, and improved model quality was demonstrated compared to the homology models generated by standard single-template and multiple-template methods. This method of "refined restraints first, modeling next" provides a fast and complementary alternative to current modeling approaches. It allows rational identification and implementation of additional conserved distance restraints extracted from multiple templates and/or experimental data, and has the potential to be applicable to modeling of all helical transmembrane proteins.
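How derived restraints might be scored against a candidate model can be sketched as a simple harmonic penalty (an illustration only; modeling packages implement richer restraint forms):

```python
import numpy as np

def restraint_penalty(coords, restraints, k=1.0):
    """Harmonic penalty sum k*(d_ij - d0)^2 over distance restraints derived
    from conserved template contacts; coords is an (N, 3) array of CA
    positions, restraints a list of (i, j, d0) tuples. A scoring sketch,
    not the restraint form used by any particular modeling program."""
    total = 0.0
    for i, j, d0 in restraints:
        d = np.linalg.norm(coords[i] - coords[j])
        total += k * (d - d0) ** 2
    return total

coords = np.random.default_rng(0).random((10, 3)) * 10.0
print(restraint_penalty(coords, [(0, 5, 6.0), (2, 8, 4.5)]))
```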
Nomura, Emi M.; Reber, Paul J.
2012-01-01
Considerable evidence has argued in favor of multiple neural systems supporting human category learning, one based on conscious rule inference and one based on implicit information integration. However, there have been few attempts to study potential system interactions during category learning. The PINNACLE (Parallel Interactive Neural Networks Active in Category Learning) model incorporates multiple categorization systems that compete to provide categorization judgments about visual stimuli. Incorporating competing systems requires inclusion of cognitive mechanisms associated with resolving this competition and creates a potential credit assignment problem in handling feedback. The hypothesized mechanisms make predictions about internal mental states that are not always reflected in choice behavior, but may be reflected in neural activity. Two prior functional magnetic resonance imaging (fMRI) studies of category learning were re-analyzed using PINNACLE to identify neural correlates of internal cognitive states on each trial. These analyses identified additional brain regions supporting the two types of category learning, regions particularly active when the systems are hypothesized to be in maximal competition, and found evidence of covert learning activity in the “off system” (the category learning system not currently driving behavior). These results suggest that PINNACLE provides a plausible framework for how competing multiple category learning systems are organized in the brain and shows how computational modeling approaches and fMRI can be used synergistically to gain access to cognitive processes that support complex decision-making machinery. PMID:24962771
The biological processes by which environmental pollutants induce adverse health effects is most likely regulated by complex interactions dependent upon the route of exposure, dose, kinetics of distribution, and multiple cellular responses. To further complicate deciphering thes...
MCAID--A Generalized Text Driver.
ERIC Educational Resources Information Center
Ahmed, K.; Dickinson, C. J.
MCAID is a relatively machine-independent technique for writing computer-aided instructional material consisting of descriptive text, multiple choice questions, and the ability to call compiled subroutines to perform extensive calculations. It was specially developed to incorporate test-authoring around complex mathematical models to explore a…
NASA Astrophysics Data System (ADS)
Millar, David J.; Ewers, Brent E.; Mackay, D. Scott; Peckham, Scott; Reed, David E.; Sekoni, Adewale
2017-09-01
Mountain pine beetle outbreaks in western North America have led to extensive forest mortality, justifiably generating interest in improving our understanding of how this type of ecological disturbance affects hydrological cycles. While observational studies and simulations have been used to elucidate the effects of mountain pine beetle mortality on hydrological fluxes, an ecologically mechanistic model of forest evapotranspiration (ET) evaluated against field data has yet to be developed. In this work, we use the Terrestrial Regional Ecosystem Exchange Simulator (TREES) to incorporate the ecohydrological impacts of mountain pine beetle disturbance on ET for a lodgepole pine-dominated forest equipped with an eddy covariance tower. An existing degree-day model was incorporated that predicted the life cycle of mountain pine beetles, along with an empirically derived submodel that allowed sap flux to decline as a function of temperature-dependent blue stain fungal growth. The eddy covariance footprint was divided into multiple cohorts for multiple growing seasons, including representations of recently attacked trees and the compensatory effects of regenerating understory, using two different spatial scaling methods. Our results showed that the multiple-cohort approach matched eddy covariance-measured ecosystem-scale ET fluxes well and improved performance compared to model simulations assuming a binary framework of only live and dead overstory areas. Cumulative growing season ecosystem-scale ET fluxes were 8-29% greater using the multicohort approach during years in which beetle attacks occurred, highlighting the importance of including compensatory ecological mechanisms in ET models.
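The degree-day ingredient reduces to a few lines; the base temperature below is a placeholder, not the published threshold used in the beetle submodel:

```python
import numpy as np

def degree_days(daily_mean_temp_c, base_c=5.6):
    """Accumulated degree-days above a developmental base temperature.
    The base value here is a placeholder, not the published mountain
    pine beetle threshold."""
    t = np.asarray(daily_mean_temp_c, dtype=float)
    return float(np.sum(np.clip(t - base_c, 0.0, None)))

season = [2.0, 7.5, 12.0, 15.5, 9.0]     # hypothetical daily means (deg C)
print(degree_days(season))               # 1.9 + 6.4 + 9.9 + 3.4 = 21.6
```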
Phillips, Lawrence; Pearl, Lisa
2015-11-01
The informativity of a computational model of language acquisition is directly related to how closely it approximates the actual acquisition task, sometimes referred to as the model's cognitive plausibility. We suggest that though every computational model necessarily idealizes the modeled task, an informative language acquisition model can aim to be cognitively plausible in multiple ways. We discuss these cognitive plausibility checkpoints generally and then apply them to a case study in word segmentation, investigating a promising Bayesian segmentation strategy. We incorporate cognitive plausibility by using an age-appropriate unit of perceptual representation, evaluating the model output in terms of its utility, and incorporating cognitive constraints into the inference process. Our more cognitively plausible model shows a beneficial effect of cognitive constraints on segmentation performance. One interpretation of this effect is as a synergy between the naive theories of language structure that infants may have and the cognitive constraints that limit the fidelity of their inference processes, where less accurate inference approximations are better when the underlying assumptions about how words are generated are less accurate. More generally, these results highlight the utility of incorporating cognitive plausibility more fully into computational models of language acquisition. Copyright © 2015 Cognitive Science Society, Inc.
The Forbes 400, the Pareto power-law and efficient markets
NASA Astrophysics Data System (ADS)
Klass, O. S.; Biham, O.; Levy, M.; Malcai, O.; Solomon, S.
2007-01-01
Statistical regularities at the top end of the wealth distribution in the United States are examined using the Forbes 400 lists of richest Americans, published between 1988 and 2003. It is found that the wealths are distributed according to a power-law (Pareto) distribution. This result is explained using a simple stochastic model of multiple investors that incorporates the efficient market hypothesis as well as the multiplicative nature of financial market fluctuations.
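A minimal simulation in this spirit: multiplicative lognormal fluctuations with a floor that couples each investor to the mean wealth reproduce a Pareto tail (all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, steps = 10_000, 3_000
w = np.ones(n)
for _ in range(steps):
    w *= rng.lognormal(0.0, 0.1, n)       # multiplicative market fluctuations
    w = np.maximum(w, 0.3 * w.mean())     # floor coupling agents to the mean

# Rough Hill-type estimate of the Pareto exponent from the top 400 "wealths"
top = np.sort(w)[-400:]
alpha = 1.0 / np.mean(np.log(top / top[0]))
print(f"estimated Pareto exponent: {alpha:.2f}")
```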
NASA Astrophysics Data System (ADS)
Fu, Congsheng; Wang, Guiling; Goulden, Michael L.; Scott, Russell L.; Bible, Kenneth; Cardon, Zoe G.
2016-05-01
Effects of hydraulic redistribution (HR) on hydrological, biogeochemical, and ecological processes have been demonstrated in the field, but the current generation of standard earth system models does not include a representation of HR. Though recent studies have examined the effect of incorporating HR into land surface models, few (if any) have done cross-site comparisons for contrasting climate regimes and multiple vegetation types via the integration of measurement and modeling. Here, we incorporated the HR scheme of Ryel et al. (2002) into the NCAR Community Land Model Version 4.5 (CLM4.5), and examined the ability of the resulting hybrid model to capture the magnitude of HR flux and/or soil moisture dynamics from which HR can be directly inferred, to assess the impact of HR on land surface water and energy budgets, and to explore how the impact may depend on climate regimes and vegetation conditions. Eight AmeriFlux sites with contrasting climate regimes and multiple vegetation types were studied, including the Wind River Crane site in Washington State, the Santa Rita Mesquite savanna site in southern Arizona, and six sites along the Southern California Climate Gradient. HR flux, evapotranspiration (ET), and soil moisture were properly simulated in the present study, even in the face of various uncertainties. Our cross-ecosystem comparison showed that the timing, magnitude, and direction (upward or downward) of HR vary across ecosystems, and incorporation of HR into CLM4.5 improved the model-measurement matches of evapotranspiration, Bowen ratio, and soil moisture particularly during dry seasons. Our results also reveal that HR has important hydrological impact in ecosystems that have a pronounced dry season but are not overall so dry that sparse vegetation and very low soil moisture limit HR.
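A schematic of the HR idea, water moving between soil layers down water-potential gradients through roots, mass-conserving by construction; this is a sketch in the spirit of Ryel et al. (2002), not the exact published equation or the CLM4.5 implementation:

```python
import numpy as np

def hr_flux(psi, c_root, crt=0.097):
    """Schematic hydraulic-redistribution flux between soil layers: water
    flows from wetter (higher psi, MPa) to drier layers, scaled by a maximum
    radial conductance crt and per-layer root conductance weights. Values
    and functional form are illustrative."""
    psi = np.asarray(psi, float)
    c = np.asarray(c_root, float)
    n = len(psi)
    flux = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j:
                flux[i] += crt * (psi[j] - psi[i]) * c[i] * c[j]
    return flux  # positive: layer gains water via roots (sums to zero)

print(hr_flux([-0.1, -1.5, -0.8], [0.5, 0.3, 0.2]))
```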
NASA Astrophysics Data System (ADS)
Cronkite-Ratcliff, C.; Phelps, G. A.; Boucher, A.
2011-12-01
In many geologic settings, the pathways of groundwater flow are controlled by geologic heterogeneities which have complex geometries. Models of these geologic heterogeneities, and consequently, their effects on the simulated pathways of groundwater flow, are characterized by uncertainty. Multiple-point geostatistics, which uses a training image to represent complex geometric descriptions of geologic heterogeneity, provides a stochastic approach to the analysis of geologic uncertainty. Incorporating multiple-point geostatistics into numerical models provides a way to extend this analysis to the effects of geologic uncertainty on the results of flow simulations. We present two case studies to demonstrate the application of multiple-point geostatistics to numerical flow simulation in complex geologic settings with both static and dynamic conditioning data. Both cases involve the development of a training image from a complex geometric description of the geologic environment. Geologic heterogeneity is modeled stochastically by generating multiple equally-probable realizations, all consistent with the training image. Numerical flow simulation for each stochastic realization provides the basis for analyzing the effects of geologic uncertainty on simulated hydraulic response. The first case study is a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. The SNESIM algorithm is used to stochastically model geologic heterogeneity conditioned to the mapped surface geology as well as vertical drill-hole data. Numerical simulation of groundwater flow and contaminant transport through geologic models produces a distribution of hydraulic responses and contaminant concentration results. From this distribution of results, the probability of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary. The second case study considers a characteristic lava-flow aquifer system in Pahute Mesa, Nevada. A 3D training image is developed by using object-based simulation of parametric shapes to represent the key morphologic features of rhyolite lava flows embedded within ash-flow tuffs. In addition to vertical drill-hole data, transient pressure head data from aquifer tests can be used to constrain the stochastic model outcomes. The use of both static and dynamic conditioning data allows the identification of potential geologic structures that control hydraulic response. These case studies demonstrate the flexibility of the multiple-point geostatistics approach for considering multiple types of data and for developing sophisticated models of geologic heterogeneities that can be incorporated into numerical flow simulations.
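The exceedance-probability step reduces to a per-cell fraction across realizations; a sketch with stand-in concentration fields:

```python
import numpy as np

rng = np.random.default_rng(5)
# Stand-in for 100 transport simulations on stochastic geologic realizations:
# simulated concentration at each grid cell (random here, for illustration)
realizations = rng.lognormal(mean=-1.0, sigma=1.0, size=(100, 50, 50))

threshold = 0.5
# Probability of exceeding the threshold at each cell = fraction of
# realizations above it: a map of uncertainty in the plume boundary
p_exceed = (realizations > threshold).mean(axis=0)
print(p_exceed.shape, float(p_exceed.max()))
```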
2013-10-21
...depend on the quality of allocating resources. This work uses a reliability model of system and environmental covariates incorporating information at ... state space. Further, the use of condition variables allows for the direct modeling of maintenance impact with the assumption that a nominal value ... the model, in the application of aviation maintenance, can provide a useful estimation of reliability at multiple levels. Adjusted survival...
Validation of a Sensor-Driven Modeling Paradigm for Multiple Source Reconstruction with FFT-07 Data
2009-05-01
operational warning and reporting (information) systems that combine automated data acquisition, analysis, source reconstruction, display and distribution of ... report and to incorporate this operational capability into the integrative multiscale urban modeling system implemented in the computational ...
Dissecting effects of complex mixtures: who's afraid of informative priors?
Thomas, Duncan C; Witte, John S; Greenland, Sander
2007-03-01
Epidemiologic studies commonly investigate multiple correlated exposures, which are difficult to analyze appropriately. Hierarchical modeling provides a promising approach for analyzing such data by adding a higher-level structure or prior model for the exposure effects. This prior model can incorporate additional information on similarities among the correlated exposures and can be parametric, semiparametric, or nonparametric. We discuss the implications of applying these models and argue for their expanded use in epidemiology. While a prior model adds assumptions to the conventional (first-stage) model, all statistical methods (including conventional methods) make strong intrinsic assumptions about the processes that generated the data. One should thus balance prior modeling assumptions against assumptions of validity, and use sensitivity analyses to understand their implications. In doing so - and by directly incorporating into our analyses information from other studies or allied fields - we can improve our ability to distinguish true causes of disease from noise and bias.
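For a linear first-stage model with a known prior mean, the posterior-mode effects have a closed form, giving a compact sketch of the semi-Bayes idea (variance values illustrative):

```python
import numpy as np

def semi_bayes(X, y, prior_mean, tau2=0.25, sigma2=1.0):
    """Posterior-mode exposure effects under an informative prior
    b ~ N(prior_mean, tau2*I) for the linear model y = X b + e,
    e ~ N(0, sigma2*I). tau2 encodes how tightly the effects of similar
    exposures are believed to cluster (values here are illustrative)."""
    p = X.shape[1]
    A = X.T @ X / sigma2 + np.eye(p) / tau2
    return np.linalg.solve(A, X.T @ y / sigma2 + prior_mean / tau2)

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))            # six correlated exposures (toy data)
y = X @ np.array([0.4, 0.4, 0.35, 0.0, 0.0, 0.05]) + rng.normal(size=200)
prior_mean = np.full(6, 0.2)             # prior belief: effects cluster near 0.2
print(semi_bayes(X, y, prior_mean).round(2))
```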
ERIC Educational Resources Information Center
Saeki, Elina; Jimerson, Shane R.; Earhart, James; Hart, Shelley R.; Renshaw, Tyler; Singh, Renee D.; Stewart, Kaitlyn
2011-01-01
As many schools move toward a three-tier model that incorporates a Response to Intervention (RtI) service delivery model in the social, emotional, and behavioral domains, school psychologists may provide leadership. The decision-making process for filtering students through multiple tiers of support and intervention and examining change is an area…
Hatfield, Laura A.; Gutreuter, Steve; Boogaard, Michael A.; Carlin, Bradley P.
2011-01-01
Estimation of extreme quantal-response statistics, such as the concentration required to kill 99.9% of test subjects (LC99.9), remains a challenge in the presence of multiple covariates and complex study designs. Accurate and precise estimates of the LC99.9 for mixtures of toxicants are critical to ongoing control of a parasitic invasive species, the sea lamprey, in the Laurentian Great Lakes of North America. The toxicity of those chemicals is affected by local and temporal variations in water chemistry, which must be incorporated into the modeling. We develop multilevel empirical Bayes models for data from multiple laboratory studies. Our approach yields more accurate and precise estimation of the LC99.9 compared to alternative models considered. This study demonstrates that properly incorporating hierarchical structure in laboratory data yields better estimates of LC99.9 stream treatment values that are critical to larvae control in the field. In addition, out-of-sample prediction of the results of in situ tests reveals the presence of a latent seasonal effect not manifest in the laboratory studies, suggesting avenues for future study and illustrating the importance of dual consideration of both experimental and observational data.
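Under a fitted log10-logit dose-response curve, an extreme quantile such as the LC99.9 follows by inverting the link; a sketch with placeholder coefficients (the paper's multilevel empirical Bayes machinery adds covariates and hierarchy on top of this):

```python
from scipy.special import logit

def lc(p, b0, b1):
    """Concentration killing fraction p under a log10-logit dose-response
    model P(kill) = expit(b0 + b1*log10(conc)). Coefficients below are
    placeholders for illustration, not the paper's fitted values."""
    return 10 ** ((logit(p) - b0) / b1)

b0, b1 = -12.0, 8.0         # hypothetical intercept/slope on the log10 scale
print(lc(0.999, b0, b1))    # LC99.9
```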
Model of visual contrast gain control and pattern masking
NASA Technical Reports Server (NTRS)
Watson, A. B.; Solomon, J. A.
1997-01-01
We have implemented a model of contrast gain control in human vision that incorporates a number of key features, including a contrast sensitivity function, multiple oriented bandpass channels, accelerating nonlinearities, and a divisive inhibitory gain control pool. The parameters of this model have been optimized through a fit to recent data that describe masking of a Gabor function by cosine and Gabor masks [J. M. Foley, "Human luminance pattern mechanisms: masking experiments require a new model," J. Opt. Soc. Am. A 11, 1710 (1994)]. The model achieves a good fit to the data. We also demonstrate how the concept of recruitment may accommodate a variant of this model in which excitatory and inhibitory paths have a common accelerating nonlinearity, but which includes multiple channels tuned to different levels of contrast.
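The core response rule can be written down directly: excitation raised to a power, divided by a semisaturation constant plus an inhibitory pool (exponents and constants below are illustrative, not the fitted values):

```python
import numpy as np

def channel_response(contrasts, weights, p=2.4, q=2.0, sigma=0.01):
    """Divisive gain control in the style of Foley-type masking models:
    excitation c^p divided by sigma^q plus a weighted pool of inhibitory
    channel inputs c^q. Exponents and sigma are illustrative placeholders."""
    c = np.asarray(contrasts, float)
    excitation = c ** p
    pool = sigma ** q + np.sum(weights * c ** q)   # divisive inhibitory pool
    return excitation / pool

print(channel_response([0.05, 0.1], weights=np.array([1.0, 0.5])))
```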
Xuan, Ziming; Chaloupka, Frank J; Blanchette, Jason G; Nguyen, Thien H; Heeren, Timothy C; Nelson, Toben F; Naimi, Timothy S
2015-03-01
U.S. studies contribute heavily to the literature about the tax elasticity of demand for alcohol, and most U.S. studies have relied upon specific excise (volume-based) taxes for beer as a proxy for alcohol taxes. The purpose of this paper was to compare this conventional alcohol tax measure with more comprehensive tax measures (incorporating multiple tax and beverage types) in analyses of the relationship between alcohol taxes and adult binge drinking prevalence in U.S. states. Data on U.S. state excise, ad valorem and sales taxes from 2001 to 2010 were obtained from the Alcohol Policy Information System and other sources. For 510 state-year strata, we developed a series of weighted tax-per-drink measures that incorporated various combinations of tax and beverage types, and related these measures to state-level adult binge drinking prevalence data from the Behavioral Risk Factor Surveillance System surveys. In analyses pooled across all years, models using the combined tax measure explained approximately 20% of state binge drinking prevalence, and documented more negative tax elasticity (-0.09, P = 0.02 versus -0.005, P = 0.63) and price elasticity (-1.40, P < 0.01 versus -0.76, P = 0.15) compared with models using only the volume-based tax. In analyses stratified by year, the R-squares for models using the beer combined tax measure were stable across the study period (P = 0.11), while the R-squares for models relying only on the volume-based tax declined (P < 0.01). Compared with volume-based tax measures, combined tax measures (i.e. those incorporating volume-based tax and value-based taxes) yield substantial improvement in model fit and find more negative tax elasticity and price elasticity predicting adult binge drinking prevalence in U.S. states. © 2014 Society for the Study of Addiction.
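The combined-measure construction amounts to per-drink arithmetic; a sketch with hypothetical rates (the paper's weighting across beverage and tax types is more detailed):

```python
def tax_per_drink(specific_tax_per_gallon, ad_valorem_rate, price_per_drink):
    """Combined beer tax per standard drink: the volume-based excise spread
    over ~10.7 twelve-oz drinks per gallon, plus the value-based tax applied
    to the drink price. Illustrative arithmetic, not the paper's weighting."""
    drinks_per_gallon = 128.0 / 12.0        # 12-oz servings per gallon of beer
    return (specific_tax_per_gallon / drinks_per_gallon
            + ad_valorem_rate * price_per_drink)

print(round(tax_per_drink(0.50, 0.06, 4.00), 3))   # ~$0.047 + $0.24 per drink
```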
Stochastic nature of Landsat MSS data
NASA Technical Reports Server (NTRS)
Labovitz, M. L.; Masuoka, E. J.
1987-01-01
A multiple series generalization of the ARIMA models is used to model Landsat MSS scan lines as sequences of vectors, each vector having four elements (bands). The purpose of this work is to investigate whether Landsat scan lines can be described by a general multiple series linear stochastic model and whether the coefficients of such a model vary as a function of satellite system and target attributes. To accomplish this objective, an exploratory experimental design was set up incorporating six factors, four representing target attributes - location, cloud cover, row (within location), and column (within location) - and two representing system attributes - satellite number and detector bank. Each factor was included in the design at two levels and, with two replicates per treatment, 128 scan lines were analyzed. The results of the analysis suggest that a multiple AR(4) model is an adequate representation across all scan lines. Furthermore, the coefficients of the AR(4) model vary with location, particularly changes in physiography (slope regimes), and with percent cloud cover, but are insensitive to changes in system attributes.
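Fitting a multiple-series AR(4) to a scan line is direct with statsmodels; synthetic data stand in for the radiance vectors here:

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
# Synthetic stand-in for one MSS scan line: 300 pixels x 4 bands with
# mild serial correlation (real radiance vectors would come from imagery)
scan = np.zeros((300, 4))
for t in range(1, 300):
    scan[t] = 0.7 * scan[t - 1] + rng.normal(size=4)

fit = VAR(scan).fit(4)      # multiple-series AR model of order 4
print(fit.coefs.shape)      # (4 lags, 4 bands, 4 bands)
```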
Multi-timescale data assimilation for atmosphere–ocean state estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steiger, Nathan; Hakim, Gregory
2016-06-24
Paleoclimate proxy data span seasonal to millennial timescales, and Earth's climate system has both high- and low-frequency components. Yet it is currently unclear how best to incorporate multiple timescales of proxy data into a single reconstruction framework and to also capture both high- and low-frequency components of reconstructed variables. Here we present a data assimilation approach that can explicitly incorporate proxy data at arbitrary timescales. The principal advantage of using such an approach is that it allows much more proxy data to inform a climate reconstruction, though there can be additional benefits. Through a series of offline data-assimilation-based pseudoproxy experiments, we find that atmosphere–ocean states are most skillfully reconstructed by incorporating proxies across multiple timescales compared to using proxies at short (annual) or long (~ decadal) timescales alone. Additionally, reconstructions that incorporate long-timescale pseudoproxies improve the low-frequency components of the reconstructions relative to using only high-resolution pseudoproxies. We argue that this is because time averaging high-resolution observations improves their covariance relationship with the slowly varying components of the coupled-climate system, which the data assimilation algorithm can exploit. These results are consistent across the climate models considered, despite the model variables having very different spectral characteristics. Furthermore, our results also suggest that it may be possible to reconstruct features of the oceanic meridional overturning circulation based on atmospheric surface temperature proxies, though here we find such reconstructions lack spectral power over a broad range of frequencies.
Bishop, Malachy; Rumrill, Phillip D; Roessler, Richard T
2015-01-01
This article presents a replication of Rumrill, Roessler, and Fitzgerald's 2004 analysis of a three-factor model of the impact of multiple sclerosis (MS) on quality of life (QOL). The three factors in the original model included illness-related, employment-related, and psychosocial adjustment factors. To test hypothesized relationships between QOL and illness-related, employment-related, and psychosocial variables using data from a survey of the employment concerns of Americans with MS (N = 1,839). An ex post facto, multiple correlational design was employed incorporating correlational and multiple regression analyses. QOL was positively related to educational level, employment status, job satisfaction, and job-match, and negatively related to number of symptoms, severity of symptoms, and perceived stress level. The three-factor model explained approximately 37 percent of the variance in QOL scores. The results of this replication confirm the continuing value of the three-factor model for predicting the QOL of adults with MS, and demonstrate the importance of medical, mental health, and vocational rehabilitation interventions and services in promoting QOL.
Kwon, Inchan; Choi, Eun Sil
2016-01-01
Multiple-site-specific incorporation of a noncanonical amino acid into a recombinant protein would be a very useful technique to generate multiple chemical handles for bioconjugation and multivalent binding sites for enhanced interaction. Previously, a combination of a mutant yeast phenylalanyl-tRNA synthetase variant and the yeast phenylalanyl-tRNA containing the AAA anticodon was used to incorporate a noncanonical amino acid into multiple UUU phenylalanine (Phe) codons in a site-specific manner. However, due to the less selective codon recognition of the AAA anticodon, there was significant misincorporation of the noncanonical amino acid into unwanted UUC Phe codons. To enhance codon selectivity, we explored degenerate leucine (Leu) codons instead of Phe degenerate codons. Combined use of the mutant yeast phenylalanyl-tRNA containing the CAA anticodon and the yPheRS_naph variant allowed incorporation of a phenylalanine analog, 2-naphthylalanine, into murine dihydrofolate reductase in response to multiple UUG Leu codons, but not to other Leu codon sites. Despite the moderate UUG codon occupancy by 2-naphthylalanine, these results successfully demonstrated that the concept of forced ambiguity of the genetic code can be achieved for the Leu codons, available for multiple-site-specific incorporation. PMID:27028506
Taplay, Karyn; Jack, Susan M; Baxter, Pamela; Eva, Kevin; Martin, Lynn
2014-01-01
Purpose. To create a substantive mid-range theory explaining how the organizational cultures of undergraduate nursing programs shape the adoption and incorporation of mid- to high-level technical fidelity simulators as a teaching strategy within curricula. Method. A constructivist grounded theory was used to guide this study, which was conducted in Ontario, Canada, during 2011-12. Semistructured interviews (n = 43) with participants that included nursing administrators, nursing faculty, and simulation leaders across multiple programs (n = 13) informed this study. Additionally, key documents (n = 67) were reviewed. Purposeful and theoretical sampling was used and data were collected and analyzed simultaneously. Data were compared among and between sites. Findings. The organizational elements that shape simulation in nursing (OESSN) model depicts five key organizational factors at the nursing program level that shaped the adoption and incorporation of simulation: (1) leaders working in tandem, (2) information exchange, (3) physical locale, (4) shared motivators, and (5) scaffolding to manage change. Conclusions. The OESSN model provides an explanation of the organizational factors that contributed to the adoption and incorporation of simulation into nursing curricula. Nursing programs that use the OESSN model may experience a more rapid or broad uptake of simulation when organizational factors that impact adoption and incorporation are considered and planned for.
Multiple causes of nonstationarity in the Weihe annual low-flow series
NASA Astrophysics Data System (ADS)
Xiong, Bin; Xiong, Lihua; Chen, Jie; Xu, Chong-Yu; Li, Lingqi
2018-02-01
Under the background of global climate change and local anthropogenic activities, multiple driving forces have introduced various nonstationary components into low-flow series. This has led to a high demand for low-flow frequency analysis that accommodates nonstationary conditions. In this study, through a nonstationary frequency analysis framework with the generalized linear model (GLM) to allow time-varying distribution parameters, multiple explanatory variables were incorporated to explain the variation in low-flow distribution parameters. These variables comprise three indices of human activities (HAs: population, POP; irrigation area, IAR; and gross domestic product, GDP) and eight indices measuring climate and catchment conditions (total precipitation P, mean frequency of precipitation events λ, temperature T, potential evapotranspiration EP, climate aridity index AIEP, base-flow index BFI, recession constant K, and the recession-related aridity index AIK). This framework was applied to model the annual minimum flow series of both the Huaxian and Xianyang gauging stations on the Weihe River, China (also known as the Wei He River). The results from stepwise regression for the optimal explanatory variables show that the variables related to irrigation, recession, temperature, and precipitation play an important role in the modeling. Specifically, analysis of the annual minimum 30-day flow at Huaxian shows that a nonstationary distribution model with any one of the explanatory variables outperforms the one without explanatory variables, that the nonstationary gamma distribution model with four optimal variables is the best model, and that AIK is of the highest relative importance among these four variables, followed by IAR, BFI, and AIEP. We conclude that the incorporation of multiple indices related to low-flow generation permits tracing various driving forces. The established link in nonstationary analysis will be beneficial for analyzing future occurrences of low-flow extremes in similar areas.
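A minimal analogue of the time-varying-parameter framework: a gamma GLM with a log link that lets the low-flow mean depend on a covariate (data and covariate synthetic, for illustration only):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
years = np.arange(1960, 2010)
iar = np.linspace(0.1, 1.0, years.size)      # hypothetical irrigation-area index
mean_flow = 8.0 - 4.0 * iar                  # low-flow mean declines with IAR
low_flow = rng.gamma(shape=5.0, scale=mean_flow / 5.0)

# Gamma GLM with log link: the distribution mean varies with the covariate,
# a minimal analogue of a nonstationary (time-varying-parameter) model
X = sm.add_constant(iar)
fit = sm.GLM(low_flow, X,
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(fit.params)
```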
Maximum Likelihood Item Easiness Models for Test Theory Without an Answer Key
Batchelder, William H.
2014-01-01
Cultural consensus theory (CCT) is a data aggregation technique with many applications in the social and behavioral sciences. We describe the intuition and theory behind a set of CCT models for continuous type data using maximum likelihood inference methodology. We describe how bias parameters can be incorporated into these models. We introduce two extensions to the basic model in order to account for item rating easiness/difficulty. The first extension is a multiplicative model and the second is an additive model. We show how the multiplicative model is related to the Rasch model. We describe several maximum-likelihood estimation procedures for the models and discuss issues of model fit and identifiability. We describe how the CCT models could be used to give alternative consensus-based measures of reliability. We demonstrate the utility of both the basic and extended models on a set of essay rating data and give ideas for future research. PMID:29795812
Genomic-based multiple-trait evaluation in Eucalyptus grandis using dominant DArT markers.
Cappa, Eduardo P; El-Kassaby, Yousry A; Muñoz, Facundo; Garcia, Martín N; Villalba, Pamela V; Klápště, Jaroslav; Marcucci Poltri, Susana N
2018-06-01
We investigated the impact of combining the pedigree- and genomic-based relationship matrices in a multiple-trait individual-tree mixed model (a.k.a., multiple-trait combined approach) on the estimates of heritability and on the genomic correlations between growth and stem straightness in an open-pollinated Eucalyptus grandis population. Additionally, the added advantage of incorporating genomic information on the theoretical accuracies of parents and offspring breeding values was evaluated. Our results suggested that the use of the combined approach for estimating heritabilities and additive genetic correlations in multiple-trait evaluations is advantageous and including genomic information increases the expected accuracy of breeding values. Furthermore, the multiple-trait combined approach was proven to be superior to the single-trait combined approach in predicting breeding values, in particular for low-heritability traits. Finally, our results advocate the use of the combined approach in forest tree progeny testing trials, specifically when a multiple-trait individual-tree mixed model is considered. Copyright © 2018 Elsevier B.V. All rights reserved.
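One common device in combined pedigree-genomic evaluations is blending the two relationship matrices over the genotyped individuals; a sketch (the paper's exact combined-matrix construction may differ):

```python
import numpy as np

def blended_relationship(A, G, w=0.95):
    """Blend of genomic (G) and pedigree (A) relationship matrices over the
    genotyped individuals, w*G + (1-w)*A, a common device for keeping G
    invertible and tempering shrinkage. The weight w is illustrative."""
    return w * np.asarray(G, float) + (1.0 - w) * np.asarray(A, float)

A = np.array([[1.00, 0.25], [0.25, 1.00]])    # pedigree kinship (toy)
G = np.array([[1.02, 0.31], [0.31, 0.98]])    # marker-based kinship (toy)
print(blended_relationship(A, G))
```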
NASA Astrophysics Data System (ADS)
Fang, Jinwei; Zhou, Hui; Zhang, Qingchen; Chen, Hanming; Wang, Ning; Sun, Pengyuan; Wang, Shucheng
2018-01-01
It is critically important to assess the effectiveness of elastic full waveform inversion (FWI) algorithms when FWI is applied to real land seismic data that include strong surface and multiple waves related to the air-earth boundary. In this paper, we review the realization of the free surface boundary condition in staggered-grid finite-difference (FD) discretizations of the elastic wave equation, and analyze the impact of the free surface on FWI results. To reduce input/output (I/O) operations in the gradient calculation, we adopt the boundary value reconstruction method to rebuild the source wavefields during the backward propagation of the residual data. A time-domain multiscale inversion strategy is conducted using a convolutional objective function, and a multi-GPU parallel programming technique is used to further accelerate our elastic FWI. Forward simulation and elastic FWI examples without and with the free surface are shown and analyzed, respectively. Numerical results indicate that elastic FWI that does not incorporate the free surface fails to recover a good inversion result from the Rayleigh-wave-contaminated observed data. By contrast, when the free surface is incorporated into FWI, the inversion results become better. We also discuss the dependency of Rayleigh-waveform-incorporated FWI on the accuracy of the initial models, especially the accuracy of the shallow part of the initial models.
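The zero-traction free-surface condition is easiest to see in one dimension; a minimal staggered-grid SH sketch with illustrative values:

```python
import numpy as np

# 1D staggered-grid SH propagation with a free surface at z = 0: a minimal
# illustration of the zero-traction condition (all values illustrative).
nz, dz, dt, nt = 200, 5.0, 5e-4, 1000
rho, vs = 2000.0, 1500.0
mu = rho * vs**2          # shear modulus

v = np.zeros(nz)          # particle velocity at integer depth nodes
s = np.zeros(nz - 1)      # shear stress at staggered half nodes
v[nz // 2] = 1.0          # impulsive initial disturbance at depth

for _ in range(nt):
    # free surface: traction vanishes above the first node, so the stress
    # just above z = 0 is taken as zero in the surface velocity update
    v[0]    += dt / (rho * dz) * (s[0] - 0.0)
    v[1:-1] += dt / (rho * dz) * (s[1:] - s[:-1])
    s       += dt * mu / dz * (v[1:] - v[:-1])

print(v.min(), v.max())
```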
INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS
Stable isotopes are often used as natural labels to quantify the contributions of multiple sources to a mixture. For example, C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source li...
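Before the concentration-dependent extension, the standard dual-isotope, three-source model is just a small linear system; a minimal sketch with hypothetical source and mixture signatures:

```python
import numpy as np

# Dual-isotope (d13C, d15N), three-source linear mixing model: two isotope
# mass-balance equations plus the constraint that source fractions sum to 1.
# A concentration-dependent extension would additionally weight each source
# term by that source's elemental concentration.
sources = np.array([
    [-21.0,  4.0],   # source 1 signature (d13C, d15N), hypothetical
    [-26.5,  7.5],   # source 2
    [-12.0, 11.0],   # source 3
])
mixture = np.array([-19.0, 8.0])   # observed consumer signature

A = np.vstack([sources.T, np.ones(3)])   # 3x3 mass-balance system
b = np.append(mixture, 1.0)
f = np.linalg.solve(A, b)
print({name: round(x, 3) for name, x in zip(["f1", "f2", "f3"], f)})
```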
USDA-ARS's Scientific Manuscript database
In recent decades, there has been increased interest in ecosystem services among landowners, and a growing diversity of stakeholders on rangelands. Given these changes, management cannot focus solely on maximizing ranch proceeds, but must also incorporate ecosystem service goals to sustain resources...
Incorporating Japan into the World History Curriculum: An Integrative Model
ERIC Educational Resources Information Center
Dennehy, Kristine
2008-01-01
According to the California Department of Education's Curriculum Framework, the secondary curriculum for grades nine through twelve is geared toward students who are beginning "to develop [an] abstract understanding of historical causality--the often complex patterns of relationships between historical events, their multiple antecedents, and…
Modeling and roles of meteorological factors in outbreaks of highly pathogenic avian influenza H5N1.
Biswas, Paritosh K; Islam, Md Zohorul; Debnath, Nitish C; Yamage, Mat
2014-01-01
The highly pathogenic avian influenza A virus subtype H5N1 (HPAI H5N1) is a deadly zoonotic pathogen. Its persistence in poultry in several countries is a potential threat: a mutant or genetically reassorted progenitor might cause a human pandemic. Its worldwide eradication from poultry is important to protect public health. The global trend of outbreaks of influenza attributable to HPAI H5N1 shows a clear seasonality. Meteorological factors might be associated with such a trend but have not been studied. For the first time, we analyze the role of meteorological factors in the occurrences of HPAI outbreaks in Bangladesh. We employed autoregressive integrated moving average (ARIMA) and multiplicative seasonal autoregressive integrated moving average (SARIMA) models to assess the roles of different meteorological factors in outbreaks of HPAI. Outbreaks were modeled best when multiplicative seasonality was incorporated. Incorporating any meteorological variable(s) as inputs did not improve the performance of any multivariable models, but relative humidity (RH) was a significant covariate in several ARIMA and SARIMA models with different autoregressive and moving average orders. The variable cloud cover was also a significant covariate in two SARIMA models, and air temperature along with RH might be a predictor when a moving average (MA) order at lag 1 month is considered.
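As a sketch of this style of analysis (not the authors' fitted models), a multiplicative SARIMA with RH as an exogenous covariate can be specified in statsmodels; the monthly series and the (p,d,q)x(P,D,Q,s) orders below are invented for the example:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly series: outbreak counts plus relative humidity (RH).
rng = np.random.default_rng(0)
n = 96
rh = 60 + 20 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 3, n)
outbreaks = np.maximum(0, 5 + 0.1 * rh + rng.normal(0, 2, n)).round()

# Multiplicative seasonal ARIMA with RH as an exogenous covariate.
model = SARIMAX(outbreaks, exog=rh, order=(1, 0, 1),
                seasonal_order=(1, 1, 1, 12))
result = model.fit(disp=False)
print(result.summary().tables[1])   # RH coefficient and its significance
```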
Global change and terrestrial plant community dynamics
Franklin, Janet; Serra-Diaz, Josep M.; Syphard, Alexandra D.; Regan, Helen M.
2016-01-01
Anthropogenic drivers of global change include rising atmospheric concentrations of carbon dioxide and other greenhouse gasses and resulting changes in the climate, as well as nitrogen deposition, biotic invasions, altered disturbance regimes, and land-use change. Predicting the effects of global change on terrestrial plant communities is crucial because of the ecosystem services vegetation provides, from climate regulation to forest products. In this paper, we present a framework for detecting vegetation changes and attributing them to global change drivers that incorporates multiple lines of evidence from spatially extensive monitoring networks, distributed experiments, remotely sensed data, and historical records. Based on a literature review, we summarize observed changes and then describe modeling tools that can forecast the impacts of multiple drivers on plant communities in an era of rapid change. Observed responses to changes in temperature, water, nutrients, land use, and disturbance show strong sensitivity of ecosystem productivity and plant population dynamics to water balance and long-lasting effects of disturbance on plant community dynamics. Persistent effects of land-use change and human-altered fire regimes on vegetation can overshadow or interact with climate change impacts. Models forecasting plant community responses to global change incorporate shifting ecological niches, population dynamics, species interactions, spatially explicit disturbance, ecosystem processes, and plant functional responses. Monitoring, experiments, and models evaluating multiple change drivers are needed to detect and predict vegetation changes in response to 21st century global change. PMID:26929338
USDA-ARS's Scientific Manuscript database
To address the multiple challenges to food security posed by global climate change, population growth and rising incomes, plant breeders are developing new crop varieties that can enhance both agricultural productivity and environmental sustainability. Current breeding practices, however, are unable...
Calls for Multiple Indices Incorporating Multiculturalism in Content Analysis
ERIC Educational Resources Information Center
Lee, Dong-gwi
2005-01-01
This reaction evaluates three content analyses that investigated separate aspects of research articles published in major counseling psychology journals: (a) institutional research productivity, (b) use of structural equation modeling, and (c) use of theory-driven research. The evaluation focuses on the adequacy of indices used in the content…
Online Communication and Information Technology Education
ERIC Educational Resources Information Center
Heinze, Aleksej; Procter, Chris
2006-01-01
Blended Learning, a learning facilitation that incorporates different modes of delivery, models of teaching, and learning styles, introduces multiple media to the dialog between the learner and the facilitator. This paper examines online communication as the link between established theory of learning and literature on e-learning in order to…
Change Detection, Multiple Controllers, and Dynamic Environments: Insights from the Brain
ERIC Educational Resources Information Center
Pearson, John M.; Platt, Michael L.
2013-01-01
Foundational studies in decision making focused on behavior as the most accessible and reliable data on which to build theories of choice. More recent work, however, has incorporated neural data to provide insights unavailable from behavior alone. Among other contributions, these studies have validated reinforcement learning models by…
Armen, Roger S; Chen, Jianhan; Brooks, Charles L
2009-10-13
Incorporating receptor flexibility into molecular docking should improve results for flexible proteins. However, the incorporation of explicit all-atom flexibility with molecular dynamics for the entire protein chain may also introduce significant error and "noise" that could decrease docking accuracy and deteriorate the ability of a scoring function to rank native-like poses. We address this apparent paradox by comparing the success of several flexible receptor models in cross-docking and multiple receptor ensemble docking for p38α mitogen-activated protein (MAP) kinase. Explicit all-atom receptor flexibility has been incorporated into a CHARMM-based molecular docking method (CDOCKER) using both molecular dynamics (MD) and torsion angle molecular dynamics (TAMD) for the refinement of predicted protein-ligand binding geometries. These flexible receptor models have been evaluated, and the accuracy and efficiency of TAMD sampling is directly compared to MD sampling. Several flexible receptor models are compared, encompassing flexible side chains, flexible loops, multiple flexible backbone segments, and treatment of the entire chain as flexible. We find that although including side chain and some backbone flexibility is required for improved docking accuracy as expected, docking accuracy also diminishes as additional and unnecessary receptor flexibility is included into the conformational search space. Ensemble docking results demonstrate that including protein flexibility leads to improved agreement with binding data for 227 active compounds. This comparison also demonstrates that a flexible receptor model enriches high affinity compound identification without significantly increasing the number of false positives from low affinity compounds.
Behavioral Modeling of Adversaries with Multiple Objectives in Counterterrorism.
Mazicioglu, Dogucan; Merrick, Jason R W
2018-05-01
Attacker/defender models have primarily assumed that each decisionmaker optimizes the cost of the damage inflicted and its economic repercussions from their own perspective. Two streams of recent research have sought to extend such models. One stream suggests that it is more realistic to consider attackers with multiple objectives, but this research has not included the adaption of the terrorist with multiple objectives to defender actions. The other stream builds off experimental studies that show that decisionmakers deviate from optimal rational behavior. In this article, we extend attacker/defender models to incorporate multiple objectives that a terrorist might consider in planning an attack. This includes the tradeoffs that a terrorist might consider and their adaption to defender actions. However, we must also consider experimental evidence of deviations from the rationality assumed in the commonly used expected utility model in determining such adaption. Thus, we model the attacker's behavior using multiattribute prospect theory to account for the attacker's multiple objectives and deviations from rationality. We evaluate our approach by considering an attacker with multiple objectives who wishes to smuggle radioactive material into the United States and a defender who has the option to implement a screening process to hinder the attacker. We discuss the problems with implementing such an approach, but argue that research in this area must continue to avoid misrepresenting terrorist behavior in determining optimal defensive actions.
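A minimal sketch of a multiattribute prospect-theory evaluation, using Tversky and Kahneman's commonly cited 1992 parameter estimates; the objectives, weights, reference points, and option scores are hypothetical:

```python
import numpy as np

# Prospect-theory value function: outcomes are valued relative to a
# reference point, with diminishing sensitivity (alpha, beta < 1) and
# loss aversion (lam > 1).
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    x = np.asarray(x, dtype=float)
    v = np.empty_like(x)
    pos = x >= 0
    v[pos] = x[pos] ** alpha
    v[~pos] = -lam * (-x[~pos]) ** beta
    return v

def multiattribute_value(outcomes, refs, weights):
    # Weighted sum of attribute-wise prospect-theory values.
    return float(np.dot(weights, pt_value(np.asarray(outcomes) - refs)))

# Two hypothetical attack options scored on two attacker objectives.
refs = np.array([5.0, 5.0])        # attacker's reference levels
weights = [0.6, 0.4]               # relative importance of the objectives
for name, outcome in {"option A": [10.0, 3.0], "option B": [6.0, 8.0]}.items():
    print(name, round(multiattribute_value(outcome, refs, weights), 3))
```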
GLASS 2.0: An Operational, Multimodal, Bayesian Earthquake Data Association Engine
NASA Astrophysics Data System (ADS)
Benz, H.; Johnson, C. E.; Patton, J. M.; McMahon, N. D.; Earle, P. S.
2015-12-01
The legacy approach to automated detection and determination of hypocenters relies on arrival-time stacking algorithms. Examples of such algorithms are the associator Binder, which has been in continuous use in many USGS-supported regional seismic networks since the 1980s, and its spherical-earth successor, GLASS 1.0, in service at the USGS National Earthquake Information Center for over 10 years. The principal shortcomings of the legacy approach are that 1) it can only use phase arrival times, 2) it does not adequately address the problems of extreme variations in station density worldwide, 3) it cannot incorporate multiple phase models or statistical attributes of phases with distance, and 4) it cannot incorporate noise model attributes of individual stations. Previously we introduced a theoretical framework of a new associator using a Bayesian kernel stacking approach to approximate a joint probability density function for hypocenter localization. More recently we added station- and phase-specific Bayesian constraints to the association process. GLASS 2.0 incorporates a multiplicity of earthquake-related data including phase arrival times, back-azimuth and slowness information from array beamforming, arrival times from waveform cross-correlation processing, and geographic constraints from real-time social media reports of ground shaking. We demonstrate its application by modeling an aftershock sequence using dozens of stations that recorded tens of thousands of earthquakes over a period of one month. We also demonstrate GLASS 2.0 performance regionally and teleseismically using the globally distributed real-time monitoring system at NEIC.
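A toy 1-D illustration of kernel stacking (not GLASS itself): each station's pick contributes a Gaussian kernel on its travel-time residual over candidate epicenters, and the stacked kernels approximate a location PDF. The geometry, wave speed, and picks are invented, and origin time and depth are held fixed for brevity:

```python
import numpy as np

# 1-D kernel stacking: stack per-station Gaussian kernels on travel-time
# residuals over a grid of candidate epicenters.
v = 6.0                                  # assumed wave speed (km/s)
stations = np.array([0.0, 30.0, 80.0])   # station positions (km)
arrivals = np.array([3.3, 1.7, 10.0])    # picked arrival times (s)
t0, sigma = 0.0, 0.5                     # fixed origin time; kernel width (s)

grid = np.linspace(0.0, 100.0, 1001)     # candidate epicenters (km)
stack = np.zeros_like(grid)
for x_s, t_obs in zip(stations, arrivals):
    t_pred = t0 + np.abs(grid - x_s) / v
    stack += np.exp(-0.5 * ((t_obs - t_pred) / sigma) ** 2)

print("most probable epicenter: %.1f km" % grid[np.argmax(stack)])
```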
Double multiple streamtube model with recent improvements
NASA Astrophysics Data System (ADS)
Paraschivoiu, I.; Delclaux, F.
1983-06-01
The objective of the present paper is to show the new capabilities of the double multiple streamtube (DMS) model for predicting the aerodynamic loads and performance of the Darrieus vertical-axis turbine. The original DMS model has been improved (DMSV model) by considering the variation in the upwind and downwind induced velocities as a function of the azimuthal angle for each streamtube. A comparison is made of the rotor performance for several blade geometries (parabola, catenary, troposkien, and Sandia shape). A new formulation is given for an approximate troposkien shape by considering the effect of the gravitational field. The effects of three NACA symmetrical profiles, 0012, 0015 and 0018, on the aerodynamic performance of the turbine are shown. Finally, a semiempirical dynamic-stall model has been incorporated and a better approximation obtained for modeling the local aerodynamic forces and performance for a Darrieus rotor.
Representation and presentation of requirements knowledge
NASA Technical Reports Server (NTRS)
Johnson, W. L.; Feather, Martin S.; Harris, David R.
1992-01-01
An approach to representation and presentation of knowledge used in ARIES, an experimental requirements/specification environment, is described. The approach applies the notion of a representation architecture to the domain of software engineering and incorporates a strong coupling to a transformation system. It is characterized by a single highly expressive underlying representation, interfaced simultaneously to multiple presentations, each with notations of differing degrees of expressivity. This enables analysts to use multiple languages for describing systems and have these descriptions yield a single consistent model of the system.
The Effects of Towfish Motion on Sidescan Sonar Images: Extension to a Multiple-Beam Device
1994-02-01
simulation, the raw simulated sidescan image is formed from pixels G_k, which are the sum of energies E_n assigned to the nearest range-bin k as noted in... for stable motion at constant velocity V0, are applied to (divided into) the G_k, and the simulated sidescan image is ready to display. Maximal energy... limitation is likely to apply to all multiple-beam sonars of similar construction. The yaw correction was incorporated in the MBEAM model by an
Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford
2015-06-01
A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel in years 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation.
For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to the highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach to assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty, which has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
NASA Astrophysics Data System (ADS)
Schumann, Andreas; Oppel, Henning
2017-04-01
To represent the hydrological behaviour of catchments, a model should reflect the hydrologically most relevant catchment characteristics. These are heterogeneously distributed within a watershed but often interrelated and subject to a certain spatial organisation. Since common models are mostly based on fundamental assumptions about hydrological processes, reducing the variance of catchment properties and incorporating the spatial organisation of the catchment are both desirable. We have developed a method that combines the idea of the width function, used for determination of the geomorphologic unit hydrograph, with information about soil or topography. With this method we are able to assess the spatial organisation of selected catchment characteristics. An algorithm was developed that structures a watershed into sub-basins and other spatial units to minimise its heterogeneity. The outcomes of this algorithm are used for the spatial setup of a semi-distributed model. Since the spatial organisation of a catchment is not bound to a single characteristic, we have to embed information on multiple catchment properties. For this purpose we applied a fuzzy-based method to combine the spatial setups for multiple single characteristics into a unified, optimal spatial differentiation. Utilizing this method, we are able to propose a spatial structure for a semi-distributed hydrological model, comprising the definition of sub-basins and a zonal classification within each sub-basin. Besides the improved spatial structuring, the analysis improves modelling in another way: the spatial variability of catchment characteristics, reduced to a minimum of heterogeneity within the zones, can be used to derive parameter constraints for calibration. In a case study, both options were used to explore the benefits of incorporating the spatial organisation and the derived parameter constraints in the parametrisation of an HBV-96 model. We use two benchmark model setups (lumped, and semi-distributed by common approaches) to address the benefits at different temporal and spatial scales. Moreover, the benefits for calibration effort, model performance in validation periods, and process extrapolation are shown.
NASA Astrophysics Data System (ADS)
Gao, B.; Nakano, S.; Harada, H.; Miyamura, Y.; Kakimoto, K.
2017-09-01
We used an advanced 3D model to study the effect of crystal orientation on dislocation multiplication in single-crystal silicon under accurate control of the temperature cooling history. The incorporation of the anisotropy of the crystal lattice into the model is explained in detail, and an algorithm for accurate control of the furnace temperature is also presented. This solver can dynamically track the history of dislocation generation for different orientations during thermal processing of single-crystal silicon. Four orientations, [001], [110], [111], and [112], have been examined, and a comparison of the dislocation distributions is provided.
Managing data from multiple disciplines, scales, and sites to support synthesis and modeling
Olson, R. J.; Briggs, J. M.; Porter, J.H.; Mah, Grant R.; Stafford, S.G.
1999-01-01
The synthesis and modeling of ecological processes at multiple spatial and temporal scales involves bringing together and sharing data from numerous sources. This article describes a data and information system model that facilitates assembling, managing, and sharing diverse data from multiple disciplines, scales, and sites to support integrated ecological studies. Cross-site scientific-domain working groups coordinate the development of data associated with their particular scientific working group, including decisions about data requirements, data to be compiled, data formats, derived data products, and schedules across the sites. The Web-based data and information system consists of nodes for each working group plus a central node that provides data access, project information, data query, and other functionality. The approach incorporates scientists and computer experts in the working groups and provides incentives for individuals to submit documented data to the data and information system.
Optimizing modelling in iterative image reconstruction for preclinical pinhole PET
NASA Astrophysics Data System (ADS)
Goorden, Marlies C.; van Roosmalen, Jarno; van der Have, Frans; Beekman, Freek J.
2016-05-01
The recently developed versatile emission computed tomography (VECTor) technology enables high-energy SPECT and simultaneous SPECT and PET of small animals at sub-mm resolutions. VECTor uses dedicated clustered pinhole collimators mounted in a scanner with three stationary large-area NaI(Tl) gamma detectors. Here, we develop and validate dedicated image reconstruction methods that compensate for image degradation by incorporating accurate models for the transport of high-energy annihilation gamma photons. Ray tracing software was used to calculate photon transport through the collimator structures and into the gamma detector. Input to this code are several geometric parameters estimated from system calibration with a scanning 99mTc point source. Effects on reconstructed images of (i) modelling variable depth-of-interaction (DOI) in the detector, (ii) incorporating photon paths that go through multiple pinholes (‘multiple-pinhole paths’ (MPP)), and (iii) including various amounts of point spread function (PSF) tail were evaluated. Imaging 18F in resolution and uniformity phantoms showed that including large parts of PSFs is essential to obtain good contrast-noise characteristics and that DOI modelling is highly effective in removing deformations of small structures, together leading to 0.75 mm resolution PET images of a hot-rod Derenzo phantom. Moreover, MPP modelling reduced the level of background noise. These improvements were also clearly visible in mouse images. Performance of VECTor can thus be significantly improved by accurately modelling annihilation gamma photon transport.
Powathil, Gibin G.; Adamson, Douglas J. A.; Chaplain, Mark A. J.
2013-01-01
In this paper we use a hybrid multiscale mathematical model that incorporates both individual cell behaviour through the cell-cycle and the effects of the changing microenvironment through oxygen dynamics to study the multiple effects of radiation therapy. The oxygenation status of the cells is considered as one of the important prognostic markers for determining radiation therapy, as hypoxic cells are less radiosensitive. Another factor that critically affects radiation sensitivity is cell-cycle regulation. The effects of radiation therapy are included in the model using a modified linear quadratic model for the radiation damage, incorporating the effects of hypoxia and cell-cycle in determining the cell-cycle phase-specific radiosensitivity. Furthermore, after irradiation, an individual cell's cell-cycle dynamics are intrinsically modified through the activation of pathways responsible for repair mechanisms, often resulting in a delay/arrest in the cell-cycle. The model is then used to study various combinations of multiple doses of cell-cycle dependent chemotherapies and radiation therapy, as radiation may work better by the partial synchronisation of cells in the most radiosensitive phase of the cell-cycle. Moreover, using this multi-scale model, we investigate the optimum sequencing and scheduling of these multi-modality treatments, and the impact of internal and external heterogeneity on the spatio-temporal patterning of the distribution of tumour cells and their response to different treatment schedules. PMID:23874170
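A minimal sketch of the radiation component: linear-quadratic survival with a hypoxia-dependent oxygen modification factor. The alpha/beta values and modification factors are illustrative placeholders, not the paper's calibrated parameters:

```python
import numpy as np

# Linear-quadratic survival after dose d (Gy), with hypoxia handled by an
# oxygen modification factor (OMF) scaling the effective dose.
def surviving_fraction(dose_gy, alpha=0.3, beta=0.03, omf=1.0):
    d_eff = omf * dose_gy                 # omf < 1 for hypoxic cells
    return np.exp(-(alpha * d_eff + beta * d_eff ** 2))

for omf, label in [(1.0, "well-oxygenated"), (0.5, "hypoxic")]:
    print(label, round(surviving_fraction(2.0, omf=omf), 4))
```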
2006-06-07
inquirers based on the underlying philosophies of Leibniz, Locke, Kant, Hegel, and Singer. These inquirers share capabilities and can work together in a... encourages and supports socially oriented knowledge development. The Kantian Inquirer: Kantian systems are the archetype of multi-model, synthetic systems... (Mason and Mitroff, 1973). The Kantian inquirer is designed to incorporate both multiple perspectives and facts to determine models that are
Hatfield, L.A.; Gutreuter, S.; Boogaard, M.A.; Carlin, B.P.
2011-01-01
Estimation of extreme quantal-response statistics, such as the concentration required to kill 99.9% of test subjects (LC99.9), remains a challenge in the presence of multiple covariates and complex study designs. Accurate and precise estimates of the LC99.9 for mixtures of toxicants are critical to ongoing control of a parasitic invasive species, the sea lamprey, in the Laurentian Great Lakes of North America. The toxicity of those chemicals is affected by local and temporal variations in water chemistry, which must be incorporated into the modeling. We develop multilevel empirical Bayes models for data from multiple laboratory studies. Our approach yields more accurate and precise estimation of the LC99.9 compared to alternative models considered. This study demonstrates that properly incorporating hierarchical structure in laboratory data yields better estimates of LC99.9 stream treatment values that are critical to larvae control in the field. In addition, out-of-sample prediction of the results of in situ tests reveals the presence of a latent seasonal effect not manifest in the laboratory studies, suggesting avenues for future study and illustrating the importance of dual consideration of both experimental and observational data.
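For intuition, the naive single-study version of the problem fits one logistic dose-response curve and inverts it at p = 0.999; the multilevel empirical Bayes machinery in the paper replaces exactly this step. Concentrations and kill counts below are synthetic:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic dose-response data: groups of 50 subjects per concentration.
conc = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])    # mg/L
n = np.full(conc.shape, 50)
killed = np.array([2, 8, 21, 35, 45, 49, 50])

# Logistic regression on log-concentration, then invert at p = 0.999.
X = sm.add_constant(np.log(conc))
fit = sm.GLM(np.column_stack([killed, n - killed]), X,
             family=sm.families.Binomial()).fit()
b0, b1 = fit.params

p = 0.999
lc999 = np.exp((np.log(p / (1 - p)) - b0) / b1)
print("estimated LC99.9 = %.2f mg/L" % lc999)
```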
Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity
Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates
2013-01-01
A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
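A minimal sketch of one generic current with two-gate Hodgkin-Huxley-type kinetics, integrated by forward Euler; all conductances, voltages, and time constants are invented rather than taken from the optimised model:

```python
import numpy as np

# One generic ionic current I = g * m * h * (V - E), with an activation
# gate m and an inactivation gate h relaxing to voltage-dependent steady
# states with fixed time constants.
def gate_inf(v, v_half, k):
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

def step(v, m, h, dt, g=1.0, e_rev=40.0):
    m_inf, h_inf = gate_inf(v, -30.0, 5.0), gate_inf(v, -60.0, -6.0)
    tau_m, tau_h = 1.0, 20.0                    # ms
    m += dt * (m_inf - m) / tau_m               # activation gate
    h += dt * (h_inf - h) / tau_h               # inactivation gate
    return g * m * h * (v - e_rev), m, h

# Forward-Euler response to a voltage step from -80 to 0 mV at t = 5 ms.
v, m, h, dt = -80.0, 0.0, 1.0, 0.01
trace = []
for t in np.arange(0.0, 50.0, dt):
    if t >= 5.0:
        v = 0.0
    i_ion, m, h = step(v, m, h, dt)
    trace.append(i_ion)
print("peak inward current: %.3f (arbitrary units)" % min(trace))
```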
An entropic barriers diffusion theory of decision-making in multiple alternative tasks
Sigman, Mariano; Cecchi, Guillermo A.
2018-01-01
We present a theory of decision-making in the presence of multiple choices that departs from traditional approaches by explicitly incorporating entropic barriers in a stochastic search process. We analyze response time data from an on-line repository of 15 million blitz chess games, and show that our model fits not just the mean and variance, but the entire response time distribution (over several response-time orders of magnitude) at every stage of the game. We apply the model to show that (a) higher cognitive expertise corresponds to the exploration of more complex solution spaces, and (b) reaction times of users at an on-line buying website can be similarly explained. Our model can be seen as a synergy between diffusion models used to model simple two-choice decision-making and planning agents in complex problem solving. PMID:29499036
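A caricature of the entropic-barrier idea (not the authors' fitted model): a decision is a path of sequential choices, a wrong branch forces a restart, and mean search time grows like exp(depth * ln k), so richer solution spaces give longer, heavy-tailed response times:

```python
import numpy as np

rng = np.random.default_rng(1)

def search_time(k, depth=5):
    # A decision = a path of `depth` correct choices among k alternatives;
    # a wrong branch forces a restart, so E[time] scales like k**depth.
    steps = 0
    while True:
        for _ in range(depth):
            steps += 1
            if rng.integers(k) != 0:     # wrong branch -> restart
                break
        else:                            # completed every level
            return steps

for k in (2, 3, 4):
    times = np.array([search_time(k) for _ in range(2000)])
    print(f"k={k}: mean={times.mean():.0f}, "
          f"95th pct={np.percentile(times, 95):.0f}")
```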
Hu, Yue-Hua; Kitching, Roger L.; Lan, Guo-Yu; Zhang, Jiao-Lin; Sha, Li-Qing; Cao, Min
2014-01-01
We have investigated the processes of community assembly using size classes of trees. Specifically, our work examined (1) whether point process models incorporating an effect of size class produce more realistic summary outcomes than models without this effect; and (2) which of three selected models incorporating, respectively, environmental effects, dispersal, and the joint effect of both, is most useful in explaining species-area relationships (SARs) and point dispersion patterns. For this evaluation we used tree species data from the 50-ha forest dynamics plot on Barro Colorado Island, Panama, and the comparable 20-ha plot at Bubeng, Southwest China. Our results demonstrated that incorporating a size-class effect dramatically improved the SAR estimation at both plots when the dispersal-only model was used. The joint-effect model produced a similar improvement, but only for the 50-ha plot in Panama. The point-pattern results were not improved by incorporation of size-class effects under any of the three models. Our results indicate that dispersal is likely to be a key process determining both SARs and point patterns. The environment-only model and joint-effects model were effective at the species level and the community level, respectively. We conclude that it is critical to use multiple summary characteristics when modelling spatial patterns at the species and community levels if a comprehensive understanding of the ecological processes that shape species’ distributions is sought; without this, results may have inherent biases. By influencing dispersal, the effect of size class contributes to species assembly and enhances our understanding of species coexistence. PMID:25251538
NASA Astrophysics Data System (ADS)
Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas
2018-02-01
Upscaling the permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and best fit the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when the multiple boundary method is used. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple-scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept. This makes it attractive for incorporation into any existing flow-based upscaling procedure, which helps in reducing the uncertainty of groundwater models.
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
Unified tensor model for space-frequency spreading-multiplexing (SFSM) MIMO communication systems
NASA Astrophysics Data System (ADS)
de Almeida, André LF; Favier, Gérard
2013-12-01
This paper presents a unified tensor model for space-frequency spreading-multiplexing (SFSM) multiple-input multiple-output (MIMO) wireless communication systems that combine space- and frequency-domain spreadings, followed by a space-frequency multiplexing. Spreading across space (transmit antennas) and frequency (subcarriers) adds resilience against deep channel fades and provides space and frequency diversities, while orthogonal space-frequency multiplexing enables multi-stream transmission. We adopt a tensor-based formulation for the proposed SFSM MIMO system that incorporates space, frequency, time, and code dimensions by means of the parallel factor model. The developed SFSM tensor model unifies the tensorial formulation of some existing multiple-access/multicarrier MIMO signaling schemes as special cases, while revealing interesting tradeoffs due to combined space, frequency, and time diversities which are of practical relevance for joint symbol-channel-code estimation. The performance of the proposed SFSM MIMO system using either a zero forcing receiver or a semi-blind tensor-based receiver is illustrated by means of computer simulation results under realistic channel and system parameters.
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1986-01-01
Computational predictions of turbulent flow in sharply curved 180 degree turn around ducts are presented. The CNS2D computer code is used to solve the equations of motion for two-dimensional incompressible flows transformed to a nonorthogonal body-fitted coordinate system. This procedure incorporates the pressure velocity correction algorithm SIMPLE-C to iteratively solve a discretized form of the transformed equations. A multiple scale turbulence model based on simplified spectral partitioning is employed to obtain closure. Flow field predictions utilizing the multiple scale model are compared to features predicted by the traditional single scale k-epsilon model. Tuning parameter sensitivities of the multiple scale model applied to turn around duct flows are also determined. In addition, a wall function approach based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients is tested. Turn around duct flow characteristics utilizing this modified wall law are presented and compared to results based on a standard wall treatment.
Using iPad Tablets for Self-modeling with Preschoolers: Videos versus Photos
ERIC Educational Resources Information Center
McCoy, Dacia M.; Morrison, Julie Q.; Barnett, Dave W.; Kalra, Hilary D.; Donovan, Lauren K.
2017-01-01
As technology becomes more accessible and acceptable in the preschool setting, teachers need effective strategies of incorporating it to address challenging behaviors. A nonconcurrent delayed multiple baseline design in combination with an alternating treatment design was utilized to investigate the effects of using iPad tablets to display video…
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
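For a single scenario and a single GMPM, the conditional spectrum reduces to the conditional-mean/conditional-sigma formulas below; the paper's exact calculation then weights such terms over the multiple causal earthquakes and GMPMs from the PSHA deaggregation. The medians, sigmas, and correlation are placeholders, not values from a real ground-motion model:

```python
import numpy as np

# Conditional mean and sigma of lnSa at period T, given Sa(T*) = target.
mu_ln = {"T*": np.log(0.45), "T": np.log(0.30)}   # GMPM median lnSa (g)
sig_ln = {"T*": 0.60, "T": 0.65}                  # GMPM lognormal sigmas
rho = 0.7                                         # corr(lnSa(T), lnSa(T*))
target_sa = 0.9                                   # Sa(T*) from PSHA (g)

eps_star = (np.log(target_sa) - mu_ln["T*"]) / sig_ln["T*"]
cs_mean = mu_ln["T"] + rho * eps_star * sig_ln["T"]       # conditional mean
cs_sigma = sig_ln["T"] * np.sqrt(1.0 - rho ** 2)          # conditional sigma
print("CS median Sa(T) = %.3f g, sigma_ln = %.3f"
      % (np.exp(cs_mean), cs_sigma))
```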
2015-11-09
as biokinetic or physiologically-based pharmacokinetic (PBPK) models, can readily incorporate multiple routes of exposure (e.g., baseline dietary ... (1998) and in the code provided as a supplement to O'Flaherty (2000), as noted above. An aspect of the O'Flaherty model that contrasts with the... dietary Pb; a stable Pb isotope was substituted for some of the dietary Pb for limited periods. Tracer Pb concentrations were measured in blood
Fu, Congsheng; Wang, Guiling; Goulden, Michael L.; ...
2016-05-17
Effects of hydraulic redistribution (HR) on hydrological, biogeochemical, and ecological processes have been demonstrated in the field, but the current generation of standard earth system models does not include a representation of HR. Though recent studies have examined the effect of incorporating HR into land surface models, few (if any) have done cross-site comparisons for contrasting climate regimes and multiple vegetation types via the integration of measurement and modeling. Here, we incorporated the HR scheme of Ryel et al. (2002) into the NCAR Community Land Model Version 4.5 (CLM4.5), and examined the ability of the resulting hybrid model to capture the magnitude of HR flux and/or soil moisture dynamics from which HR can be directly inferred, to assess the impact of HR on land surface water and energy budgets, and to explore how the impact may depend on climate regimes and vegetation conditions. Eight AmeriFlux sites with contrasting climate regimes and multiple vegetation types were studied, including the Wind River Crane site in Washington State, the Santa Rita Mesquite savanna site in southern Arizona, and six sites along the Southern California Climate Gradient. HR flux, evapotranspiration (ET), and soil moisture were properly simulated in the present study, even in the face of various uncertainties. Our cross-ecosystem comparison showed that the timing, magnitude, and direction (upward or downward) of HR vary across ecosystems, and incorporation of HR into CLM4.5 improved the model-measurement matches of evapotranspiration, Bowen ratio, and soil moisture, particularly during dry seasons. Lastly, our results also reveal that HR has important hydrological impact in ecosystems that have a pronounced dry season but are not overall so dry that sparse vegetation and very low soil moisture limit HR.
Multiagent intelligent systems
NASA Astrophysics Data System (ADS)
Krause, Lee S.; Dean, Christopher; Lehman, Lynn A.
2003-09-01
This paper will discuss a simulation approach based upon a family of agent-based models. As the demands placed upon simulation technology grow, from applications such as Effects Based Operations (EBO) and evaluation of indicators and warnings surrounding homeland defense to commercial needs such as financial risk management, current single-thread simulations will continue to show serious deficiencies. The types of "what if" analysis required to support these applications demand rapidly reconfigurable approaches capable of aggregating large models incorporating multiple viewpoints. The use of agent technology promises to provide a broad spectrum of models incorporating differing viewpoints through a synthesis of a collection of models. Each model would provide estimates to the overall scenario based upon its particular measure or aspect. An agent framework, denoted as the "family", would provide a common ontology in support of differing aspects of the scenario. This approach permits the future of modeling to change from viewing the problem as a single-thread simulation to taking into account multiple viewpoints from different models. Even as models are updated or replaced, the agent approach permits rapid inclusion in new or modified simulations. In this approach, the variety of low- and high-resolution information and its synthesis requires a family of models. Each agent "publishes" its support for a given measure, and each model provides its own estimates on the scenario based upon its particular measure or aspect. If more than one agent provides the same measure (e.g., cognitive), then the results from these agents are combined to form an aggregate measure response. The objective would be to inform and help calibrate a qualitative model, rather than merely to present highly aggregated statistical information. As each result is processed, the next action can then be determined. This is done by a top-level decision system that communicates with the family at the ontology level without any specific understanding of the processes (or model) behind each agent. The increasingly complex demand that simulations incorporate the breadth and depth of influencing factors makes a family of agent-based models a promising solution. This paper will discuss that solution, with the syntax and semantics necessary to support the approach.
Adaptive evolutionary walks require neutral intermediates in RNA fitness landscapes.
Rendel, Mark D
2011-01-01
In RNA fitness landscapes with interconnected networks of neutral mutations, neutral precursor mutations can play an important role in facilitating the accessibility of epistatic adaptive mutant combinations. I use an exhaustively surveyed fitness landscape model based on short sequence RNA genotypes (and their secondary structure phenotypes) to calculate the minimum rate at which mutants initially appearing as neutral are incorporated into an adaptive evolutionary walk. I show first, that incorporating neutral mutations significantly increases the number of point mutations in a given evolutionary walk when compared to estimates from previous adaptive walk models. Second, that incorporating neutral mutants into such a walk significantly increases the final fitness encountered on that walk - indeed evolutionary walks including neutral steps often reach the global optimum in this model. Third, and perhaps most importantly, evolutionary paths of this kind are often extremely winding in their nature and have the potential to undergo multiple mutations at a given sequence position within a single walk; the potential of these winding paths to mislead phylogenetic reconstruction is briefly considered.
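The contrast can be illustrated with a toy walk on a random landscape with fitness ties: take an uphill one-mutant step when one exists, otherwise stop (strictly adaptive) or drift along equal-fitness neighbours (neutral intermediates). The binary-string landscape is an invented stand-in for RNA genotype-phenotype maps:

```python
import numpy as np

rng = np.random.default_rng(0)
L, MAX_STEPS = 12, 300
cache = {}

def fit(g):
    # Memoised random fitness; rounding creates ties, i.e. neutral networks.
    if g not in cache:
        cache[g] = round(rng.random(), 1)
    return cache[g]

def walk(allow_neutral):
    g = tuple(rng.integers(0, 2, L))
    for steps in range(MAX_STEPS):
        nbrs = [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L)]
        up = [n for n in nbrs if fit(n) > fit(g)]
        flat = [n for n in nbrs if fit(n) == fit(g)]
        if up:                                   # adaptive step
            g = up[rng.integers(len(up))]
        elif allow_neutral and flat:             # neutral drift
            g = flat[rng.integers(len(flat))]
        else:                                    # local optimum: stop
            break
    return fit(g), steps

print("strictly adaptive walk:", walk(False))
print("with neutral drift    :", walk(True))
```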
Wen, Shihua; Zhang, Lanju; Yang, Bo
2014-07-01
The Problem formulation, Objectives, Alternatives, Consequences, Trade-offs, Uncertainties, Risk attitude, and Linked decisions (PrOACT-URL) framework and multiple criteria decision analysis (MCDA) have been recommended by the European Medicines Agency for structured benefit-risk assessment of medicinal products undergoing regulatory review. The objective of this article was to provide solutions to incorporate the uncertainty from clinical data into the MCDA model when evaluating the overall benefit-risk profiles among different treatment options. Two statistical approaches, the δ-method approach and the Monte-Carlo approach, were proposed to construct the confidence interval of the overall benefit-risk score from the MCDA model as well as other probabilistic measures for comparing the benefit-risk profiles between treatment options. Both approaches can incorporate the correlation structure between clinical parameters (criteria) in the MCDA model and are straightforward to implement. The two proposed approaches were applied to a case study to evaluate the benefit-risk profile of an add-on therapy for rheumatoid arthritis (drug X) relative to placebo. It demonstrated a straightforward way to quantify the impact of the uncertainty from clinical data on the benefit-risk assessment and enabled statistical inference on evaluating the overall benefit-risk profiles among different treatment options. The δ-method approach provides a closed form to quantify the variability of the overall benefit-risk score in the MCDA model, whereas the Monte-Carlo approach is more computationally intensive but can yield its true sampling distribution for statistical inference. The obtained confidence intervals and other probabilistic measures from the two approaches enhance the benefit-risk decision making of medicinal products.
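A minimal sketch of the Monte-Carlo approach: correlated draws of the clinical estimates are propagated through linear partial-value functions and criterion weights, yielding a sampling distribution and confidence interval for the overall benefit-risk score. The estimates, covariance, weights, and value ranges are hypothetical, not drug X data:

```python
import numpy as np

# Monte-Carlo propagation of clinical-data uncertainty through a weighted
# MCDA benefit-risk score, with correlated criteria.
rng = np.random.default_rng(42)

est = np.array([0.35, 0.12])             # benefit rate, adverse-event risk
cov = np.array([[0.0020, -0.0005],
                [-0.0005, 0.0010]])      # sampling covariance of estimates
weights = np.array([0.7, 0.3])
lo, hi = np.array([0.0, 0.0]), np.array([0.6, 0.3])

def score(x):
    u = (x - lo) / (hi - lo)             # linear partial-value functions
    u[..., 1] = 1.0 - u[..., 1]          # higher risk -> lower utility
    return u @ weights

draws = rng.multivariate_normal(est, cov, size=100_000)
ci = np.percentile(score(draws), [2.5, 97.5])
print("score %.3f, 95%% CI (%.3f, %.3f)" % (score(est), *ci))
```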
Online adaptive neural control of a robotic lower limb prosthesis
NASA Astrophysics Data System (ADS)
Spanias, J. A.; Simon, A. M.; Finucane, S. B.; Perreault, E. J.; Hargrove, L. J.
2018-02-01
Objective. The purpose of this study was to develop and evaluate an adaptive intent recognition algorithm that continuously learns to incorporate a lower limb amputee’s neural information (acquired via electromyography (EMG)) as they ambulate with a robotic leg prosthesis. Approach. We present a powered lower limb prosthesis that was configured to acquire the user’s neural information and kinetic/kinematic information from embedded mechanical sensors, and identify and respond to the user’s intent. We conducted an experiment with eight transfemoral amputees over multiple days. EMG and mechanical sensor data were collected while subjects using a powered knee/ankle prosthesis completed various ambulation activities such as walking on level ground, stairs, and ramps. Our adaptive intent recognition algorithm automatically transitioned the prosthesis into the different locomotion modes and continuously updated the user’s model of neural data during ambulation. Main results. Our proposed algorithm accurately and consistently identified the user’s intent over multiple days, despite changing neural signals. The algorithm incorporated 96.31% [0.91%] (mean, [standard error]) of neural information across multiple experimental sessions, and outperformed non-adaptive versions of our algorithm—with a 6.66% [3.16%] relative decrease in error rate. Significance. This study demonstrates that our adaptive intent recognition algorithm enables incorporation of neural information over long periods of use, allowing assistive robotic devices to accurately respond to the user’s intent with low error rates.
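One plausible shape for such continuous adaptation (an assumption for illustration, not the authors' algorithm) is a nearest-class Mahalanobis classifier whose class mean and shared covariance are updated after each stride with exponential forgetting:

```python
import numpy as np

class AdaptiveLDA:
    """Nearest-class Mahalanobis classifier with exponential forgetting."""
    def __init__(self, means, cov, alpha=0.05):
        self.means, self.cov, self.alpha = means, cov, alpha

    def predict(self, x):
        icov = np.linalg.pinv(self.cov)
        d = [(x - m) @ icov @ (x - m) for m in self.means]
        return int(np.argmin(d))

    def adapt(self, x, label):
        # Shift the predicted class mean and shared covariance toward the
        # newest stride's feature vector (rate alpha).
        self.means[label] = (1 - self.alpha) * self.means[label] + self.alpha * x
        r = x - self.means[label]
        self.cov = (1 - self.alpha) * self.cov + self.alpha * np.outer(r, r)

rng = np.random.default_rng(0)
clf = AdaptiveLDA(means=[np.zeros(8), np.ones(8)], cov=np.eye(8))
x = rng.normal(1.0, 0.3, 8)            # EMG/mechanical features, one stride
mode = clf.predict(x)                  # e.g. 0 = level walking, 1 = ramp
clf.adapt(x, mode)                     # update the neural model online
print("predicted locomotion mode:", mode)
```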
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
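A toy analogue of the mode-splitting idea: the fast mode is subcycled with a small step that respects its stability limit while the slow mode advances with the large step, and the state is recombined each large step. The decay rates and step sizes are arbitrary; the real scheme splits the linearized gravity-wave terms by vertical mode:

```python
import numpy as np

lam_fast, lam_slow = 50.0, 0.5     # explicit Euler needs dt < 2/lam
dt_slow, n_sub = 0.1, 10           # large step; fast-mode substep = 0.01
y = np.array([1.0, 1.0])           # [fast mode, slow mode] amplitudes

for _ in range(100):               # integrate to t = 10
    for _ in range(n_sub):         # subcycle the fast mode only
        y[0] -= lam_fast * y[0] * (dt_slow / n_sub)
    y[1] -= lam_slow * y[1] * dt_slow   # one large step for the slow mode

print("fast, slow modes at t=10:", y.round(6))
```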
Exciton effects in the index of refraction of multiple quantum wells and superlattices
NASA Technical Reports Server (NTRS)
Kahen, K. B.; Leburton, J. P.
1986-01-01
Theoretical calculations of the index of refraction of multiple quantum wells and superlattices are presented. The model incorporates both the bound and continuum exciton contributions for the gamma region transitions. In addition, the electronic band structure model has both superlattice and bulk alloy properties. The results indicate that large light-hole masses, i.e., of about 0.23, produced by band mixing effects, are required to account for the experimental data. Furthermore, it is shown that superlattice effects rapidly decrease for energies greater than the confining potential barriers. Overall, the theoretical results are in very good agreement with the experimental data and show the importance of including exciton effects in the index of refraction.
Learning Compositional Shape Models of Multiple Distance Metrics by Information Projection.
Luo, Ping; Lin, Liang; Liu, Xiaobai
2016-07-01
This paper presents a novel compositional contour-based shape model that incorporates multiple distance metrics to account for varying shape distortions or deformations. Our approach contains two key steps: 1) contour feature generation and 2) generative model pursuit. For each category, we first densely sample an ensemble of local prototype contour segments from a few positive shape examples and describe each segment using three different types of distance metrics. These metrics are diverse and complementary to each other, capturing various shape deformations. We regard the parameterized contour segment plus an additive residual ϵ as a basic subspace, namely an ϵ-ball, in the sense that it represents local shape variance under a given distance metric. Using these ϵ-balls as features, we then propose a generative learning algorithm to pursue the compositional shape model, which greedily selects the most representative features under the information projection principle. In experiments, we evaluate our model on several public challenging data sets, and demonstrate that the integration of multiple shape distance metrics is capable of dealing with various shape deformations, articulations, and background clutter, hence boosting system performance.
Sparsity-aware tight frame learning with adaptive subspace recognition for multiple fault diagnosis
NASA Astrophysics Data System (ADS)
Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yang, Boyuan
2017-09-01
It is a challenging problem to design excellent dictionaries that sparsely represent diverse fault information and simultaneously discriminate different fault sources. Therefore, this paper describes and analyzes a novel multiple feature recognition framework which incorporates the tight frame learning technique with an adaptive subspace recognition strategy. The proposed framework consists of four stages. Firstly, by introducing the tight frame constraint into the popular dictionary learning model, the proposed tight frame learning model can be formulated as a nonconvex optimization problem, solved by alternately applying a hard thresholding operation and singular value decomposition. Secondly, noise is effectively eliminated through transform sparse coding techniques. Thirdly, the denoised signal is decoupled into discriminative feature subspaces by each tight frame filter. Finally, guided by elaborately designed fault-related sensitive indexes, latent fault feature subspaces can be adaptively recognized and multiple faults diagnosed simultaneously. Extensive numerical experiments are subsequently presented to investigate the sparsifying capability of the learned tight frame as well as its comprehensive denoising performance. Most importantly, the feasibility and superiority of the proposed framework are verified through multiple fault diagnosis of motor bearings. Compared with state-of-the-art fault detection techniques, some important advantages have been observed: firstly, the proposed framework combines the physical prior with a data-driven strategy, so multiple fault features with similar oscillation morphology can naturally be decoupled adaptively. Secondly, the tight frame dictionary learned directly from the noisy observation can significantly promote the sparsity of fault features compared to analytical tight frames. Thirdly, a satisfactory complete signal space description property is guaranteed, so the weak-feature leakage problem of typical learning methods is avoided.
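A minimal sketch of the alternating optimization named above, assuming an analysis-operator form of the tight frame (the atom count, sparsity level, and data are placeholders): sparse coding is a per-column hard threshold, and the tight frame constraint is restored in closed form through the SVD (an orthogonal-Procrustes step).

```python
import numpy as np

def learn_tight_frame(X, n_atoms, keep, n_iter=30, seed=0):
    """X: (n_features, n_samples); learns analysis operator W with W.T @ W = I."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[0]
    W = np.linalg.qr(rng.standard_normal((n_atoms, n_feat)))[0]  # tight start
    for _ in range(n_iter):
        # sparse coding: keep the `keep` largest coefficients in each column
        Z = W @ X
        thr = np.partition(np.abs(Z), -keep, axis=0)[-keep]
        Z[np.abs(Z) < thr] = 0.0
        # frame update: argmin ||Z - W X||_F s.t. W.T @ W = I, via SVD of Z X^T
        U, _, Vt = np.linalg.svd(Z @ X.T, full_matrices=False)
        W = U @ Vt
    return W

# denoising by transform sparse coding with the learned frame
X = np.random.default_rng(1).standard_normal((32, 200))
W = learn_tight_frame(X, n_atoms=64, keep=4)
Z = W @ X
thr = np.partition(np.abs(Z), -4, axis=0)[-4]
Z[np.abs(Z) < thr] = 0.0
X_denoised = W.T @ Z   # exact synthesis thanks to the tight frame property
```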
ERIC Educational Resources Information Center
Bourgeois, Marc B.; Winters, Ryan C.; Esters, Irvin E.
2016-01-01
Utilizing an experiential component in group work training is a prominent feature in Counselor Education programs. Although numerous models have been proposed, the vast majority offer limited explanations of incorporating the number of hours of group participation and observation recommended by the Professional Standards for the Training of Group…
Modeling fuel treatment leverage: Encounter rates, risk reduction, and suppression cost impacts
Matthew P. Thompson; Karin L. Riley; Dan Loeffler; Jessica R. Haas
2017-01-01
The primary theme of this study is the cost-effectiveness of fuel treatments at multiple scales of investment. We focused on the nexus of fuel management and suppression response planning, designing spatial fuel treatment strategies to incorporate landscape features that provide control opportunities that are relevant to fire operations. Our analysis explored the...
ERIC Educational Resources Information Center
Gowen, Deborah C.
2010-01-01
Finding teaching models and strategies that benefit learners while incorporating skills students will need in the future, such as using technology, is important. This study examined the problem of whether Webquests, an inquiry-based teaching strategy where much of the information is found online, are a beneficial way to integrate technology into…
Commentary: Gene by Environment Interplay and Psychopathology--In Search of a Paradigm
ERIC Educational Resources Information Center
Nigg, Joel T.
2013-01-01
The articles in this Special Issue (SI) extend research on G×E in multiple ways, showing the growing importance of specifying kinds of G×E models (e.g., bioecological, susceptibility, stress-diathesis), incorporation of sophisticated ways of measuring types of G×E correlations (rGE), checking effects of statistical artifact, exemplifying an…
NASA Astrophysics Data System (ADS)
Indi Sriprisan, Sirikul; Townsend, Lawrence; Cucinotta, Francis A.; Miller, Thomas M.
Purpose: An analytical knockout-ablation-coalescence model capable of making quantitative predictions of the neutron spectra from high-energy nucleon-nucleus and nucleus-nucleus collisions is being developed for use in space radiation protection studies. The FORTRAN computer code that implements this model is called UBERNSPEC. The knockout or abrasion stage of the model is based on Glauber multiple scattering theory. The ablation part of the model uses the classical evaporation model of Weisskopf-Ewing. In earlier work, the knockout-ablation model was extended to incorporate important coalescence effects into the formalism. Recently, alpha coalescence has been incorporated, and the ability to predict light ion spectra with the coalescence model has been added. The earlier versions were limited to nuclei with mass numbers less than 69. In this work, the UBERNSPEC code has been extended to make predictions of secondary neutron and light ion production from the interactions of heavy charged particles with higher mass numbers (as large as 238). The predictions are compared with published measurements of neutron and light ion energy spectra for a variety of collision pairs. Furthermore, the predicted spectra from this work are compared with the predictions of the recently developed heavy ion event generator incorporated in the Monte Carlo radiation transport code HETC-HEDS.
NASA Astrophysics Data System (ADS)
Rahman, Abdul Samad Abdul; Noor, Mohd Jamaludin Md; Ahmad, Juhaizad Bin; Sidek, Norbaya
2017-10-01
The concept of effective stress has been the principal concept for characterizing soil volume change behaviour in soil mechanics, and the settlement models developed using this concept have been empirical in nature. However, there remain certain soil volume change behaviours that cannot be explained using the effective stress concept; one such behaviour is inundation settlement. Studies have begun to indicate the inevitable role of shear strength as a critical element to be incorporated in models to unravel these unexplained soil behaviours. One soil volume change model that applies the interaction between the effective stress concept and shear strength is the Rotational Multiple Yield Surface Framework (RMYSF) model. This model has been developed from the soil stress-strain behaviour under anisotropic stress conditions. Hence, the RMYSF actually measures the soil's elasto-plastic response to stress, rather than assuming it to be fully elastic or plastic as normally perceived by the industry. The framework measures the increase in the mobilised shear strength as the soil undergoes anisotropic settlement.
Anand, M.; Rajagopal, K.; Rajagopal, K. R.
2003-01-01
Multiple interacting mechanisms control the formation and dissolution of clots to maintain blood in a state of delicate balance. In addition to a myriad of biochemical reactions, rheological factors also play a crucial role in modulating the response of blood to external stimuli. To date, a comprehensive model for clot formation and dissolution, that takes into account the biochemical, medical and rheological factors, has not been put into place, the existing models emphasizing either one or the other of the factors. In this paper, after discussing the various biochemical, physiologic and rheological factors at some length, we develop a model for clot formation and dissolution that incorporates many of the relevant crucial factors that have a bearing on the problem. The model, though just a first step towards understanding a complex phenomenon, goes further than previous models in integrating the biochemical, physiologic and rheological factors that come into play.
SketchBio: a scientist's 3D interface for molecular modeling and animation.
Waldon, Shawn M; Thompson, Peter M; Hahn, Patrick J; Taylor, Russell M
2014-10-30
Because of the difficulties involved in learning and using 3D modeling and rendering software, many scientists hire programmers or animators to create models and animations. This both slows the discovery process and provides opportunities for miscommunication. Working with multiple collaborators, a tool was developed (based on a set of design goals) to enable them to directly construct models and animations. SketchBio is presented, a tool that incorporates state-of-the-art bimanual interaction and drop shadows to enable rapid construction of molecular structures and animations. It includes three novel features: crystal-by-example, pose-mode physics, and spring-based layout that accelerate operations common in the formation of molecular models. Design decisions and their consequences are presented, including cases where iterative design was required to produce effective approaches. The design decisions, novel features, and inclusion of state-of-the-art techniques enabled SketchBio to meet all of its design goals. These features and decisions can be incorporated into existing and new tools to improve their effectiveness.
Computation of turbulent reacting flow in a solid-propellant ducted rocket
NASA Astrophysics Data System (ADS)
Chao, Yei-Chin; Chou, Wen-Fuh; Liu, Sheng-Shyang
1995-05-01
A mathematical model for computation of turbulent reacting flows is developed under general curvilinear coordinate systems. An adaptive, streamline grid system is generated to deal with the complex flow structures in a multiple-inlet solid-propellant ducted rocket (SDR) combustor. General tensor representations of the k-epsilon and algebraic stress (ASM) turbulence models are derived in terms of contravariant velocity components, and modification caused by the effects of compressible turbulence is also included in the modeling. The clipped Gaussian probability density function is incorporated in the combustion model to account for fluctuations of properties. Validation of the above modeling is first examined by studying mixing and reacting characteristics in a confined coaxial-jet problem. This is followed by study of nonreacting and reacting SDR combustor flows. The results show that Gibson and Launder's ASM incorporated with Sarkar's modification for compressible turbulence effects based on the general curvilinear coordinate systems yields the most satisfactory prediction for this complicated SDR flowfield.
Kambayashi, Atsushi; Blume, Henning; Dressman, Jennifer B
2014-07-01
The objective of this research was to characterize the dissolution profile of a poorly soluble drug, diclofenac, from a commercially available multiple-unit enteric coated dosage form, Diclo-Puren® capsules, and to develop a predictive model for its oral pharmacokinetic profile. The paddle method was used to obtain the dissolution profiles of this dosage form in biorelevant media, with the exposure to simulated gastric conditions being varied in order to simulate the gastric emptying behavior of pellets. A modified Noyes-Whitney theory was subsequently fitted to the dissolution data. A physiologically-based pharmacokinetic (PBPK) model for multiple-unit dosage forms was designed using STELLA® software and coupled with the biorelevant dissolution profiles in order to simulate the plasma concentration profiles of diclofenac from Diclo-Puren® capsules in both the fasted and fed state in humans. Gastric emptying kinetics relevant to multiple-unit pellets were incorporated into the PBPK model by setting up a virtual patient population to account for physiological variations in emptying kinetics. Using in vitro biorelevant dissolution coupled with in silico PBPK modeling and simulation, it was possible to predict the plasma profile of this multiple-unit formulation of diclofenac after oral administration in both the fasted and fed state. This approach might be useful for predicting variability in the plasma profiles of other drugs housed in multiple-unit dosage forms. Copyright © 2014 Elsevier B.V. All rights reserved.
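The dissolution-fitting step can be illustrated with a simplified first-order (Noyes-Whitney-type) model under sink conditions, fitted to hypothetical dissolution data (the time points and fractions below are invented for the sketch):

```python
import numpy as np
from scipy.optimize import curve_fit

# hypothetical dissolution profile: fraction released vs. time (min)
t_obs = np.array([0., 15., 30., 60., 90., 120., 180.])
f_obs = np.array([0.00, 0.20, 0.36, 0.60, 0.75, 0.85, 0.95])

def noyes_whitney(t, k):
    # dF/dt = k (1 - F)  =>  F(t) = 1 - exp(-k t), assuming sink conditions
    return 1.0 - np.exp(-k * t)

(k_fit,), _ = curve_fit(noyes_whitney, t_obs, f_obs, p0=[0.01])
print(f"first-order dissolution rate constant: {k_fit:.4f} 1/min")
```

In a PBPK chain, F(t) would then feed a gastric-emptying-delayed absorption compartment; as the abstract describes, the emptying kinetics of pellets would be sampled across a virtual patient population.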
Precision Modeling Of Targets Using The VALUE Computer Program
NASA Astrophysics Data System (ADS)
Hoffman, George A.; Patton, Ronald; Akerman, Alexander
1989-08-01
The 1976-vintage LASERX computer code has been augmented to produce realistic electro-optical images of targets. Capabilities lacking in LASERX but recently incorporated into its VALUE successor include:
• Shadows cast onto the ground
• Shadows cast onto parts of the target
• See-through transparencies (e.g., canopies)
• Apparent images due both to atmospheric scattering and turbulence
• Surfaces characterized by multiple bi-directional reflectance functions
VALUE not only provides realistic target modeling through its precise and comprehensive representation of all target attributes; it is also very user friendly. Specifically, runs are set up through screen-prompted menus in a sequence of queries that is logical to the user. VALUE also incorporates the Optical Encounter (OPEC) software developed by Tricor Systems, Inc., Elgin, IL.
Espaulella-Panicot, Joan; Molist-Brunet, Núria; Sevilla-Sánchez, Daniel; González-Bueno, Javier; Amblàs-Novellas, Jordi; Solà-Bonada, Núria; Codina-Jané, Carles
Patients with multiple disorders and on multiple medications often present clinical complexity, defined as a situation of uncertainty conditioned by difficulties in establishing a situational diagnosis and in decision-making. The patient-centred care approach in this population group seems to be one of the best therapeutic options. In this context, the preparation of an individualised therapeutic plan is the most relevant practical element, in which the pharmacological plan maintains an important role. There has recently been a significant increase in knowledge in the area of adequacy of prescription and adherence. In this context, a model must be found that incorporates this knowledge into the clinical practice of professionals. Person-centred prescription is a medication review model that includes different strategies in a single intervention. It is performed by a multidisciplinary team, and allows them to adapt the pharmacological plan of patients with clinical complexity. Copyright © 2017 SEGG. Published by Elsevier España, S.L.U. All rights reserved.
Spatial path models with multiple indicators and multiple causes: mental health in US counties.
Congdon, Peter
2011-06-01
This paper considers a structural model for the impact on area mental health outcomes (poor mental health, suicide) of spatially structured latent constructs: deprivation, social capital, social fragmentation and rurality. These constructs are measured by multiple observed effect indicators, with the constructs allowed to be correlated both between and within areas. However, in the scheme developed here, particular latent constructs may also be influenced by known variables, or, via path sequences, by other constructs, possibly nonlinearly. For example, area social capital may be measured by effect indicators (e.g. associational density, charitable activity), but influenced as causes by other constructs (e.g. area deprivation), and by observed features of the socio-ethnic structure of areas. A model incorporating these features is applied to suicide mortality and the prevalence of poor mental health in 3141 US counties, which are related to the latent spatial constructs and to observed variables (e.g. county ethnic mix). Copyright © 2011 Elsevier Ltd. All rights reserved.
Template based protein structure modeling by global optimization in CASP11.
Joo, Keehyoung; Joung, InSuk; Lee, Sun Young; Kim, Jong Yun; Cheng, Qianyi; Manavalan, Balachandran; Joung, Jong Young; Heo, Seungryong; Lee, Juyong; Nam, Mikyung; Lee, In-Ho; Lee, Sung Jong; Lee, Jooyoung
2016-09-01
For the template-based modeling (TBM) of CASP11 targets, we have developed three new protein modeling protocols (nns for server prediction and LEE and LEER for human prediction) by improving upon our previous CASP protocols (CASP7 through CASP10). We applied the powerful global optimization method of conformational space annealing to three stages of optimization, including multiple sequence-structure alignment, three-dimensional (3D) chain building, and side-chain remodeling. For more successful fold recognition, a new alignment method called CRFalign was developed. It can incorporate sensitive positional and environmental dependence in alignment scores as well as strong nonlinear correlations among various features. Modifications and adjustments were made to the form of the energy function and weight parameters pertaining to the chain building procedure. For the side-chain remodeling step, residue-type dependence was introduced to the cutoff value that determines the entry of a rotamer to the side-chain modeling library. The improved performance of the nns server method is attributed to successful fold recognition achieved by combining several methods including CRFalign and to the current modeling formulation that can incorporate native-like structural aspects present in multiple templates. The LEE protocol is identical to the nns one except that CASP11-released server models are used as templates. The success of LEE in utilizing CASP11 server models indicates that proper template screening and template clustering assisted by appropriate cluster ranking promises a new direction to enhance protein 3D modeling. Proteins 2016; 84(Suppl 1):221-232. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Jakeman, A. J.; Guillaume, J. H. A.; El Sawah, S.; Hamilton, S.
2014-12-01
Integrated modelling and assessment (IMA) is best regarded as a process that can support environmental decision-making when issues are strongly contested and uncertainties pervasive. To be most useful, the process must be multi-dimensional and phased. Principally, it must be tailored to the problem context to encompass diverse issues of concern, management settings and stakeholders. This in turn requires the integration of multiple processes and components of natural and human systems and their corresponding spatial and temporal scales. Modellers therefore need to be able to integrate multiple disciplines, methods, models, tools and data, and many sources and types of uncertainty. These dimensions are incorporated into iteration between the various phases of the IMA process, including scoping, problem framing and formulation, assessing options and communicating findings. Two case studies in Australia are employed to share the lessons of how integration can be achieved in these IMA phases using a mix of stakeholder participation processes and modelling tools. One case study aims to improve the relevance of modelling by incorporating stakeholders' views of irrigated viticulture and water management decision making. It used a novel methodology with the acronym ICTAM, consisting of Interviews to elicit mental models, Cognitive maps to represent and analyse individual and group mental models, Time-sequence diagrams to chronologically structure the decision making process, an All-encompassing conceptual model, and computational Models of stakeholder decision making. The second case uses a hydro-economic river network model to examine basin-wide impacts of water allocation cuts and adoption of farm innovations. The knowledge exchange approach used in each case was designed to integrate data and knowledge bearing in mind the contextual dimensions of the problem at hand, and the specific contributions that environmental modelling was thought to be able to make.
Information Commons for Rice (IC4R)
2016-01-01
Rice is the most important staple food for a large part of the world's human population and also a key model organism for plant research. Here, we present Information Commons for Rice (IC4R; http://ic4r.org), a rice knowledgebase featuring adoption of an extensible and sustainable architecture that integrates multiple omics data through community-contributed modules. Each module is developed and maintained by different committed groups, deals with data collection, processing and visualization, and delivers data on-demand via web services. In the current version, IC4R incorporates a variety of rice data through multiple committed modules, including genome-wide expression profiles derived entirely from RNA-Seq data, resequencing-based genomic variations obtained from re-sequencing data of thousands of rice varieties, plant homologous genes covering multiple diverse plant species, post-translational modifications, rice-related literature and gene annotations contributed by the rice research community. Unlike extant related databases, IC4R is designed for scalability and sustainability and thus also features collaborative integration of rice data and low costs for database update and maintenance. Future directions of IC4R include the incorporation of other omics data and the association of multiple omics data with agronomically important traits, with the goal of building IC4R into a valuable knowledgebase for both basic and translational research in rice. PMID:26519466
Barnes, Samuel R; Ng, Thomas S C; Santa-Maria, Naomi; Montagne, Axel; Zlokovic, Berislav V; Jacobs, Russell E
2015-06-16
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a promising technique to characterize pathology and evaluate treatment response. However, analysis of DCE-MRI data is complex and benefits from concurrent analysis of multiple kinetic models and parameters. Few software tools are currently available that specifically focus on DCE-MRI analysis with multiple kinetic models. Here, we developed ROCKETSHIP, an open-source, flexible and modular software package for DCE-MRI analysis. ROCKETSHIP incorporates analyses with multiple kinetic models, including data-driven nested model analysis. ROCKETSHIP was implemented using the MATLAB programming language. Robustness of the software in providing reliable fits using multiple kinetic models is demonstrated using simulated data. Simulations also demonstrate the utility of the data-driven nested model analysis. Applicability of ROCKETSHIP to both preclinical and clinical studies is shown using DCE-MRI studies of the human brain and a murine tumor model. A DCE-MRI software suite was implemented and tested using simulations. Its applicability to both preclinical and clinical datasets is shown. ROCKETSHIP was designed to be easily accessible for the beginner, but flexible enough for changes or additions to be made by the advanced user as well. The availability of a flexible analysis tool will aid future studies using DCE-MRI. A public release of ROCKETSHIP is available at https://github.com/petmri/ROCKETSHIP.
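To make the kinetic-modeling step concrete, here is a sketch (in Python rather than ROCKETSHIP's MATLAB) of fitting the standard Tofts model to a tissue concentration curve and comparing it against a nested, simpler no-efflux model by AIC; the arterial input function and data are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 120)            # time (min)
cp = 8.0 * t * np.exp(-t / 0.4)       # toy arterial input function Cp(t)
dt = t[1] - t[0]

def tofts(t, ktrans, ve):
    # standard Tofts: Ct(t) = Ktrans * Cp(t) convolved with exp(-(Ktrans/ve) t)
    irf = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, irf)[: len(t)] * dt

def no_efflux(t, ktrans):
    # nested one-parameter limit (ve -> inf): Ct(t) = Ktrans * integral of Cp
    return ktrans * np.cumsum(cp) * dt

rng = np.random.default_rng(0)
ct = tofts(t, 0.25, 0.3) + 0.01 * rng.standard_normal(len(t))

for model, p0 in [(tofts, [0.1, 0.2]), (no_efflux, [0.1])]:
    p, _ = curve_fit(model, t, ct, p0=p0)
    rss = np.sum((ct - model(t, *p)) ** 2)
    aic = len(t) * np.log(rss / len(t)) + 2 * len(p)  # lower AIC preferred
    print(model.__name__, p, aic)
```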
A coarse grain model for protein-surface interactions
NASA Astrophysics Data System (ADS)
Wei, Shuai; Knotts, Thomas A.
2013-09-01
The interaction of proteins with surfaces is important in numerous applications in many fields—such as biotechnology, proteomics, sensors, and medicine—but fundamental understanding of how protein stability and structure are affected by surfaces remains incomplete. Over the last several years, molecular simulation using coarse grain models has yielded significant insights, but the formalisms used to represent the surface interactions have been rudimentary. We present a new model for protein surface interactions that incorporates the chemical specificity of both the surface and the residues comprising the protein in the context of a one-bead-per-residue, coarse grain approach that maintains computational efficiency. The model is parameterized against experimental adsorption energies for multiple model peptides on different types of surfaces. The validity of the model is established by its ability to quantitatively and qualitatively predict the free energy of adsorption and structural changes for multiple biologically-relevant proteins on different surfaces. The validation, done with proteins not used in parameterization, shows that the model produces remarkable agreement between simulation and experiment.
Vassallo, Rebecca; Durrant, Gabriele B; Smith, Peter W F; Goldstein, Harvey
2015-01-01
The paper investigates two different multilevel approaches, the multilevel cross-classified and the multiple-membership models, for the analysis of interviewer effects on wave non-response in longitudinal surveys. The models proposed incorporate both interviewer and area effects to account for the non-hierarchical structure, the influence of potentially more than one interviewer across waves and possible confounding of area and interviewer effects arising from the non-random allocation of interviewers across areas. The methods are compared by using a data set: the UK Family and Children Survey. PMID:25598587
Bennetts, Victor Hernandez; Schaffernicht, Erik; Pomareda, Victor; Lilienthal, Achim J; Marco, Santiago; Trincavelli, Marco
2014-09-17
In this paper, we address the task of gas distribution modeling in scenarios where multiple heterogeneous compounds are present. Gas distribution modeling is particularly useful in emission monitoring applications where spatial representations of the gaseous patches can be used to identify emission hot spots. In realistic environments, the presence of multiple chemicals is expected and therefore, gas discrimination has to be incorporated in the modeling process. The approach presented in this work addresses the task of gas distribution modeling by combining different non-selective gas sensors. Gas discrimination is addressed with an open sampling system, composed of an array of metal oxide sensors and a probabilistic algorithm tailored to uncontrolled environments. For each of the identified compounds, the mapping algorithm generates a calibrated gas distribution model using the classification uncertainty and the concentration readings acquired with a photoionization detector. The meta-parameters of the proposed modeling algorithm are automatically learned from the data. The approach was validated with a gas-sensitive robot patrolling outdoor and indoor scenarios, where two different chemicals were released simultaneously. The experimental results show that the generated multi-compound maps can be used to accurately predict the location of emitting gas sources.
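A bare-bones illustration of the per-compound mapping step (the kernel form, bandwidth, and grid are assumptions for the sketch; the class probabilities would come from the gas-discrimination stage): each concentration reading is spread over nearby grid cells with a Gaussian kernel, weighted by the probability that the reading belongs to the given compound.

```python
import numpy as np

def compound_map(positions, readings, p_compound, grid, sigma=0.5):
    """positions: (n,2); readings: PID concentrations; p_compound: P(class|reading)."""
    num = np.zeros(len(grid))
    den = np.zeros(len(grid))
    for pos, c, p in zip(positions, readings, p_compound):
        w = p * np.exp(-np.sum((grid - pos) ** 2, axis=1) / (2 * sigma**2))
        num += w * c
        den += w
    return num / np.maximum(den, 1e-9)   # kernel-weighted mean concentration

# toy usage: 5x5 m area; one map per discriminated compound
grid = np.array([[x, y] for x in np.linspace(0, 5, 26)
                         for y in np.linspace(0, 5, 26)])
```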
Water Isotopes in the GISS GCM: History, Applications and Potential
NASA Astrophysics Data System (ADS)
Schmidt, G. A.; LeGrande, A. N.; Field, R. D.; Nusbaumer, J. M.
2017-12-01
Water isotopes have been incorporated in the GISS GCMs since the pioneering work of Jean Jouzel in the 1980s. Since 2005, this functionality has been maintained within the master branch of the development code and has been usable (and used) in all subsequent versions. This has allowed a wide variety of applications, across multiple time-scales and interests, to be tackled coherently. Water isotope tracers have been used to debug the atmospheric model code, tune parameterisations of moist processes, assess the isotopic fingerprints of multiple climate drivers, produce forward models for remotely sensed isotope products, and validate paleo-climate interpretations from the last millennium to the Eocene. We will present an overview of recent results involving isotope tracers, including improvements in models for the isotopic fractionation processes themselves, and demonstrate the potential for using these tracers and models more systematically in paleo-climate reconstructions and investigations of the modern hydrological cycle.
A model for making project funding decisions at the National Cancer Institute.
Hall, N G; Hershey, J C; Kessler, L G; Stotts, R C
1992-01-01
This paper describes the development of a model for making project funding decisions at The National Cancer Institute (NCI). The American Stop Smoking Intervention Study (ASSIST) is a multiple-year, multiple-site demonstration project, aimed at reducing smoking prevalence. The initial request for ASSIST proposals was answered by about twice as many states as could be funded. Scientific peer review of the proposals was the primary criterion used for funding decisions. However, a modified Delphi process made explicit several criteria of secondary importance. A structured questionnaire identified the relative importance of these secondary criteria, some of which we incorporated into a composite preference function. We modeled the proposal funding decision as a zero-one program, and adjusted the preference function and available budget parametrically to generate many suitable outcomes. The actual funding decision, identified by our model, offers significant advantages over manually generated solutions found by experts at NCI.
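The zero-one program at the end of the abstract can be sketched as a small knapsack-style search; the scores, costs, and budgets below are invented, and a real instance would add side constraints (e.g., geographic balance) derived from the secondary criteria.

```python
from itertools import combinations

scores = [8.2, 7.5, 9.1, 6.3, 7.9, 8.8]   # composite preference per proposal
costs  = [1.2, 0.8, 1.5, 0.6, 1.0, 1.3]   # requested funding ($M)

def best_portfolio(budget):
    """Exhaustive zero-one search: fine for a handful of proposals."""
    best_val, best_set = 0.0, ()
    for k in range(len(scores) + 1):
        for subset in combinations(range(len(scores)), k):
            if sum(costs[i] for i in subset) <= budget:
                val = sum(scores[i] for i in subset)
                if val > best_val:
                    best_val, best_set = val, subset
    return best_val, best_set

# sweeping the budget parametrically, as in the paper, generates many outcomes
for b in (2.0, 3.0, 4.0):
    print(b, best_portfolio(b))
```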
Developing Access Control Model of Web OLAP over Trusted and Collaborative Data Warehouses
NASA Astrophysics Data System (ADS)
Fugkeaw, Somchart; Mitrpanont, Jarernsri L.; Manpanpanich, Piyawit; Juntapremjitt, Sekpon
This paper proposes the design and development of a Role-based Access Control (RBAC) model for the Single Sign-On (SSO) Web-OLAP query spanning over multiple data warehouses (DWs). The model is based on PKI Authentication and Privilege Management Infrastructure (PMI); it presents a binding model of RBAC authorization based on dimension privilege specified in attribute certificate (AC) and user identification. Particularly, the way of attribute mapping between DW user authentication and privilege of dimensional access is illustrated. In our approach, we apply the multi-agent system to automate flexible and effective management of user authentication, role delegation as well as system accountability. Finally, the paper culminates in the prototype system A-COLD (Access Control of web-OLAP over multiple DWs) that incorporates the OLAP features and authentication and authorization enforcement in the multi-user and multi-data warehouse environment.
A management and optimisation model for water supply planning in water deficit areas
NASA Astrophysics Data System (ADS)
Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón
2014-07-01
The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
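A toy linear-programming version of such a model (all numbers invented): minimise delivery costs plus penalties for unmet demand, subject to source capacities and demand balances with shortfall variables.

```python
import numpy as np
from scipy.optimize import linprog

# variables: x11, x12, x21, x22 (source -> user deliveries), s1, s2 (shortfalls)
cost = np.array([1.0, 3.0,    # conventional source to users 1, 2
                 2.0, 2.5,    # non-conventional source to users 1, 2
                 10.0, 8.0])  # penalty per unit of unmet demand (users 1, 2)

A_ub = [[1, 1, 0, 0, 0, 0],   # conventional source capacity
        [0, 0, 1, 1, 0, 0]]   # non-conventional source capacity
b_ub = [50.0, 30.0]

A_eq = [[1, 0, 1, 0, 1, 0],   # user 1: deliveries + shortfall = demand
        [0, 1, 0, 1, 0, 1]]   # user 2
b_eq = [40.0, 35.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x)   # optimal allocation and shortfalls (variables >= 0 by default)
```

Distribution efficiency and physical connectivity, as in the paper, would enter through per-link loss coefficients and by dropping the variables for source-user pairs that are not connected.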
Finding the target sites of RNA-binding proteins
Li, Xiao; Kazan, Hilal; Lipshitz, Howard D; Morris, Quaid D
2014-01-01
RNA–protein interactions differ from DNA–protein interactions because of the central role of RNA secondary structure. Some RNA-binding domains (RBDs) recognize their target sites mainly by their shape and geometry and others are sequence-specific but are sensitive to secondary structure context. A number of small- and large-scale experimental approaches have been developed to measure RNAs associated in vitro and in vivo with RNA-binding proteins (RBPs). Generalizing outside of the experimental conditions tested by these assays requires computational motif finding. Often RBP motif finding is done by adapting DNA motif finding methods; but modeling secondary structure context leads to better recovery of RBP-binding preferences. Genome-wide assessment of mRNA secondary structure has recently become possible, but these data must be combined with computational predictions of secondary structure before they add value in predicting in vivo binding. There are two main approaches to incorporating structural information into motif models: supplementing primary sequence motif models with preferred secondary structure contexts (e.g., MEMERIS and RNAcontext) and directly modeling secondary structure recognized by the RBP using stochastic context-free grammars (e.g., CMfinder and RNApromo). The former better reconstruct known binding preferences for sequence-specific RBPs but are not suitable for modeling RBPs that recognize shape and geometry of RNAs. Future work in RBP motif finding should incorporate interactions between multiple RBDs and multiple RBPs in binding to RNA. WIREs RNA 2014, 5:111–130. doi: 10.1002/wrna.1201 PMID:24217996
NASA Astrophysics Data System (ADS)
Nabil, Mahdi; Rattner, Alexander S.
The volume-of-fluid (VOF) approach is a mature technique for simulating two-phase flows. However, VOF simulation of phase-change heat transfer is still in its infancy. Multiple closure formulations have been proposed in the literature, each suited to different applications. While these have enabled significant research advances, few implementations are publicly available, actively maintained, or inter-operable. Here, a VOF solver is presented (interThermalPhaseChangeFoam), which incorporates an extensible framework for phase-change heat transfer modeling, enabling simulation of diverse phenomena in a single environment. The solver employs object oriented OpenFOAM library features, including Run-Time-Type-Identification to enable rapid implementation and run-time selection of phase change and surface tension force models. The solver is packaged with multiple phase change and surface tension closure models, adapted and refined from earlier studies. This code has previously been applied to study wavy film condensation, Taylor flow evaporation, nucleate boiling, and dropwise condensation. Tutorial cases are provided for simulation of horizontal film condensation, smooth and wavy falling film condensation, nucleate boiling, and bubble condensation. Validation and grid sensitivity studies, interfacial transport models, effects of spurious currents from surface tension models, effects of artificial heat transfer due to numerical factors, and parallel scaling performance are described in detail in the Supplemental Material (see Appendix A). By incorporating the framework and demonstration cases into a single environment, users can rapidly apply the solver to study phase-change processes of interest.
Cross-Platform Toxicogenomics for the Prediction of Non-Genotoxic Hepatocarcinogenesis in Rat
Metzger, Ute; Templin, Markus F.; Plummer, Simon; Ellinger-Ziegelbauer, Heidrun; Zell, Andreas
2014-01-01
In the area of omics profiling in toxicology, i.e. toxicogenomics, characteristic molecular profiles have previously been incorporated into prediction models for early assessment of a carcinogenic potential and mechanism-based classification of compounds. Traditionally, the biomarker signatures used for model construction were derived from individual high-throughput techniques, such as microarrays designed for monitoring global mRNA expression. In this study, we built predictive models by integrating omics data across complementary microarray platforms and introduced new concepts for modeling of pathway alterations and molecular interactions between multiple biological layers. We trained and evaluated diverse machine learning-based models, differing in the incorporated features and learning algorithms on a cross-omics dataset encompassing mRNA, miRNA, and protein expression profiles obtained from rat liver samples treated with a heterogeneous set of substances. Most of these compounds could be unambiguously classified as genotoxic carcinogens, non-genotoxic carcinogens, or non-hepatocarcinogens based on evidence from published studies. Since mixed characteristics were reported for the compounds Cyproterone acetate, Thioacetamide, and Wy-14643, we reclassified these compounds as either genotoxic or non-genotoxic carcinogens based on their molecular profiles. Evaluating our toxicogenomics models in a repeated external cross-validation procedure, we demonstrated that the prediction accuracy of our models could be increased by joining the biomarker signatures across multiple biological layers and by adding complex features derived from cross-platform integration of the omics data. Furthermore, we found that adding these features resulted in a better separation of the compound classes and a more confident reclassification of the three undefined compounds as non-genotoxic carcinogens. PMID:24830643
NASA Astrophysics Data System (ADS)
Guo, W.
2017-12-01
Chemical and isotopic compositions of scleractinian coral skeletons reflect the physicochemical conditions of the seawater in which corals grow. This makes the coral skeleton one of the best archives of ocean climate and biogeochemical change. A number of coral-based geochemical proxies have been developed and applied to reconstruct past seawater conditions, such as temperature, pH, carbonate chemistry and nutrient concentrations. Detailed laboratory and field-based studies of these proxies, however, indicate that interpretation of the geochemistry of coral skeletons is not straightforward, due to the presence of 'vital effects' and the variation of empirical proxy calibrations among and within different species. This poses challenges for the broad application of many geochemical proxies in corals, and highlights the need to better understand the fundamental processes governing the incorporation of different proxies. Here I present a numerical model that simulates the incorporation of a suite of geochemical proxies into coral skeletons, including δ11B, Mg/Ca, Sr/Ca, U/Ca, B/Ca and Ba/Ca. This model, building on previous theoretical studies of coral calcification, combines our current understanding of the coral calcification mechanism with experimental constraints on isotope and element partitioning during carbonate precipitation. It enables quantitative evaluation of the effects of different environmental and biological factors on each proxy. Specifically, this model shows that (1) the incorporation of every proxy is affected by multiple seawater parameters (e.g. temperature, pH, DIC) as opposed to one single parameter, and (2) biological factors, particularly the interplay between enzymatic alkalinity pumping and the exchange of coral calcifying fluid with external seawater, also exert significant controls. Based on these findings, I propose an inverse method for simultaneously reconstructing multiple seawater physicochemical parameters, and compare the performance of this new method with conventional paleo-reconstruction methods that are based on empirical calibrations. In addition, the extension of this model to simulate the carbon, oxygen and clumped isotope (δ13C, δ18O, Δ47) composition of coral skeletons will also be discussed at the meeting.
ERIC Educational Resources Information Center
Ault, Melinda Jones; Baggerman, Melanie A.; Horn, Channon K.
2017-01-01
This study used a multiple probe (conditions) design across behaviors to investigate the effects of an app for the tablet computer to teach spelling of academic content words to four students with developmental disabilities. The app delivered instruction using a model-lead-test format and students typed on the on-screen keyboard. The study also…
ERIC Educational Resources Information Center
Sahney, Sangeeta
2016-01-01
Purpose: Educational institutes must embrace the principles of total quality management (TQM) if they seek to remain competitive, and survive and succeed in the long run. An educational institution must embrace the principles of quality management and incorporate them into all of their activities. Starting with a theoretical background, the paper…
Mark Hitchcock; Alan Ager
1992-01-01
National Forests in the Pacific Northwest Region have incorporated elk habitat standards into Forest plans to ensure that elk habitat objectives are met on multiple use land allocations. Many Forests have employed versions of the habitat effectiveness index (HEI) as a standard method to evaluate habitat. Field application of the HEI model unfortunately is a formidable...
Erika L. Rowland; Jennifer E. Davison; Lisa J. Graumlich
2011-01-01
Assessing the impact of climate change on species and associated management objectives is a critical initial step for engaging in the adaptation planning process. Multiple approaches are available. While all possess limitations to their application associated with the uncertainties inherent in the data and models that inform their results, conducting and incorporating...
ERIC Educational Resources Information Center
Maloney, Shannon
2014-01-01
Positive youth development (PYD) orients youth toward pro-social and forward-looking behavior through programs that emphasize youth empowerment and involvement, focus on skill development and character building, incorporate community collaboration at multiple levels, and include positive adult role models and mentors that interact with youth in…
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these programs evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
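A compact modern equivalent (a sketch, not the original FORTRAN programs): build the design matrix for a Pth-degree polynomial in N variables, optionally append the linear cross products, and report the error diagnostics the abstract names.

```python
import numpy as np

def design_matrix(X, degree, cross_products=False):
    cols = [np.ones(len(X))]                       # intercept
    for j in range(X.shape[1]):                    # x_j, x_j^2, ..., x_j^P
        cols += [X[:, j] ** p for p in range(1, degree + 1)]
    if cross_products:                             # linear cross products x_i * x_j
        for i in range(X.shape[1]):
            for j in range(i + 1, X.shape[1]):
                cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

def fit(X, y, degree, cross_products=False):
    A = design_matrix(X, degree, cross_products)
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    pct = 100 * np.abs((y - A @ beta) / y)
    return beta, pct.max(), pct.mean()             # max and mean |% error|

rng = np.random.default_rng(0)
X = rng.uniform(1, 2, (60, 3))
y = 2 + X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2] + 0.01 * rng.standard_normal(60)
for flag in (False, True):                         # with and without cross products
    print(flag, fit(X, y, degree=2, cross_products=flag)[1:])
```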
Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward
2017-08-01
Urban air quality is an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments, but is more challenging in a mountainous, high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven air pollutants (the gaseous pollutants CO, NO2, NOx, O3 and SO2, and the particulate pollutants PM2.5 and PM10) with reference to three different time periods (summertime, wintertime and the annual average of 5 years of long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. Under the mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometrical analysis. As a result, 269 independent variables were examined to develop the LUR models by using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross-validation was performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance for the annual averaged NO2 concentration level by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Berman, A. L.; Wackley, J. A.; Rockwell, S. T.; Yee, J. G.
1976-01-01
The 1976 Pioneer 11 solar conjunction provided the opportunity to accumulate a substantial quantity of doppler noise data over a dynamic range of signal closest approach point heliographic latitudes. The observed doppler noise data were fit to the doppler noise model ISED, and the deviations of the observed doppler noise data from the model were used to construct a (multiplicative) function to describe the effect of heliographic latitude. This expression was then incorporated into the ISED model to produce a new doppler noise model, ISEDB.
Gan, Rui; Perez, Jessica G; Carlson, Erik D; Ntai, Ioanna; Isaacs, Farren J; Kelleher, Neil L; Jewett, Michael C
2017-05-01
The ability to site-specifically incorporate non-canonical amino acids (ncAAs) into proteins has made possible the study of protein structure and function in fundamentally new ways, as well as the biosynthesis of unnatural polymers. However, the task of site-specifically incorporating multiple ncAAs into proteins with high purity and yield continues to present a challenge. At the heart of this challenge lies the lower efficiency of engineered orthogonal translation system components compared to their natural counterparts (e.g., translation elements that specifically use a ncAA and do not interact with the cell's natural translation apparatus). Here, we show that evolving and tuning expression levels of multiple components of an engineered translation system together as a whole enhances ncAA incorporation efficiency. Specifically, we increase protein yield when incorporating multiple p-azido-phenylalanine (pAzF) residues into proteins by (i) evolving the Methanocaldococcus jannaschii p-azido-phenylalanyl-tRNA synthetase anti-codon binding domain, (ii) evolving the elongation factor Tu amino acid-binding pocket, and (iii) tuning the expression of evolved translation machinery components in a single vector. Use of the evolved translation machinery in a genomically recoded organism lacking release factor one enabled enhanced multi-site ncAA incorporation into proteins. We anticipate that our approach to orthogonal translation system development will accelerate and expand our ability to site-specifically incorporate multiple ncAAs into proteins and biopolymers, advancing new horizons for synthetic and chemical biotechnology. Biotechnol. Bioeng. 2017;114: 1074-1086. © 2016 Wiley Periodicals, Inc.
Bayesian networks improve causal environmental ...
Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, conventional weight of evidence approaches ignore sources of uncertainty that can be accounted for with Bayesian networks. Specifying and propagating uncertainties improves the ability of models to incorporate the strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on value
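The evidence-combination step can be made concrete with a tiny hand-built network (all probabilities invented): a binary cause node with two conditionally independent lines of evidence, combined by Bayes' rule so each new observation updates, and typically narrows, the uncertainty about the cause.

```python
# P(cause), and P(evidence_i | cause) for two lines of evidence (illustrative)
p_cause = 0.30
like = {  # likelihoods: evidence observed given cause present / absent
    "bioassay":  (0.80, 0.15),
    "field_obs": (0.70, 0.25),
}

def posterior(observed):
    """observed: dict of evidence name -> bool; returns P(cause | evidence)."""
    num = p_cause
    den = 1.0 - p_cause
    for name, seen in observed.items():
        p1, p0 = like[name]
        num *= p1 if seen else 1.0 - p1
        den *= p0 if seen else 1.0 - p0
    return num / (num + den)

print(posterior({"bioassay": True}))                      # one line of evidence
print(posterior({"bioassay": True, "field_obs": True}))   # both lines combined
```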
A nursing-specific model of EPR documentation: organizational and professional requirements.
von Krogh, Gunn; Nåden, Dagfinn
2008-01-01
To present the Norwegian KPO documentation model (quality assurance, problem solving, and caring), and the requirements and multiple electronic patient record (EPR) functions the model is designed to address. The model's professional substance, a conceptual framework for nursing practice, was developed by examining, reorganizing, and completing existing frameworks. The model's methodology, an information management system, was developed using an expert group. Both model elements were clinically tested over a period of 1 year. The model is designed for nursing documentation in step with statutory, organizational, and professional requirements. Complete documentation is arranged for by incorporating the Nursing Minimum Data Set. Systematic and comprehensive documentation is arranged for by establishing categories as provided in the model's framework domains. Consistent documentation is arranged for by incorporating NANDA-I Nursing Diagnoses, the Nursing Interventions Classification, and the Nursing Outcomes Classification. The model can be used as a tool in cooperation with vendors to ensure that the interests of the nursing profession are met when developing EPR solutions in healthcare. The model can provide clinicians with a framework for documentation in step with legal and organizational requirements, while retaining the ability to record all aspects of clinical nursing.
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a wide-spread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus, generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005
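The key trick — solving the steady-state equations for kinetic parameters, which turns polynomial root-finding into a linear system — is easy to see on a two-species toy pathway (the rate laws and numbers are illustrative):

```python
# Toy network:  0 --s--> A --k1--> B --k2--> 0
#   dA/dt = s - k1*A
#   dB/dt = k1*A - k2*B
# Solving dA/dt = dB/dt = 0 for the *concentrations* requires root-finding in
# general; solving instead for the *kinetic parameters* at an imposed steady
# state (A*, B*) is linear and yields a unique, non-negative solution here.
s, A_star, B_star = 2.0, 4.0, 5.0

k1 = s / A_star            # from s - k1*A* = 0
k2 = k1 * A_star / B_star  # from k1*A* - k2*B* = 0

# residuals confirm (A*, B*) is a steady state of the parameterized model
assert abs(s - k1 * A_star) < 1e-12 and abs(k1 * A_star - k2 * B_star) < 1e-12
```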
NASA Astrophysics Data System (ADS)
Possemiers, Mathias; Huysmans, Marijke; Batelaan, Okke
2015-08-01
Adequate aquifer characterization and simulation using heat transport models are indispensable for determining the optimal design for aquifer thermal energy storage (ATES) systems and wells. Recent model studies indicate that meter-scale heterogeneities in the hydraulic conductivity field introduce a considerable uncertainty in the distribution of thermal energy around an ATES system and can lead to a reduction in the thermal recoverability. In a study site in Bierbeek, Belgium, the influence of centimeter-scale clay drapes on the efficiency of a doublet ATES system and the distribution of the thermal energy around the ATES wells are quantified. Multiple-point geostatistical simulation of edge properties is used to incorporate the clay drapes in the models. The results show that clay drapes have an influence both on the distribution of thermal energy in the subsurface and on the efficiency of the ATES system. The distribution of the thermal energy is determined by the strike of the clay drapes, with the major axis of anisotropy parallel to the clay drape strike. The clay drapes have a negative impact (3.3-3.6 %) on the energy output in the models without a hydraulic gradient. In the models with a hydraulic gradient, however, the presence of clay drapes has a positive influence (1.6-10.2 %) on the energy output of the ATES system. It is concluded that it is important to incorporate small-scale heterogeneities in heat transport models to obtain a better estimate of ATES efficiency and the distribution of thermal energy.
NASA Astrophysics Data System (ADS)
Possemiers, Mathias; Huysmans, Marijke; Batelaan, Okke
2015-04-01
Adequate aquifer characterization and simulation using heat transport models are indispensable for determining the optimal design for Aquifer Thermal Energy Storage (ATES) systems and wells. Recent model studies indicate that meter-scale heterogeneities in the hydraulic conductivity field introduce a considerable uncertainty in the distribution of thermal energy around an ATES system and can lead to a reduction in the thermal recoverability. In this paper, the influence of centimeter-scale clay drapes on the efficiency of a doublet ATES system and the distribution of the thermal energy around the ATES wells are quantified. Multiple-point geostatistical simulation of edge properties is used to incorporate the clay drapes in the models. The results show that clay drapes have an influence both on the distribution of thermal energy in the subsurface and on the efficiency of the ATES system. The distribution of the thermal energy is determined by the strike of the clay drapes, with the major axis of anisotropy parallel to the clay drape strike. The clay drapes have a negative impact (3.3-3.6%) on the energy output in the models without a hydraulic gradient. In the models with a hydraulic gradient, however, the presence of clay drapes has a positive influence (1.6-10.2%) on the energy output of the ATES system. It is concluded that it is important to incorporate small-scale heterogeneities in heat transport models to obtain a better estimate of ATES efficiency and the distribution of thermal energy.
NASA Astrophysics Data System (ADS)
Campbell, John L.; Ganly, Brianna; Heirwegh, Christopher M.; Maxwell, John A.
2018-01-01
Multiple ionization satellites are prominent features in X-ray spectra induced by MeV-energy alpha particles. It follows that the accuracy of PIXE analysis using alpha particles can be improved if these features are explicitly incorporated in the peak model description when fitting the spectra with GUPIX or other codes for least-squares fitting of PIXE spectra and extracting element concentrations. A method for this incorporation is described and is tested using spectra recorded on Mars by the Curiosity rover's alpha particle X-ray spectrometer. These spectra are induced by both PIXE and X-ray fluorescence, resulting in a spectral energy range from ∼1 to ∼25 keV. This range is valuable in determining the energy-channel calibration, which departs from linearity at low X-ray energies, and it makes it possible to separate the effects of the satellites from an instrumental non-linearity component. The quality of least-squares spectrum fits is significantly improved, raising the level of confidence in analytical results from alpha-induced PIXE.
NASA Astrophysics Data System (ADS)
Aghaei, A.
2017-12-01
Digital imaging and modeling of rocks and subsequent simulation of physical phenomena in digitally constructed rock models are becoming an integral part of core analysis workflows. One of the inherent limitations of image-based analysis, at any given scale, is image resolution. This limitation becomes more evident when the rock has multiple scales of porosity, such as in carbonates and tight sandstones. Multi-scale imaging and construction of hybrid models that encompass images acquired at multiple scales and resolutions are proposed as a solution to this problem. In this study, we investigate the effect of image resolution and unresolved porosity on petrophysical and two-phase flow properties calculated from images. A helical X-ray micro-CT scanner with a high cone angle is used to acquire digital rock images that are free of geometric distortion. To remove subjectivity from the analyses, a semi-automated image processing technique is used to process and segment the acquired data into multiple phases. Direct and pore-network-based models are used to simulate physical phenomena and obtain absolute permeability, formation factor, and two-phase flow properties such as relative permeability and capillary pressure. The effect of image resolution on each property is investigated. Finally, a hybrid network model incorporating images at multiple resolutions is built and used for simulations. The results from the hybrid model are compared against results from the model built at the highest resolution and those from laboratory tests.
Ke, A; Barter, Z; Rowland‐Yeo, K
2016-01-01
In this study, we present efavirenz physiologically based pharmacokinetic (PBPK) model development as an example of our best practice approach that uses a stepwise approach to verify the different components of the model. First, a PBPK model for efavirenz incorporating in vitro and clinical pharmacokinetic (PK) data was developed to predict exposure following multiple dosing (600 mg q.d.). Alfentanil i.v. and p.o. drug‐drug interaction (DDI) studies were utilized to evaluate and refine the CYP3A4 induction component in the liver and gut. Next, independent DDI studies with substrates of CYP3A4 (maraviroc, atazanavir, and clarithromycin) and CYP2B6 (bupropion) verified the induction components of the model (area under the curve [AUC] ratios within 1.0–1.7‐fold of observed). Finally, the model was refined to incorporate the fractional contribution of enzymes, including CYP2B6, propagating autoinduction into the model (Racc 1.7 vs. 1.7 observed). This validated mechanistic model can now be applied in clinical pharmacology studies to prospectively assess both the victim and perpetrator DDI potential of efavirenz. PMID:27435752
J.M. Warren; F.C. Meinzer; J.R. Brooks; J.-C. Domec; R. Coulombe
2006-01-01
We incorporated soil/plant biophysical properties into a simple model to predict seasonal trajectories of hydraulic redistribution (HR). We measured soil water content, water potential, root conductivity, and climate across multiple years in two old-growth coniferous forests. The HR variability within sites (0 to 0.5 mm/d) was linked to spatial patterns of roots, soil...
Chen, Can; Chen, Deli; Pan, Jianjun; Lam, Shu Kee
2013-01-01
Straw retention has been shown to reduce carbon dioxide (CO2) emission from agricultural soils. However, it remains challenging for models to predict CO2 emission fluxes under different straw retention methods. We used maize season data in the Griffith region, Australia, to test whether the denitrification-decomposition (DNDC) model could simulate annual CO2 emission. We also identified driving factors of CO2 emission by correlation analysis and path analysis. We show that the DNDC model was able to simulate CO2 emission under alternative straw retention scenarios. The correlation coefficients between simulated and observed daily values for treatments of straw burn and straw incorporation were 0.74 and 0.82, respectively, in the straw retention period and 0.72 and 0.83, respectively, in the crop growth period. The results also show that simulated values of annual CO2 emission for straw burn and straw incorporation were 3.45 t C ha(-1) y(-1) and 2.13 t C ha(-1) y(-1), respectively. In addition, the DNDC model was found to be more suitable for simulating CO2 emission fluxes under straw incorporation. Finally, standard multiple regression of CO2 emission on candidate driving factors found that soil mean temperature (SMT), daily mean temperature (T mean), and water-filled pore space (WFPS) were significant predictors.
Chen, Deli; Pan, Jianjun; Lam, Shu Kee
2013-01-01
Straw retention has been shown to reduce carbon dioxide (CO2) emission from agricultural soils. However, it remains challenging for models to predict CO2 emission fluxes under different straw retention methods. We used maize season data in the Griffith region, Australia, to test whether the denitrification-decomposition (DNDC) model could simulate annual CO2 emission. We also identified driving factors of CO2 emission by correlation analysis and path analysis. We show that the DNDC model was able to simulate CO2 emission under alternative straw retention scenarios. The correlation coefficients between simulated and observed daily values for treatments of straw burn and straw incorporation were 0.74 and 0.82, respectively, in the straw retention period and 0.72 and 0.83, respectively, in the crop growth period. The results also show that simulated values of annual CO2 emission for straw burn and straw incorporation were 3.45 t C ha−1 y−1 and 2.13 t C ha−1 y−1, respectively. In addition, the DNDC model was found to be more suitable for simulating CO2 emission fluxes under straw incorporation. Finally, standard multiple regression of CO2 emission on candidate driving factors found that soil mean temperature (SMT), daily mean temperature (T mean), and water-filled pore space (WFPS) were significant predictors. PMID:24453915
An In Situ One-Pot Synthetic Approach towards Multivariate Zirconium MOFs.
Sun, Yujia; Sun, Lixian; Feng, Dawei; Zhou, Hong-Cai
2016-05-23
Chemically highly stable MOFs incorporating multiple functionalities are of great interest for applications under harsh environments. Herein, we present a facile one-pot synthetic strategy to incorporate multiple functionalities into stable Zr-MOFs from mixed ligands of different geometry and connectivity. Via this strategy, tetratopic tetrakis(4-carboxyphenyl)porphyrin (TCPP) ligands were successfully integrated into UiO-66 while maintaining the crystal structure, morphology, and ultrahigh chemical stability of UiO-66. The amount of incorporated TCPP is controllable. Through various combinations of BDC derivatives and TCPP, 49 MOFs with multiple functionalities were obtained. Among them, MOFs modified with FeTCPPCl were demonstrated to be catalytically active for the oxidation of ABTS. We anticipate our strategy to provide a facile route to introduce multiple functionalities into stable Zr-MOFs for a wide variety of potential applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Conceptual models for cumulative risk assessment.
Linder, Stephen H; Sexton, Ken
2011-12-01
In the absence of scientific consensus on an appropriate theoretical framework, cumulative risk assessment and related research have relied on speculative conceptual models. We argue for the importance of theoretical backing for such models and discuss 3 relevant theoretical frameworks, each supporting a distinctive "family" of models. Social determinant models postulate that unequal health outcomes are caused by structural inequalities; health disparity models envision social and contextual factors acting through individual behaviors and biological mechanisms; and multiple stressor models incorporate environmental agents, emphasizing the intermediary role of these and other stressors. The conclusion is that more careful reliance on established frameworks will lead directly to improvements in characterizing cumulative risk burdens and accounting for disproportionate adverse health effects.
Conceptual Models for Cumulative Risk Assessment
Sexton, Ken
2011-01-01
In the absence of scientific consensus on an appropriate theoretical framework, cumulative risk assessment and related research have relied on speculative conceptual models. We argue for the importance of theoretical backing for such models and discuss 3 relevant theoretical frameworks, each supporting a distinctive “family” of models. Social determinant models postulate that unequal health outcomes are caused by structural inequalities; health disparity models envision social and contextual factors acting through individual behaviors and biological mechanisms; and multiple stressor models incorporate environmental agents, emphasizing the intermediary role of these and other stressors. The conclusion is that more careful reliance on established frameworks will lead directly to improvements in characterizing cumulative risk burdens and accounting for disproportionate adverse health effects. PMID:22021317
Owen, Rhiannon K; Cooper, Nicola J; Quinn, Terence J; Lees, Rosalind; Sutton, Alex J
2018-07-01
Network meta-analyses (NMA) have been used extensively to compare the effectiveness of multiple interventions for health care policy and decision-making. However, methods for evaluating the performance of multiple diagnostic tests are less established. In a decision-making context, we are often interested in comparing and ranking the performance of multiple diagnostic tests, at varying levels of test thresholds, in one simultaneous analysis. Motivated by an example of cognitive impairment diagnosis following stroke, we synthesized data from 13 studies assessing the efficiency of two diagnostic tests: Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA), at two test thresholds: MMSE <25/30 and <27/30, and MoCA <22/30 and <26/30. Using Markov chain Monte Carlo (MCMC) methods, we fitted a bivariate network meta-analysis model incorporating constraints on increasing test threshold and accounting for the correlations between multiple test accuracy measures from the same study. We developed and successfully fitted a model comparing multiple test/threshold combinations while imposing threshold constraints. Using this model, we found that MoCA at threshold <26/30 appeared to have the best true positive rate, whereas MMSE at threshold <25/30 appeared to have the best true negative rate. The combined analysis of multiple tests at multiple thresholds allowed for more rigorous comparisons between competing diagnostic tests for decision making. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
The water balance of the urban Salt Lake Valley: a multiple-box model validated by observations
NASA Astrophysics Data System (ADS)
Stwertka, C.; Strong, C.
2012-12-01
A main focus of the recently awarded National Science Foundation (NSF) EPSCoR Track-1 research project "innovative Urban Transitions and Arid-region Hydro-sustainability (iUTAH)" is to quantify the primary components of the water balance for the Wasatch region and to evaluate their sensitivity to climate change and projected urban development. Building on the multiple-box model that we developed and validated for carbon dioxide (Strong et al. 2011), mass balance equations for water in the atmosphere and surface are incorporated into the modeling framework. The model is used to determine how surface fluxes, ground-water transport, biological fluxes, and meteorological processes regulate water cycling within and around the urban Salt Lake Valley. The model is also used to evaluate the hypotheses that increased water demand associated with urban growth in Salt Lake Valley will (1) elevate sensitivity to projected climate variability and (2) motivate more attentive management of urban water use and evaporative fluxes.
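A minimal numeric sketch of the box-model idea (coupled mass balances stepped forward in time) is given below; the two-box structure, fluxes, and all constants are assumptions for illustration, not the iUTAH model:

```python
import numpy as np

# Minimal sketch of a box-model water balance, not the iUTAH model itself.
# States are water stored in an atmospheric box (Wa) and a surface box (Ws)
# over the valley; all fluxes and constants are illustrative assumptions.
def step(Wa, Ws, dt, advect_in, advect_out_rate, E, P, runoff_rate):
    dWa = advect_in - advect_out_rate * Wa + E - P   # atmospheric box (mm/d)
    dWs = P - E - runoff_rate * Ws                   # surface box (mm/d)
    return Wa + dt * dWa, Ws + dt * dWs

Wa, Ws = 20.0, 100.0   # mm of stored water, assumed initial values
for day in range(365):
    Wa, Ws = step(Wa, Ws, dt=1.0, advect_in=3.0,
                  advect_out_rate=0.15, E=2.0, P=2.5, runoff_rate=0.01)
```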
Additive-Multiplicative Approximation of Genotype-Environment Interaction
Gimelfarb, A.
1994-01-01
A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first degree polynomial) approximation of genotypic and environmental effects to a second degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that GxE interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113
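The abstract's central claim about the environmental variance can be checked directly. In the sketch below, the phenotype is written as P = m + g + e + k·g·e, so Var(P | g) = (1 + k·g)²·Var(e), a convex parabola in the genotypic value g; the parameter values are assumptions:

```python
import numpy as np

# Sketch of the additive-multiplicative approximation described above:
# phenotype P = m + g + e + k*g*e, with genotypic value g and environmental
# deviation e. All numerical values are assumptions for illustration.
m, k = 10.0, 0.3
var_e = 1.0

def environmental_variance(g):
    # Var(P | g) = (1 + k*g)^2 * Var(e): a convex parabola in g with a
    # minimum of zero at g = -1/k, as the abstract describes.
    return (1.0 + k * g) ** 2 * var_e

for g in (-5.0, -1.0 / k, 2.0):
    print(g, environmental_variance(g))
```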
Nonstationary multivariate modeling of cerebral autoregulation during hypercapnia.
Kostoglou, Kyriaki; Debert, Chantel T; Poulin, Marc J; Mitsis, Georgios D
2014-05-01
We examined the time-varying characteristics of cerebral autoregulation and hemodynamics during a step hypercapnic stimulus by using recursively estimated multivariate (two-input) models which quantify the dynamic effects of mean arterial blood pressure (ABP) and end-tidal CO2 tension (PETCO2) on middle cerebral artery blood flow velocity (CBFV). Beat-to-beat values of ABP and CBFV, as well as breath-to-breath values of PETCO2 during baseline and sustained euoxic hypercapnia were obtained in 8 female subjects. The multiple-input, single-output models used were based on the Laguerre expansion technique, and their parameters were updated using recursive least squares with multiple forgetting factors. The results reveal the presence of nonstationarities that confirm previously reported effects of hypercapnia on autoregulation, i.e., a decrease in the ABP phase lead, and suggest that the incorporation of PETCO2 as an additional model input yields less time-varying estimates of dynamic pressure autoregulation obtained from single-input (ABP-CBFV) models. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
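A minimal sketch of the recursive estimation idea follows, using standard recursive least squares with a single forgetting factor; the paper's Laguerre expansion and multiple forgetting factors are omitted, and the synthetic signals are assumptions:

```python
import numpy as np

# Minimal recursive least squares with one forgetting factor, as a sketch of
# tracking time-varying model parameters; all signals are synthetic.
def rls_update(theta, P, phi, y, lam=0.98):
    # phi: regressor vector (e.g., lagged ABP and PETCO2 terms)
    # y:   current CBFV sample; lam < 1 discounts old data
    k = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + k * (y - phi @ theta)     # parameter update
    P = (P - np.outer(k, phi @ P)) / lam      # covariance update
    return theta, P

n = 4
theta, P = np.zeros(n), 1e3 * np.eye(n)
rng = np.random.default_rng(0)
for t in range(500):
    phi = rng.normal(size=n)
    y = phi @ np.array([0.5, -0.2, 0.1, 0.3]) + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
```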
Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models
NASA Astrophysics Data System (ADS)
Altuntas, Alper; Baugh, John
2017-07-01
Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
Estimation of health effects of prenatal methylmercury exposure using structural equation models.
Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe; Weihe, Pal
2002-10-14
Observational studies in epidemiology always involve concerns regarding validity, especially measurement error, confounding, missing data, and other problems that may affect the study outcomes. Widely used standard statistical techniques, such as multiple regression analysis, may to some extent adjust for these shortcomings. However, structural equations may incorporate most of these considerations, thereby providing overall adjusted estimations of associations. This approach was used in a large epidemiological data set from a prospective study of developmental methylmercury toxicity. Structural equation models were developed for assessment of the association between biomarkers of prenatal mercury exposure and neuropsychological test scores in 7-year-old children. Eleven neurobehavioral outcomes were grouped into motor function and verbally mediated function. Adjustment for local dependence and item bias was necessary for a satisfactory fit of the model, but had little impact on the estimated mercury effects. The mercury effect on the two latent neurobehavioral functions was similar to the strongest effects seen for individual test scores of motor function and verbal skills. Adjustment for contaminant exposure to polychlorinated biphenyls (PCBs) changed the estimates only marginally, but the mercury effect could be reduced to non-significance by assuming a large measurement error for the PCB biomarker. The structural equation analysis allows correction for measurement error in exposure variables and incorporation of multiple outcomes and incomplete cases. This approach therefore deserves to be applied more frequently in the analysis of complex epidemiological data sets.
Sosenko, Jay M; Skyler, Jay S; Palmer, Jerry P; Krischer, Jeffrey P; Yu, Liping; Mahon, Jeffrey; Beam, Craig A; Boulware, David C; Rafkin, Lisa; Schatz, Desmond; Eisenbarth, George
2013-09-01
We assessed whether a risk score that incorporates levels of multiple islet autoantibodies could enhance the prediction of type 1 diabetes (T1D). TrialNet Natural History Study participants (n = 784) were tested for three autoantibodies (GADA, IA-2A, and mIAA) at their initial screening. Samples from those positive for at least one autoantibody were subsequently tested for ICA and ZnT8A. An autoantibody risk score (ABRS) was developed from a proportional hazards model that combined autoantibody levels from each autoantibody along with their designations of positivity and negativity. The ABRS was strongly predictive of T1D (hazard ratio [with 95% CI] 2.72 [2.23-3.31], P < 0.001). Receiver operating characteristic curve areas (with 95% CI) for the ABRS revealed good predictability (0.84 [0.78-0.90] at 2 years, 0.81 [0.74-0.89] at 3 years, P < 0.001 for both). The composite of levels from the five autoantibodies was predictive of T1D before and after an adjustment for the positivity or negativity of autoantibodies (P < 0.001). The findings were almost identical when ICA was excluded from the risk score model. The combination of the ABRS and the previously validated Diabetes Prevention Trial-Type 1 Risk Score (DPTRS) predicted T1D more accurately (0.93 [0.88-0.98] at 2 years, 0.91 [0.83-0.99] at 3 years) than either the DPTRS or the ABRS alone (P ≤ 0.01 for all comparisons). These findings show the importance of considering autoantibody levels in assessing the risk of T1D. Moreover, levels of multiple autoantibodies can be incorporated into an ABRS that accurately predicts T1D.
Sosenko, Jay M.; Skyler, Jay S.; Palmer, Jerry P.; Krischer, Jeffrey P.; Yu, Liping; Mahon, Jeffrey; Beam, Craig A.; Boulware, David C.; Rafkin, Lisa; Schatz, Desmond; Eisenbarth, George
2013-01-01
OBJECTIVE We assessed whether a risk score that incorporates levels of multiple islet autoantibodies could enhance the prediction of type 1 diabetes (T1D). RESEARCH DESIGN AND METHODS TrialNet Natural History Study participants (n = 784) were tested for three autoantibodies (GADA, IA-2A, and mIAA) at their initial screening. Samples from those positive for at least one autoantibody were subsequently tested for ICA and ZnT8A. An autoantibody risk score (ABRS) was developed from a proportional hazards model that combined autoantibody levels from each autoantibody along with their designations of positivity and negativity. RESULTS The ABRS was strongly predictive of T1D (hazard ratio [with 95% CI] 2.72 [2.23–3.31], P < 0.001). Receiver operating characteristic curve areas (with 95% CI) for the ABRS revealed good predictability (0.84 [0.78–0.90] at 2 years, 0.81 [0.74–0.89] at 3 years, P < 0.001 for both). The composite of levels from the five autoantibodies was predictive of T1D before and after an adjustment for the positivity or negativity of autoantibodies (P < 0.001). The findings were almost identical when ICA was excluded from the risk score model. The combination of the ABRS and the previously validated Diabetes Prevention Trial–Type 1 Risk Score (DPTRS) predicted T1D more accurately (0.93 [0.88–0.98] at 2 years, 0.91 [0.83–0.99] at 3 years) than either the DPTRS or the ABRS alone (P ≤ 0.01 for all comparisons). CONCLUSIONS These findings show the importance of considering autoantibody levels in assessing the risk of T1D. Moreover, levels of multiple autoantibodies can be incorporated into an ABRS that accurately predicts T1D. PMID:23818528
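A hedged sketch of building an autoantibody risk score from a proportional hazards model is shown below, using the lifelines library on synthetic data; the column names, coefficients, and data are assumptions, not the TrialNet dataset or the published ABRS:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic cohort: autoantibody levels drive a hazard of progression to T1D.
# Everything here (effect sizes, follow-up window) is assumed for illustration.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    'gada_level': rng.gamma(2.0, 0.5, n),
    'ia2a_level': rng.gamma(2.0, 0.5, n),
    'miaa_level': rng.gamma(2.0, 0.5, n),
})
hazard = 0.05 * np.exp(0.8 * df.gada_level + 0.5 * df.ia2a_level
                       + 0.3 * df.miaa_level)
time = rng.exponential(1.0 / hazard)
df['followup_years'] = np.minimum(time, 5.0)      # censor at 5 years
df['t1d'] = (time <= 5.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col='followup_years', event_col='t1d')
# The linear predictor (log partial hazard) serves as the risk score
risk_score = cph.predict_log_partial_hazard(df)
```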
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Rajan, Subramaniam; Blankenhorn, Gunther
2016-01-01
A material model which incorporates several key capabilities which have been identified by the aerospace community as lacking in the composite impact models currently available in LS-DYNA(Registered Trademark) is under development. In particular, the material model, which is being implemented as MAT 213 into a tailored version of LS-DYNA being jointly developed by the FAA and NASA, incorporates both plasticity and damage, utilizes experimentally based tabulated input to define the evolution of plasticity and damage as opposed to specifying discrete input parameters (such as modulus and strength), and is able to analyze the response of composites composed with a variety of fiber architectures. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. The capability to account for the rate and temperature dependent deformation response of composites has also been incorporated into the material model. For the damage model, a strain equivalent formulation is utilized to allow for the uncoupling of the deformation and damage analyses. In the damage model, a diagonal damage tensor is defined to account for the directionally dependent variation of damage. However, in composites it has been found that loading in one direction can lead to damage in multiple coordinate directions. To account for this phenomenon, the terms in the damage matrix are semi-coupled such that the damage in a particular coordinate direction is a function of the stresses and plastic strains in all of the coordinate directions. The onset of material failure, and thus element deletion, is being developed to be a function of the stresses and plastic strains in the various coordinate directions. Systematic procedures are being developed to generate the required input parameters based on the results of experimental tests.
Combining multiple imputation and meta-analysis with individual participant data
Burgess, Stephen; White, Ian R; Resche-Rigon, Matthieu; Wood, Angela M
2013-01-01
Multiple imputation is a strategy for the analysis of incomplete data such that the impact of the missingness on the power and bias of estimates is mitigated. When data from multiple studies are collated, we can propose both within-study and multilevel imputation models to impute missing data on covariates. It is not clear how to choose between imputation models or how to combine imputation and inverse-variance weighted meta-analysis methods. This is especially important as often different studies measure data on different variables, meaning that we may need to impute data on a variable which is systematically missing in a particular study. In this paper, we consider a simulation analysis of sporadically missing data in a single covariate with a linear analysis model and discuss how the results would be applicable to the case of systematically missing data. We find in this context that ensuring the congeniality of the imputation and analysis models is important to give correct standard errors and confidence intervals. For example, if the analysis model allows between-study heterogeneity of a parameter, then we should incorporate this heterogeneity into the imputation model to maintain the congeniality of the two models. In an inverse-variance weighted meta-analysis, we should impute missing data and apply Rubin's rules at the study level prior to meta-analysis, rather than meta-analyzing each of the multiple imputations and then combining the meta-analysis estimates using Rubin's rules. We illustrate the results using data from the Emerging Risk Factors Collaboration. PMID:23703895
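The recommended ordering (Rubin's rules within each study, then inverse-variance meta-analysis of the pooled study estimates) can be sketched in a few lines; the study inputs below are illustrative assumptions:

```python
import numpy as np

# Sketch of the ordering the paper recommends: apply Rubin's rules to the
# multiple imputations *within* each study, then inverse-variance
# meta-analyze the pooled study estimates. Inputs are assumed values.
def rubin(estimates, variances):
    m = len(estimates)
    qbar = np.mean(estimates)                  # pooled point estimate
    ubar = np.mean(variances)                  # within-imputation variance
    b = np.var(estimates, ddof=1)              # between-imputation variance
    t = ubar + (1 + 1 / m) * b                 # total variance
    return qbar, t

# Per study: m imputation-specific estimates and their variances
studies = [
    (np.array([0.42, 0.38, 0.45]), np.array([0.010, 0.011, 0.009])),
    (np.array([0.30, 0.35, 0.28]), np.array([0.020, 0.018, 0.022])),
]
est, var = zip(*(rubin(e, v) for e, v in studies))
w = 1.0 / np.asarray(var)
pooled = np.sum(w * np.asarray(est)) / np.sum(w)   # fixed-effect meta-analysis
```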
AgMIP: Next Generation Models and Assessments
NASA Astrophysics Data System (ADS)
Rosenzweig, C.
2014-12-01
Next steps in developing next-generation crop models fall into several categories: significant improvements in simulation of important crop processes and responses to stress; extension from simplified crop models to complex cropping systems models; and scaling up from site-based models to landscape, national, continental, and global scales. Crop processes that require major leaps in understanding and simulation in order to narrow uncertainties around how crops will respond to changing atmospheric conditions include genetics; carbon, temperature, water, and nitrogen; ozone; and nutrition. The field of crop modeling has been built on a single crop-by-crop approach. It is now time to create a new paradigm, moving from 'crop' to 'cropping system.' A first step is to set up the simulation technology so that modelers can rapidly incorporate multiple crops within fields, and multiple crops over time. Then the response of these more complex cropping systems can be tested under different sustainable intensification management strategies utilizing the updated simulation environments. Model improvements for diseases, pests, and weeds include developing process-based models for important diseases, frameworks for coupling air-borne diseases to crop models, gathering significantly more data on crop impacts, and enabling the evaluation of pest management strategies. Most smallholder farming in the world involves integrated crop-livestock systems that cannot be represented by crop modeling alone. Thus, next-generation cropping system models need to include key linkages to livestock. Livestock linkages to be incorporated include growth and productivity models for grasslands and rangelands as well as the usual annual crops. There are several approaches for scaling up, including use of gridded models and development of simpler quasi-empirical models for landscape-scale analysis. On the assessment side, AgMIP is leading a community process for coordinated contributions to IPCC AR6 that involves the key modeling groups from around the world including North America, Europe, South America, Sub-Saharan Africa, South Asia, East Asia, and Australia and Oceania. This community process will lead to mutually agreed protocols for coordinated global and regional assessments.
Bakas, Spyridon; Zeng, Ke; Sotiras, Aristeidis; Rathore, Saima; Akbari, Hamed; Gaonkar, Bilwaj; Rozycki, Martin; Pati, Sarthak; Davatzikos, Christos
2016-01-01
We present an approach for segmenting low- and high-grade gliomas in multimodal magnetic resonance imaging volumes. The proposed approach is based on a hybrid generative-discriminative model. Firstly, a generative approach based on an Expectation-Maximization framework that incorporates a glioma growth model is used to segment the brain scans into tumor, as well as healthy tissue labels. Secondly, a gradient boosting multi-class classification scheme is used to refine tumor labels based on information from multiple patients. Lastly, a probabilistic Bayesian strategy is employed to further refine and finalize the tumor segmentation based on patient-specific intensity statistics from the multiple modalities. We evaluated our approach in 186 cases during the training phase of the BRAin Tumor Segmentation (BRATS) 2015 challenge and report promising results. During the testing phase, the algorithm was additionally evaluated in 53 unseen cases, achieving the best performance among the competing methods.
Protein Turnover Measurements in Human Serum by Serial Immunoaffinity LC-MS/MS.
Farrokhi, Vahid; Chen, Xiaoying; Neubert, Hendrik
2018-02-01
The half-life of target proteins is frequently an important parameter in mechanistic pharmacokinetic and pharmacodynamic (PK/PD) modeling of biotherapeutics. Clinical studies for accurate measurement of physiologically relevant protein turnover can reduce the uncertainty in PK/PD model-based predictions, for example, of the therapeutic dose and dosing regimen in first-in-human clinical trials. We used a targeted mass spectrometry workflow based on serial immunoaffinity enrichment of multiple human serum proteins from a [5,5,5-2H3]-L-leucine tracer pulse-chase study in healthy volunteers. To confirm the reproducibility of turnover measurements from serial immunoaffinity enrichment, multiple aliquots from the same sample set were subjected to protein turnover analysis in varying order. Tracer incorporation was measured by multiple-reaction-monitoring mass spectrometry and target turnover was calculated using a four-compartment pharmacokinetic model. Five proteins of clinical or therapeutic relevance, including soluble tumor necrosis factor receptor superfamily member 12A, tissue factor pathway inhibitor, soluble interleukin 1 receptor like 1, soluble mucosal addressin cell adhesion molecule 1, and muscle-specific creatine kinase, were sequentially subjected to turnover analysis from the same human serum sample. Calculated half-lives ranged from 5-15 h; however, no tracer incorporation was observed for mucosal addressin cell adhesion molecule 1. The utility of clinical pulse-chase studies to investigate protein turnover can be extended by serial immunoaffinity enrichment of target proteins. Turnover analysis from serum and subsequently from remaining supernatants provided analytical sensitivity and reproducibility for multiple human target proteins in the same sample set, irrespective of the order of analysis. © 2017 American Association for Clinical Chemistry.
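As a simplified illustration of turnover estimation from tracer incorporation, the sketch below fits a single-pool rise-to-plateau model and converts the rate constant to a half-life; the study itself used a four-compartment model, and the data points here are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-pool rise-to-plateau sketch of tracer incorporation kinetics.
# The actual study fitted a four-compartment PK model; data are invented.
def enrichment(t, plateau, k):
    return plateau * (1.0 - np.exp(-k * t))

t = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 24.0])   # hours after tracer start
y = np.array([0.0, 0.9, 1.5, 2.1, 2.45, 2.5])    # % label incorporation
(plateau, k), _ = curve_fit(enrichment, t, y, p0=(2.5, 0.1))
half_life = np.log(2.0) / k                       # h; plausibly in the 5-15 h range
```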
Modeling of Nickel Hydroxide Electrode Containing Multiple Phases
NASA Technical Reports Server (NTRS)
Timmerman, P.; Ratnakumar, B. V.; Di Stefano, S.
1996-01-01
Mathematical models of alkaline rechargeable nickel cell systems (e.g., Ni-Cd, Ni-H(sub 2) and Ni-MH) have so far been developed based on the assumption that the active material at the Ni electrode exists primarily in a single phase as Beta-NiOOH -- Beta-Ni(OH)(sub 2), despite ample experimental evidence for a second phase, i.e., Gamma-NiOOH -- Alpha-Ni(OH)(sub 2), especially under conditions of extended coverage. Here, we have incorporated the additional couple of Gamma-NiOOH -- Alpha-Ni(OH)(sub 2) into the modeling of the Ni electrode.
Sensitivity Analysis of Multiple Informant Models When Data are Not Missing at Random
Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae; Scaramella, Laura; Leve, Leslie; Reiss, David
2014-01-01
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups may be retained even if only one member of a group contributes data. Statistical inference is based on the assumption that data are missing completely at random or missing at random. Importantly, whether or not data are missing is assumed to be independent of the missing data. A saturated correlates model that incorporates correlates of the missingness or the missing data into an analysis and multiple imputation that may also use such correlates offer advantages over the standard implementation of SEM when data are not missing at random because these approaches may result in a data analysis problem for which the missingness is ignorable. This paper considers these approaches in an analysis of family data to assess the sensitivity of parameter estimates to assumptions about missing data, a strategy that may be easily implemented using SEM software. PMID:25221420
Computational models for the analysis of three-dimensional internal and exhaust plume flowfields
NASA Technical Reports Server (NTRS)
Dash, S. M.; Delguidice, P. D.
1977-01-01
This paper describes computational procedures developed for the analysis of three-dimensional supersonic ducted flows and multinozzle exhaust plume flowfields. The models/codes embodying these procedures cater to a broad spectrum of geometric situations via the use of multiple reference plane grid networks in several coordinate systems. Shock capturing techniques are employed to trace the propagation and interaction of multiple shock surfaces while the plume interface, separating the exhaust and external flows, and the plume external shock are discretely analyzed. The computational grid within the reference planes follows the trace of streamlines to facilitate the incorporation of finite-rate chemistry and viscous computational capabilities. Exhaust gas properties consist of combustion products in chemical equilibrium. The computational accuracy of the models/codes is assessed via comparisons with exact solutions, results of other codes and experimental data. Results are presented for the flows in two-dimensional convergent and divergent ducts, expansive and compressive corner flows, flow in a rectangular nozzle and the plume flowfields for exhausts issuing out of single and multiple rectangular nozzles.
XCAT/DRASIM: a realistic CT/human-model simulation package
NASA Astrophysics Data System (ADS)
Fung, George S. K.; Stierstorfer, Karl; Segars, W. Paul; Taguchi, Katsuyuki; Flohr, Thomas G.; Tsui, Benjamin M. W.
2011-03-01
The aim of this research is to develop a complete CT/human-model simulation package by integrating the 4D eXtended CArdiac-Torso (XCAT) phantom, a computer generated NURBS surface based phantom that provides a realistic model of human anatomy and respiratory and cardiac motions, and the DRASIM (Siemens Healthcare) CT-data simulation program. Unlike other CT simulation tools which are based on simple mathematical primitives or voxelized phantoms, this new simulation package has the advantages of utilizing a realistic model of human anatomy and physiological motions without voxelization and with accurate modeling of the characteristics of clinical Siemens CT systems. First, we incorporated the 4D XCAT anatomy and motion models into DRASIM by implementing a new library which consists of functions to read-in the NURBS surfaces of anatomical objects and their overlapping order and material properties in the XCAT phantom. Second, we incorporated an efficient ray-tracing algorithm for line integral calculation in DRASIM by computing the intersection points of the rays cast from the x-ray source to the detector elements through the NURBS surfaces of the multiple XCAT anatomical objects along the ray paths. Third, we evaluated the integrated simulation package by performing a number of sample simulations of multiple x-ray projections from different views followed by image reconstruction. The initial simulation results were found to be promising by qualitative evaluation. In conclusion, we have developed a unique CT/human-model simulation package which has great potential as a tool in the design and optimization of CT scanners, and the development of scanning protocols and image reconstruction methods for improving CT image quality and reducing radiation dose.
IMM estimator with out-of-sequence measurements
NASA Astrophysics Data System (ADS)
Bar-Shalom, Yaakov; Chen, Huimin
2004-08-01
In multisensor tracking systems that operate in a centralized information processing architecture, measurements from the same target obtained by different sensors can arrive at the processing center out of sequence. In order to avoid either a delay in the output or the need for reordering and reprocessing an entire sequence of measurements, such measurements have to be processed as out-of-sequence measurements (OOSM). Recent work developed procedures for incorporating OOSMs into a Kalman filter (KF). Since the state of the art tracker for real (maneuvering) targets is the Interacting Multiple Model (IMM) estimator, this paper presents the algorithm for incorporating OOSMs into an IMM estimator. Both data association and estimation are considered. Simulation results are presented for two realistic problems using measurements from two airborne GMTI sensors. It is shown that the proposed algorithm for incorporating OOSMs into an IMM estimator yields practically the same performance as the reordering and in-sequence reprocessing of the measurements.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freytag, Stefan, E-mail: stefan.freytag@ovgu.de; Feneberg, Martin; Berger, Christoph
2016-07-07
In{sub x}Ga{sub 1-x}N/GaN single and multi quantum well (MQW) structures with x ≈ 0.13 were investigated optically by photoreflectance, photoluminescence excitation spectroscopy, and luminescence. Clear evidence of unintentional indium incorporation into the nominal GaN barrier layers is found. The unintentional In content is found to be around 3%. Inhomogeneous distribution of In atoms occurs within the distinct quantum well (QW) layers, which is commonly described as statistical alloy fluctuation and leads to the characteristic S-shape temperature shift of the emission energy. Furthermore, differences in emission energy between the first and the other QWs of a MQW stack are found experimentally. This effect is discussed with the help of model calculations and is assigned to differences in the confining potential due to unwanted indium incorporation for the upper QWs.
Weston, Dale; Hauck, Katharina; Amlôt, Richard
2018-03-09
Given the importance of person to person transmission in the spread of infectious diseases, it is critically important to ensure that human behaviour with respect to infection prevention is appropriately represented within infectious disease models. This paper presents a large-scale scoping review regarding the incorporation of infection prevention behaviour in infectious disease models. The outcomes of this review are contextualised within the psychological literature concerning health behaviour and behaviour change, resulting in a series of key recommendations for the incorporation of human behaviour in future infectious disease models. The search strategy focused on terms relating to behaviour, infectious disease and mathematical modelling. The selection criteria were developed iteratively to focus on original research articles that present an infectious disease model with human-human spread, in which individuals' self-protective health behaviour varied endogenously within the model. Data extracted included: the behaviour that is modelled; how this behaviour is modelled; any theoretical background for the modelling of behaviour, and; any behavioural data used to parameterise the models. Forty-two papers from an initial total of 2987 were retained for inclusion in the final review. All of these papers were published between 2002 and 2015. Many of the included papers employed multiple, linked models to incorporate infection prevention behaviour. Both cognitive constructs (e.g., perceived risk) and, to a lesser extent, social constructs (e.g., social norms) were identified in the included papers. However, only five papers made explicit reference to psychological health behaviour change theories. Finally, just under half of the included papers incorporated behavioural data in their modelling. By contextualising the review outcomes within the psychological literature on health behaviour and behaviour change, three key recommendations for future behavioural modelling are made. First, modellers should consult with the psychological literature on health behaviour/behaviour change when developing new models. Second, modellers interested in exploring the relationship between behaviour and disease spread should draw on social psychological literature to increase the complexity of the social world represented within infectious disease models. Finally, greater use of context-specific behavioural data (e.g., survey data, observational data) is recommended to parameterise models.
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis
2016-04-01
There have been tremendous improvements in distributed hydrologic modeling (DHM) which have made process-based simulation with a high spatiotemporal resolution applicable on a large spatial scale. Despite increasing information on the heterogeneous properties of a catchment, DHM is still subject to uncertainties inherently coming from model structure, parameters and input forcing. Sequential data assimilation (DA) may facilitate improved streamflow prediction via DHM using real-time observations to correct internal model states. In conventional DA methods such as state updating, parametric uncertainty is, however, often ignored, mainly due to practical limitations of methodology to specify modeling uncertainty with limited ensemble members. If parametric uncertainty related to routing and runoff components is not incorporated properly, predictive uncertainty by DHM may be insufficient to capture the dynamics of observations, which may degrade predictability. Recently, a multi-scale parameter regionalization (MPR) method was proposed to make hydrologic predictions at different scales using the same set of model parameters without losing much of the model performance. The MPR method incorporated within the mesoscale hydrologic model (mHM, http://www.ufz.de/mhm) can effectively represent and control uncertainty of high-dimensional parameters in a distributed model using global parameters. In this study, we present a global multi-parametric ensemble approach to incorporate parametric uncertainty of DHM in DA to improve streamflow predictions. To effectively represent and control uncertainty of high-dimensional parameters with a limited number of ensemble members, the MPR method is incorporated with DA. Lagged particle filtering is utilized to consider the response times and non-Gaussian characteristics of internal hydrologic processes. Hindcasting experiments are implemented to evaluate impacts of the proposed DA method on streamflow predictions in multiple European river basins having different climate and catchment characteristics. Because augmentation of parameters is not required within an assimilation window, the approach remains stable with limited ensemble members and is viable for practical use.
Design of a High-Power White Light Source with Colloidal Quantum Dots and Non-Rare-Earth Phosphors
NASA Astrophysics Data System (ADS)
Bicanic, Kristopher T.
This thesis describes the design process of a high-power white light source using novel phosphor and colloidal quantum dot materials. To incorporate multiple light emitters, we generalized and extended a down-converting layer model. We employed a phosphor mixture comprising YAG:Ce and K2TiF6:Mn4+ powders to illustrate the effectiveness of the model. By incorporating experimental photophysical results from the phosphors and colloidal quantum dots, we modeled our system and chose the design suitable for high-power applications. We report a reduction in the correlated color temperature by 600 K for phosphor and quantum dot systems, enabling the creation of a warm white light emission at power densities up to 5 kW/cm2. Furthermore, at this high power, their emission achieves the digital cinema initiative (DCI) requirements with a luminous efficacy improvement of up to 32% over the stand-alone ceramic YAG:Ce phosphor.
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2006-01-01
A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis - Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2007-01-01
A framework is presented that enables coupled multiscale analysis of composite structures. The recently developed, free, Finite Element Analysis-Micromechanics Analysis Code (FEAMAC) software couples the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) with ABAQUS to perform micromechanics based FEA such that the nonlinear composite material response at each integration point is modeled at each increment by MAC/GMC. As a result, the stochastic nature of fiber breakage in composites can be simulated through incorporation of an appropriate damage and failure model that operates within MAC/GMC on the level of the fiber. Results are presented for the progressive failure analysis of a titanium matrix composite tensile specimen that illustrate the power and utility of the framework and address the techniques needed to model the statistical nature of the problem properly. In particular, it is shown that incorporating fiber strength randomness on multiple scales improves the quality of the simulation by enabling failure at locations other than those associated with structural level stress risers.
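One standard way to introduce fiber strength randomness of the kind described above is a two-parameter Weibull distribution with weakest-link length scaling; the sketch below is an assumption-laden illustration, not the MAC/GMC implementation:

```python
import numpy as np

# Weibull fiber strength sampling with weakest-link length scaling -- a
# common choice for fiber breakage models. All parameter values are assumed.
rng = np.random.default_rng(42)
m, sigma0, L0 = 10.0, 3500.0, 25.4   # Weibull modulus, scale (MPa), gauge length (mm)
L = 5.0                              # fiber length represented by one subcell (mm)

u = rng.random(100_000)
# Inverse-CDF sampling: P(strength < s) = 1 - exp(-(L/L0) * (s/sigma0)^m)
strengths = sigma0 * (L0 / L) ** (1.0 / m) * (-np.log1p(-u)) ** (1.0 / m)
# Assigning one strength draw per fiber subcell lets failure initiate away
# from structural stress risers, as the abstract describes.
```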
A new OLED SPICE model for pixel circuit simulation in OLED-on-silicon microdisplay design
NASA Astrophysics Data System (ADS)
Bohua, Zhao; Ran, Huang; Jianhui, Bu; Yinxue, Lü; Yiqi, Wang; Fei, Ma; Guohua, Xie; Zhensong, Zhang; Huan, Du; Jiajun, Luo; Zhengsheng, Han; Yi, Zhao
2012-07-01
A new equivalent circuit model of the organic light-emitting diode (OLED) is proposed. Because the single-diode model approximates OLED behavior as well as the multiple-diode model does, the new model is built on the single-diode form. To bring the experimental and simulated data into good agreement, the constant resistor is exchanged for an exponential resistor in the new model. Compared with the measured data and the results of the other two OLED SPICE models, the simulated I-V characteristics of the new model match the measured data much better. This new model can be directly incorporated into a SPICE circuit simulator and presents good accuracy over the whole operating voltage range.
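A hedged sketch of a single-diode model with a voltage-dependent series resistance is given below; the paper's exact functional form is not reproduced, so the exponential R(V) and all parameter values are assumptions:

```python
import math
from scipy.optimize import brentq

# Single-diode OLED sketch with an exponential series resistor replacing a
# constant one. The functional form of R(V) and all values are assumptions.
Is, n, Vt = 1e-12, 2.0, 0.02585   # saturation current (A), ideality, kT/q (V)
R0, a = 200.0, 0.4                # resistance prefactor (ohm), decay (1/V)

def oled_current(v):
    r = R0 * math.exp(-a * v)     # voltage-dependent series resistance
    # Implicit node equation: the diode current at the internal node must
    # equal the terminal current through the series resistor.
    f = lambda i: Is * math.expm1((v - i * r) / (n * Vt)) - i
    return brentq(f, 0.0, 1.0)    # bracketing root finder, robust here

iv = [(v, oled_current(v)) for v in (2.0, 3.0, 4.0, 5.0)]
```

Solving the implicit equation with a bracketing root finder keeps the sketch stable across the operating voltage range, which mirrors the accuracy goal the abstract states.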
NASA Technical Reports Server (NTRS)
Hall, Laverne
1995-01-01
Modeling of the Multi-mission Image Processing System (MIPS) will be described as an example of the use of a modeling tool to design a distributed system that supports multiple application scenarios. This paper examines: (a) modeling tool selection, capabilities, and operation (namely NETWORK 2.5 by CACI), (b) pointers for building or constructing a model and how the MIPS model was developed, (c) the importance of benchmarking or testing the performance of equipment/subsystems being considered for incorporation into the design/architecture, (d) the essential step of model validation and/or calibration using the benchmark results, (e) sample simulation results from the MIPS model, and (f) how modeling and simulation analysis informed and supported the MIPS design process.
Kruger, Jen; Pollard, Daniel; Basarir, Hasan; Thokala, Praveen; Cooke, Debbie; Clark, Marie; Bond, Rod; Heller, Simon; Brennan, Alan
2015-10-01
Health economic modeling has paid limited attention to the effects that patients' psychological characteristics have on the effectiveness of treatments. This case study tests 1) the feasibility of incorporating psychological prediction models of treatment response within an economic model of type 1 diabetes, 2) the potential value of providing treatment to a subgroup of patients, and 3) the cost-effectiveness of providing treatment to a subgroup of responders defined using 5 different algorithms. Multiple linear regressions were used to investigate relationships between patients' psychological characteristics and treatment effectiveness. Two psychological prediction models were integrated with a patient-level simulation model of type 1 diabetes. Expected value of individualized care analysis was undertaken. Five different algorithms were used to provide treatment to a subgroup of predicted responders. A cost-effectiveness analysis compared using the algorithms to providing treatment to all patients. The psychological prediction models had low predictive power for treatment effectiveness. Expected value of individualized care results suggested that targeting education at responders could be of value. The cost-effectiveness analysis suggested, for all 5 algorithms, that providing structured education to a subgroup of predicted responders would not be cost-effective. The psychological prediction models tested did not have sufficient predictive power to make targeting treatment cost-effective. The psychological prediction models are simple linear models of psychological behavior. Collection of data on additional covariates could potentially increase statistical power. By collecting data on psychological variables before an intervention, we can construct predictive models of treatment response to interventions. These predictive models can be incorporated into health economic models to investigate more complex service delivery and reimbursement strategies. © The Author(s) 2015.
Tracking of multiple targets using online learning for reference model adaptation.
Pernkopf, Franz
2008-12-01
Recently, much work has been done on multiple object tracking on the one hand and on reference model adaptation for single-object trackers on the other. In this paper, we do both: tracking of multiple objects (faces of people) in a meeting scenario, and online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Unlike our approach, many methods assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our implemented tracker based on particle filters and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking. To this end, we report the average of the standard deviation of the trajectories over numerous tracking runs depending on the learning rate.
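The two ingredients described above, a bootstrap particle filter and a slowly adapting reference model, can be sketched together as follows; the 1-D state, Gaussian likelihood, and learning rate are illustrative assumptions:

```python
import numpy as np

# Sketch of a bootstrap particle filter plus online reference-model update.
# The 1-D position state, likelihood, and learning rate are assumptions.
rng = np.random.default_rng(3)
N = 500
particles = rng.normal(0.0, 5.0, N)      # 1-D positions for brevity
ref_hist = np.ones(8) / 8                # reference appearance histogram

def track_step(particles, ref_hist, obs_pos, obs_hist, alpha=0.05):
    particles = particles + rng.normal(0.0, 1.0, N)     # propagate
    w = np.exp(-0.5 * (particles - obs_pos) ** 2)       # weight by likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                    # resample
    # Online learning: blend the stored reference model with the current
    # observation so the tracker follows gradual appearance changes.
    ref_hist = (1.0 - alpha) * ref_hist + alpha * obs_hist
    return particles[idx], ref_hist / ref_hist.sum()
```

Larger values of alpha adapt faster but risk drifting onto occluders, which is why the paper studies tracking variability as a function of the learning rate.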
Filtering Meteoroid Flights Using Multiple Unscented Kalman Filters
NASA Astrophysics Data System (ADS)
Sansom, E. K.; Bland, P. A.; Rutten, M. G.; Paxman, J.; Towner, M. C.
2016-11-01
Estimator algorithms are immensely versatile and powerful tools that can be applied to any problem where a dynamic system can be modeled by a set of equations and where observations are available. A well-designed estimator enables system states to be optimally predicted and errors to be rigorously quantified. Unscented Kalman filters (UKFs) and interactive multiple models can be found in methods from satellite tracking to self-driving cars. The luminous trajectory of the Bunburra Rockhole fireball was observed by the Desert Fireball Network in mid-2007. The recorded data set is used in this paper to examine the application of these two techniques as a viable approach to characterizing fireball dynamics. The nonlinear, single-body system of equations used to model meteoroid entry through the atmosphere is challenged by gross fragmentation events that may occur. The incorporation of the UKF within an interactive multiple model smoother handles possible fragmentation events while also providing a statistical analysis of the state uncertainties. A further advantage of this approach is its automatability for use within an image processing pipeline to facilitate large fireball data analyses and meteorite recoveries.
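A minimal sketch of the UKF idea on a meteoroid-like deceleration problem, using the filterpy library. The state vector, exponential-atmosphere drag law, and all constants below are illustrative assumptions, not the paper's calibrated single-body model.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

# State: [position along track (m), speed (m/s)].
def fx(x, dt, bc=5000.0, rho0=1.2e-2, H=7160.0):
    s, v = x
    rho = rho0 * np.exp(-s / H)          # toy density along the trajectory
    a = -rho * v**2 / bc                 # drag deceleration
    return np.array([s + v * dt, v + a * dt])

def hx(x):
    return x[:1]                         # cameras observe position only

points = MerweScaledSigmaPoints(n=2, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=0.05, fx=fx, hx=hx,
                            points=points)
ukf.x = np.array([0.0, 18000.0])         # initial position and entry speed
ukf.P = np.diag([100.0**2, 500.0**2])
ukf.R = np.array([[50.0**2]])            # observation noise (m^2)
ukf.Q = np.diag([1.0, 10.0])             # process noise

for z in [900.0, 1795.0, 2688.0]:        # synthetic position observations (m)
    ukf.predict()
    ukf.update(np.array([z]))
```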
NASA Technical Reports Server (NTRS)
Parker, L. Neergaard; Zank, G. P.
2013-01-01
Successful forecasting of energetic particle events in space weather models requires algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box. We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000), where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (E_max) appropriate for quasi-parallel and quasi-perpendicular shocks and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method
Roux, Benoît; Weare, Jonathan
2013-01-01
An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
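A minimal numerical sketch of the maximum-entropy reweighting idea for a single observable: the Lagrange multiplier lambda is tuned so that the reweighted ensemble average matches the experimental value. The prior ensemble and target value here are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
s = rng.normal(2.0, 1.0, size=10000)   # observable evaluated on prior ensemble
s_exp = 2.5                             # experimental target average

def constraint(lam):
    # maximum-entropy weights w_i proportional to exp(-lam * s_i)
    w = np.exp(-lam * (s - s.mean()))   # mean-shift for numerical stability
    w /= w.sum()
    return np.sum(w * s) - s_exp

lam = brentq(constraint, -10.0, 10.0)   # solve <s>_w = s_exp for lambda
w = np.exp(-lam * (s - s.mean())); w /= w.sum()
print(lam, np.sum(w * s))               # recovers the target average
```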
Method for improving accuracy in full evaporation headspace analysis.
Xie, Wei-Qi; Chai, Xin-Sheng
2017-05-01
We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on the measurement accuracy of conventional full evaporation headspace analysis. The results (using ethanol solution as the model sample) showed that the present technique effectively minimizes this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
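In multiple headspace extraction, successive peak areas from the same vial fall off geometrically, so the total analyte signal can be extrapolated from a few extractions. A minimal sketch of that extrapolation follows; the peak areas are invented for illustration.

```python
import numpy as np

# Peak areas from successive headspace extractions of the same vial.
A = np.array([1520.0, 1140.0, 857.0, 644.0])     # illustrative data

# In MHE theory, ln(A_i) is linear in the extraction index i, so the
# areas follow A_i = A_1 * beta**(i - 1) with 0 < beta < 1.
i = np.arange(1, len(A) + 1)
slope, intercept = np.polyfit(i, np.log(A), 1)
beta = np.exp(slope)

# Total area (proportional to total analyte) is the geometric-series sum.
A1 = np.exp(intercept + slope)                   # fitted first-extraction area
A_total = A1 / (1.0 - beta)
print(beta, A_total)
```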
MR Imaging in Monitoring and Predicting Treatment Response in Multiple Sclerosis.
Río, Jordi; Auger, Cristina; Rovira, Àlex
2017-05-01
MR imaging is the most sensitive tool for identifying lesions in patients with multiple sclerosis (MS). MR imaging has also acquired an essential role in the detection of complications arising from disease-modifying treatments and in the assessment and prediction of their efficacy. In the future, other radiological measures that have shown prognostic value may be incorporated within the models for predicting treatment response. This article examines the role of MR imaging as a prognostic tool in patients with MS and the recommendations that have been proposed in recent years to monitor patients who are treated with disease-modifying drugs. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Das, Anusheela; Chaudhury, Srabanti
2015-11-01
Metal nanoparticles are heterogeneous catalysts and have a multitude of non-equivalent, catalytic sites on the nanoparticle surface. The product dissociation step in such reaction schemes can follow multiple pathways. Proposed here for the first time is a completely analytical theoretical framework, based on the first passage time distribution, that incorporates the effect of heterogeneity in nanoparticle catalysis explicitly by considering multiple, non-equivalent catalytic sites on the nanoparticle surface. Our results show that in nanoparticle catalysis, the effect of dynamic disorder is manifested even at limiting substrate concentrations in contrast to an enzyme that has only one well-defined active site.
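A simple Monte Carlo sketch of the setting the abstract analyzes: each turnover binds one of several non-equivalent catalytic sites and the product can leave via two competing pathways, so the first-passage (waiting-time) distribution becomes multi-exponential. All rate constants are illustrative assumptions, not the paper's analytical framework.

```python
import numpy as np

rng = np.random.default_rng(2)
k_cat = np.array([0.5, 2.0, 8.0])   # turnover rates of non-equivalent sites
k_d1, k_d2 = 1.0, 4.0               # two competing product-dissociation paths

def first_passage_time():
    site = rng.integers(len(k_cat))               # substrate binds a random site
    t_react = rng.exponential(1.0 / k_cat[site])  # catalytic conversion
    t_dis = rng.exponential(1.0 / (k_d1 + k_d2))  # dissociation via either path
    return t_react + t_dis

times = np.array([first_passage_time() for _ in range(100000)])
# Heterogeneous sites make the waiting-time distribution multi-exponential,
# visible as curvature in the log-histogram of turnover times.
hist, edges = np.histogram(times, bins=100, density=True)
```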
Bennett, Catherine L; Burke, Sarah E; Burton, Hilary; Farndon, Peter A
2010-05-14
As advances in genetics are becoming increasingly relevant to mainstream healthcare, a major challenge is to ensure that these are integrated appropriately into mainstream medical services. In 2003, the Department of Health for England announced the availability of start-up funding for ten 'Mainstreaming Genetics' pilot services to develop models to achieve this. Multiple methods were used to explore the pilots' experiences of incorporating genetics which might inform the development of new services in the future. A workshop with project staff, an email questionnaire, interviews and a thematic analysis of pilot final reports were carried out. Seven themes relating to the integration of genetics into mainstream medical services were identified: planning services to incorporate genetics; the involvement of genetics departments; the establishment of roles incorporating genetic activities; identifying and involving stakeholders; the challenges of working across specialty boundaries; working with multiple healthcare organisations; and the importance of cultural awareness of genetic conditions. Pilots found that the planning phase often included the need to raise awareness of genetic conditions and services and that early consideration of organisational issues such as clinic location was essential. The formal involvement of genetics departments was crucial to success; benefits included provision of clinical and educational support for staff in new roles. Recruitment and retention for new roles outside usual career pathways sometimes proved difficult. Differences in specialties' working practices and working with multiple healthcare organisations also brought challenges such as the 'genetic approach' of working with families, incompatible record systems and different approaches to health professionals' autonomous practice. 'Practice points' have been collated into a Toolkit which includes resources from the pilots, including job descriptions and clinical tools. These can be customised for reuse by other services. Healthcare services need to translate advances in genetics into benefits for patients. Consideration of the issues presented here when incorporating genetics into mainstream medical services will help ensure that new service developments build on the body of experience gained by the pilots, to provide high quality services for patients with or at risk of genetic conditions.
Logistics system design for biomass-to-bioenergy industry with multiple types of feedstocks.
Zhu, Xiaoyan; Yao, Qingzhu
2011-12-01
It is technologically possible for a biorefinery to use a variety of biomass as feedstock including native perennial grasses (e.g., switchgrass) and agricultural residues (e.g., corn stalk and wheat straw). Incorporating the distinct characteristics of various types of biomass feedstocks and taking into account their interaction in supplying the bioenergy production, this paper proposed a multi-commodity network flow model to design the logistics system for a multiple-feedstock biomass-to-bioenergy industry. The model was formulated as a mixed integer linear programming, determining the locations of warehouses, the size of harvesting team, the types and amounts of biomass harvested/purchased, stored, and processed in each month, the transportation of biomass in the system, and so on. This paper demonstrated the advantages of using multiple types of biomass feedstocks by comparing with the case of using a single feedstock (switchgrass) and analyzed the relationship of the supply capacity of biomass feedstocks to the output and cost of biofuel. Copyright © 2011 Elsevier Ltd. All rights reserved.
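A toy mixed integer linear program in the spirit of the logistics design described above, written with the PuLP library: binary warehouse-opening decisions plus continuous feedstock flows. The instance (two feedstocks, two candidate warehouses, one refinery) and all numbers are illustrative assumptions.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

feeds = ["switchgrass", "corn_stalk"]
sites = ["W1", "W2"]
open_cost = {"W1": 500.0, "W2": 400.0}
ship_cost = {("switchgrass", "W1"): 8.0, ("switchgrass", "W2"): 11.0,
             ("corn_stalk", "W1"): 10.0, ("corn_stalk", "W2"): 7.0}
supply = {"switchgrass": 120.0, "corn_stalk": 90.0}   # tons available
cap = {"W1": 150.0, "W2": 150.0}                      # warehouse capacities
demand = 160.0                                        # tons at the refinery

prob = LpProblem("biomass_logistics", LpMinimize)
y = {s: LpVariable(f"open_{s}", cat=LpBinary) for s in sites}
x = {(f, s): LpVariable(f"flow_{f}_{s}", lowBound=0)
     for f in feeds for s in sites}

# minimize fixed opening costs plus shipping costs
prob += lpSum(open_cost[s] * y[s] for s in sites) + \
        lpSum(ship_cost[f, s] * x[f, s] for f in feeds for s in sites)
prob += lpSum(x[f, s] for f in feeds for s in sites) >= demand
for f in feeds:
    prob += lpSum(x[f, s] for s in sites) <= supply[f]   # feedstock supply
for s in sites:
    prob += lpSum(x[f, s] for f in feeds) <= cap[s] * y[s]  # use only open sites

prob.solve()
print({v.name: value(v) for v in prob.variables()})
```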
NASA Astrophysics Data System (ADS)
Chaplain, Mark A. J.; Powathil, Gibin G.
2015-04-01
Cancer is a complex, multiscale process involving interactions at intracellular, intercellular and tissue scales that are in turn susceptible to microenvironmental changes. Each individual cancer cell within a cancer cell mass is unique, with its own internal cellular pathways and biochemical interactions. These interactions contribute to the functional changes at the cellular and tissue scale, creating a heterogenous cancer cell population. Anticancer drugs are effective in controlling cancer growth by inflicting damage to various target molecules and thereby triggering multiple cellular and intracellular pathways, leading to cell death or cell-cycle arrest. One of the major impediments in the chemotherapy treatment of cancer is drug resistance driven by multiple mechanisms, including multi-drug and cell-cycle mediated resistance to chemotherapy drugs. In this article, we discuss two hybrid multiscale modelling approaches, incorporating multiple interactions involved in the sub-cellular, cellular and microenvironmental levels to study the effects of cell-cycle, phase-specific chemotherapy on the growth and progression of cancer cells.
Robertson, Suzanne L; Eisenberg, Marisa C; Tien, Joseph H
2013-01-01
Many factors influencing disease transmission vary throughout and across populations. For diseases spread through multiple transmission pathways, sources of variation may affect each transmission pathway differently. In this paper we consider a disease that can be spread via direct and indirect transmission, such as the waterborne disease cholera. Specifically, we consider a system of multiple patches with direct transmission occurring entirely within patch and indirect transmission via a single shared water source. We investigate the effect of heterogeneity in dual transmission pathways on the spread of the disease. We first present a 2-patch model for which we examine the effect of variation in each pathway separately and propose a measure of heterogeneity that incorporates both transmission mechanisms and is predictive of the basic reproduction number R0. We also explore how heterogeneity affects the final outbreak size and the efficacy of intervention measures. We conclude by extending several results to a more general n-patch setting.
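A minimal sketch of a 2-patch model with direct (person-to-person) and indirect (shared-water) transmission, integrated with SciPy. The structure follows the description above, but every parameter value is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta_I = np.array([0.3, 0.5])    # within-patch direct transmission rates
beta_W = np.array([0.2, 0.2])    # water-to-person transmission rates
gamma, xi, nu = 0.25, 0.1, 0.2   # recovery, shedding, pathogen decay

def rhs(t, y):
    S, I, W = y[0:2], y[2:4], y[4]
    force = beta_I * I + beta_W * W      # each patch sees the shared water
    dS = -force * S
    dI = force * S - gamma * I
    dW = xi * I.sum() - nu * W           # both patches shed into the water
    return np.concatenate([dS, dI, [dW]])

y0 = [0.99, 0.99, 0.01, 0.01, 0.0]       # S1, S2, I1, I2, W (fractions)
sol = solve_ivp(rhs, (0.0, 200.0), y0, dense_output=True)
final_size = 1.0 - sol.y[0:2, -1]        # approximate outbreak size per patch
print(final_size)
```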
Optimization of Airport Surface Traffic: A Case-Study of Incheon International Airport
NASA Technical Reports Server (NTRS)
Eun, Yeonju; Jeon, Daekeun; Lee, Hanbong; Jung, Yoon C.; Zhu, Zhifan; Jeong, Myeongsook; Kim, Hyounkong; Oh, Eunmi; Hong, Sungkwon
2017-01-01
Airport surface traffic optimization for Incheon International Airport (ICN) in South Korea was studied based on the operational characteristics of ICN and the airspace of Korea, with the aim of developing a controllers' decision support tool for departure and surface management. For surface traffic optimization, a multiple runway scheduling problem and a taxi scheduling problem were formulated as two Mixed Integer Linear Programming (MILP) optimization models. The Miles-In-Trail (MIT) separation constraint at the departure fix shared by departure flights from multiple runways and the runway crossing constraints due to the taxi route configuration specific to ICN were incorporated into the runway scheduling and taxiway scheduling problems, respectively. Since the MILP-based optimization model for the multiple runway scheduling problem may be computationally intensive, computation times and delay costs of different solving methods were compared for practical implementation. This research was a collaboration between the Korea Aerospace Research Institute (KARI) and the National Aeronautics and Space Administration (NASA).
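A stripped-down sketch of the MILP scheduling idea: departures must be spaced by a uniform separation at a shared fix, with binary variables encoding the sequence via big-M disjunctions. Ready times, the separation value, and the single-constraint structure are illustrative assumptions, far simpler than the paper's models.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

ready = {"F1": 0.0, "F2": 1.0, "F3": 2.0}   # earliest takeoff times (min)
sep = 3.0                                    # required spacing at the fix (min)
M = 1000.0                                   # big-M for ordering disjunctions
flights = list(ready)

prob = LpProblem("runway_schedule", LpMinimize)
t = {f: LpVariable(f"t_{f}", lowBound=ready[f]) for f in flights}
for i, a in enumerate(flights):
    for b in flights[i + 1:]:
        y = LpVariable(f"y_{a}_{b}", cat=LpBinary)   # 1 if a departs before b
        prob += t[b] - t[a] >= sep - M * (1 - y)
        prob += t[a] - t[b] >= sep - M * y

prob += lpSum(t[f] - ready[f] for f in flights)      # minimize total delay
prob.solve()
print({f: value(t[f]) for f in flights})
```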
Modeling, Materials, and Metrics: The Three-m Approach to FCS Signature Solutions
2002-05-07
calculations. These multiple levels will be incorporated into the MuSES software. The four levels are described as follows:
* Radiosity - Deterministic ... view-factor-based, all-diffuse solution. Very fast. Independent of user position.
* Directional Reflectivity - Radiosity with directional incident ... target and environment facets (view factor with BRDF). Last ray cast bounce = radiosity solution.
* Multi-bounce path trace - Rays traced from observer ...
ERIC Educational Resources Information Center
Xu, Na; Porter-Morgan, Holly; Doran, Nathan; Keller, Charles
2016-01-01
STEM (Science, Technology, Engineering, and Mathematics) education in the United States faces a host of problems including low recruitment and retention in STEM disciplines, under-representation of multiple segments of the US population, and a host of other issues. These problems are well recognized and a variety of solutions are being implemented…
Dynamic Data Driven Methods for Self-aware Aerospace Vehicles
2015-04-08
structural response model that incorporates multiple degradation or failure modes, including damaged panel strength (BVID, thru-hole), damaged panel stiffness (BVID, thru-hole), loose fastener, fretted fastener hole, and disbonded surface.
• A new data-driven approach for the online updating of the flight ... between the first and second plies. The panels were reinforced around the borders with through holes to simulate mounting the wing skins to ...
Externally Pressurized Journal Gas Bearings
NASA Technical Reports Server (NTRS)
Laub, John H.
1959-01-01
Externally pressurized gas-lubricated bearings with multiple orifice feed are investigated. An analytical treatment is developed for a semi-cylindrical bearing with 9 orifices and for a cylindrical journal bearing with 192 radial and 24 axial orifices. Experiments are described on models of the two bearing configurations with specially designed fixtures which incorporate pneumatic loading and means for determining pressure profiles, gas flow and gap height. The correlation between theory and experiment is satisfactory.
The development of a successful physician compensation plan.
Berkowitz, Steven M
2002-10-01
Physician compensation plans are critical to the success of a physician group, or may lead to its demise. Essential components of the development and implementation of a successful physician compensation plan include strategic planning, physician understanding and buy-in, appropriate incentives, objective performance measurement, and a specific funding source or mechanism. There are two basic philosophies to consider: the market-based model and the net economic contribution model. Advantages and disadvantages of each are discussed. Methods of incorporating these multiple aspects into a single plan are described.
Landguth, Erin L; Bearlin, Andrew; Day, Casey; Dunham, Jason B.
2016-01-01
1. Combining landscape demographic and genetics models offers powerful methods for addressing questions in eco-evolutionary applications.
2. Using two illustrative examples, we present Cost–Distance Meta-POPulation, a program to simulate changes in neutral and/or selection-driven genotypes through time as a function of individual-based movement, complex spatial population dynamics, and multiple and changing landscape drivers.
3. Cost–Distance Meta-POPulation provides a novel tool for questions in landscape genetics by incorporating population viability analysis, while linking directly to conservation applications.
Incorporation of Condensation Heat Transfer in a Flow Network Code
NASA Technical Reports Server (NTRS)
Anthony, Miranda; Majumdar, Alok
2002-01-01
Pure water is distilled from waste water in the International Space Station. The distillation assembly consists of an evaporator, a compressor and a condenser. Vapor is periodically purged from the condenser to avoid vapor accumulation. Purged vapor is condensed in a tube by coolant water prior to entering the purge pump. The paper presents a condensation model of purged vapor in a tube. This model is based on the Finite Volume Method. In the Finite Volume Method, the flow domain is discretized into multiple control volumes and a simultaneous analysis is performed.
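A minimal sketch of the Finite Volume Method mentioned above, applied to the simplest relevant case: steady 1-D conduction across a tube wall with fixed end temperatures, discretized into control volumes and solved simultaneously. The geometry and property values are illustrative assumptions, not the NASA condensation model.

```python
import numpy as np

N, L, k = 20, 1.0, 0.6            # control volumes, length (m), W/(m*K)
dx = L / N
T_left, T_right = 373.0, 293.0    # boundary temperatures (K)

A = np.zeros((N, N)); b = np.zeros(N)
for i in range(N):
    aW = aE = k / dx               # conductances to neighboring volumes
    if i == 0:
        # boundary face is half a cell away, so its conductance is doubled
        A[i, i] = aE + 2 * aW;  A[i, i + 1] = -aE;  b[i] = 2 * aW * T_left
    elif i == N - 1:
        A[i, i] = aW + 2 * aE;  A[i, i - 1] = -aW;  b[i] = 2 * aE * T_right
    else:
        A[i, i] = aW + aE;  A[i, i - 1] = -aW;  A[i, i + 1] = -aE

T = np.linalg.solve(A, b)          # simultaneous solve for cell temperatures
```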
A model for plant lighting system selection.
Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W
2002-01-01
A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
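A toy version of the Multiple Attribute Utility Theory selection step described above: each candidate system is scored on normalized attributes, expert-derived weights aggregate them additively, and the highest-utility system wins. The attributes, weights, and scores are illustrative assumptions.

```python
# All numbers below are invented for illustration.
weights = {"energy_cost": 0.4, "light_quality": 0.35, "capital_cost": 0.25}
systems = {
    "HPS":         {"energy_cost": 0.6, "light_quality": 0.7, "capital_cost": 0.8},
    "fluorescent": {"energy_cost": 0.5, "light_quality": 0.6, "capital_cost": 0.9},
    "LED":         {"energy_cost": 0.9, "light_quality": 0.8, "capital_cost": 0.4},
}

def utility(attrs):
    # additive utility; MAUT also admits multiplicative aggregation forms
    return sum(weights[a] * attrs[a] for a in weights)

best = max(systems, key=lambda s: utility(systems[s]))
print(best, {s: round(utility(v), 3) for s, v in systems.items()})
```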
Shinkins, Bethany; Yang, Yaling; Abel, Lucy; Fanshawe, Thomas R
2017-04-14
Evaluations of diagnostic tests are challenging because of the indirect nature of their impact on patient outcomes. Model-based health economic evaluations of tests allow different types of evidence from various sources to be incorporated and enable cost-effectiveness estimates to be made beyond the duration of available study data. To parameterize a health-economic model fully, all the ways a test impacts on patient health must be quantified, including but not limited to diagnostic test accuracy. We assessed all UK NIHR HTA reports published May 2009-July 2015. Reports were included if they evaluated a diagnostic test, included a model-based health economic evaluation and included a systematic review and meta-analysis of test accuracy. From each eligible report we extracted information on the following topics: 1) what evidence aside from test accuracy was searched for and synthesised, 2) which methods were used to synthesise test accuracy evidence and how did the results inform the economic model, 3) how/whether threshold effects were explored, 4) how the potential dependency between multiple tests in a pathway was accounted for, and 5) for evaluations of tests targeted at the primary care setting, how evidence from differing healthcare settings was incorporated. The bivariate or HSROC model was implemented in 20/22 reports that met all inclusion criteria. Test accuracy data for health economic modelling was obtained from meta-analyses completely in four reports, partially in fourteen reports and not at all in four reports. Only 2/7 reports that used a quantitative test gave clear threshold recommendations. All 22 reports explored the effect of uncertainty in accuracy parameters but most of those that used multiple tests did not allow for dependence between test results. 7/22 tests were potentially suitable for primary care but the majority found limited evidence on test accuracy in primary care settings. The uptake of appropriate meta-analysis methods for synthesising evidence on diagnostic test accuracy in UK NIHR HTAs has improved in recent years. Future research should focus on other evidence requirements for cost-effectiveness assessment, threshold effects for quantitative tests and the impact of multiple diagnostic tests.
Astrostatistical Analysis in Solar and Stellar Physics
NASA Astrophysics Data System (ADS)
Stenning, David Craig
This dissertation focuses on developing statistical models and methods to address data-analytic challenges in astrostatistics, a growing interdisciplinary field fostering collaborations between statisticians and astrophysicists. The astrostatistics projects we tackle can be divided into two main categories: modeling solar activity and Bayesian analysis of stellar evolution. These categories form Parts I and II of this dissertation, respectively. The first line of research we pursue involves classification and modeling of evolving solar features. Advances in space-based observatories are increasing both the quality and quantity of solar data, primarily in the form of high-resolution images. To analyze massive streams of solar image data, we develop a science-driven dimension reduction methodology to extract scientifically meaningful features from images. This methodology utilizes mathematical morphology to produce a concise numerical summary of the magnetic flux distribution in solar "active regions" that (i) is far easier to work with than the source images, (ii) encapsulates scientifically relevant information in a more informative manner than existing schemes (i.e., manual classification schemes), and (iii) is amenable to sophisticated statistical analyses. In a related line of research, we perform a Bayesian analysis of the solar cycle using multiple proxy variables, such as sunspot numbers. We take advantage of patterns and correlations among the proxy variables to model solar activity using data from proxies that have become available more recently, while also taking advantage of the long history of observations of sunspot numbers. This model is an extension of the Yu et al. (2012) Bayesian hierarchical model for the solar cycle that used the sunspot numbers alone. Since proxies have different temporal coverage, we devise a multiple imputation scheme to account for missing data. We find that incorporating multiple proxies reveals important features of the solar cycle that are missed when the model is fit using only the sunspot numbers. In Part II of this dissertation we focus on two related lines of research involving Bayesian analysis of stellar evolution. We first focus on modeling multiple stellar populations in star clusters. It has long been assumed that all star clusters are comprised of single stellar populations: stars that formed at roughly the same time from a common molecular cloud. However, recent studies have produced evidence that some clusters host multiple populations, which has far-reaching scientific implications. We develop a Bayesian hierarchical model for multiple-population star clusters, extending earlier statistical models of stellar evolution (e.g., van Dyk et al. 2009, Stein et al. 2013). We also devise an adaptive Markov chain Monte Carlo algorithm to explore the complex posterior distribution. We use numerical studies to demonstrate that our method can recover parameters of multiple-population clusters, and also show how model misspecification can be diagnosed. Our model and computational tools are incorporated into an open-source software suite known as BASE-9. We also explore statistical properties of the estimators and determine that the influence of the prior distribution does not diminish with larger sample sizes, leading to non-standard asymptotics. In a final line of research, we present the first-ever attempt to estimate the carbon fraction of white dwarfs.
This quantity has important implications for both astrophysics and fundamental nuclear physics, but is currently unknown. We use a numerical study to demonstrate that assuming an incorrect value for the carbon fraction leads to incorrect white-dwarf ages of star clusters. Finally, we present our attempt to estimate the carbon fraction of the white dwarfs in the well-studied star cluster 47 Tucanae.
Models of subjective response to in-flight motion data
NASA Technical Reports Server (NTRS)
Rudrapatna, A. N.; Jacobson, I. D.
1973-01-01
Mathematical relationships between subjective comfort and environmental variables in an air transportation system are investigated. As a first step in model building, only the motion variables are incorporated and sensitivities are obtained using stepwise multiple regression analysis. The data for these models have been collected from commercial passenger flights. Two models are considered. In the first, subjective comfort is assumed to depend on rms values of the six-degrees-of-freedom accelerations. The second assumes a Rustenburg type human response function in obtaining frequency weighted rms accelerations, which are used in a linear model. The form of the human response function is examined and the results yield a human response weighting function for different degrees of freedom.
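A minimal sketch of the first model type: subjective comfort regressed on rms accelerations in the six degrees of freedom via ordinary least squares. The data here are synthetic stand-ins for the flight recordings, and the coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
# rms accelerations for the six degrees of freedom (synthetic stand-in)
X = rng.gamma(2.0, 0.05, size=(n, 6))
true_b = np.array([8.0, 6.0, 12.0, 3.0, 2.0, 4.0])   # assumed sensitivities
comfort = 1.5 + X @ true_b + rng.normal(0.0, 0.3, size=n)

# Linear model: comfort ~ intercept + six rms-acceleration terms
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, comfort, rcond=None)
print(coef)   # intercept followed by one sensitivity per degree of freedom
```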
Tchetgen Tchetgen, Eric
2011-03-01
This article considers the detection and evaluation of genetic effects incorporating gene-environment interaction and independence. Whereas ordinary logistic regression cannot exploit the assumption of gene-environment independence, the proposed approach makes explicit use of the independence assumption to improve estimation efficiency. This method, which uses both cases and controls, fits a constrained retrospective regression in which the genetic variant plays the role of the response variable, and the disease indicator and the environmental exposure are the independent variables. The regression model constrains the association of the environmental exposure with the genetic variant among the controls to be null, thus explicitly encoding the gene-environment independence assumption, which yields substantial gain in accuracy in the evaluation of genetic effects. The proposed retrospective regression approach has several advantages. It is easy to implement with standard software, and it readily accounts for multiple environmental exposures of a polytomous or of a continuous nature, while easily incorporating extraneous covariates. Unlike the profile likelihood approach of Chatterjee and Carroll (Biometrika. 2005;92:399-418), the proposed method does not require a model for the association of a polytomous or continuous exposure with the disease outcome, and, therefore, it is agnostic to the functional form of such a model and completely robust to its possible misspecification.
An integrated modelling framework for neural circuits with multiple neuromodulators.
Joshi, Alok; Youssofzadeh, Vahab; Vemana, Vinith; McGinnity, T M; Prasad, Girijesh; Wong-Lin, KongFatt
2017-01-01
Neuromodulators are endogenous neurochemicals that regulate biophysical and biochemical processes, which control brain function and behaviour, and are often the targets of neuropharmacological drugs. Neuromodulator effects are generally complex, partly owing to the involvement of broad innervation, co-release of neuromodulators, complex intra- and extrasynaptic mechanisms, the existence of multiple receptor subtypes, and high interconnectivity within the brain. In this work, we propose an efficient yet sufficiently realistic computational neural modelling framework to study some of these complex behaviours. Specifically, we propose a novel dynamical neural circuit model that integrates the effective neuromodulator-induced currents based on various experimental data (e.g. electrophysiology, neuropharmacology and voltammetry). The model can incorporate multiple interacting brain regions, including neuromodulator sources, simulates efficiently, and is easily extendable to large-scale brain models, e.g. for neuroimaging purposes. As an example, we model a network of mutually interacting neural populations in the lateral hypothalamus, dorsal raphe nucleus and locus coeruleus, which are major sources of the neuromodulators orexin/hypocretin, serotonin and norepinephrine/noradrenaline, respectively, and which play significant roles in regulating many physiological functions. We demonstrate that such a model can provide predictions of systemic drug effects of the popular antidepressants (e.g. reuptake inhibitors), neuromodulator antagonists or their combinations. Finally, we developed user-friendly graphical user interface software for model simulation and visualization for both fundamental sciences and pharmacological studies. © 2017 The Authors.
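A toy firing-rate sketch of three mutually interacting populations (LH, DRN, LC) with a drug term added to one population's drive. The coupling matrix, gains, time constants, and drug effect are all illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

labels = ["LH", "DRN", "LC"]
# W[i, j] = influence of population j on population i (invented values)
W = np.array([[0.0,  0.2,  0.3],
              [0.4,  0.0, -0.2],
              [0.3, -0.1,  0.0]])
tau = np.array([0.1, 0.2, 0.15])    # population time constants (s)
I_ext = np.array([0.5, 0.2, 0.2])   # baseline drives

def phi(x):                         # saturating rate function
    return np.maximum(0.0, np.tanh(x))

r = np.zeros(3); dt = 1e-3
for step in range(int(5.0 / dt)):
    # crude stand-in for a reuptake inhibitor boosting the DRN drive at t > 2 s
    drug = 0.3 if step * dt > 2.0 else 0.0
    drive = W @ r + I_ext + np.array([0.0, drug, 0.0])
    r += dt * (-r + phi(drive)) / tau   # Euler step of dr/dt = (-r + phi)/tau
print(dict(zip(labels, np.round(r, 3))))
```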
Analysis of bacterial migration. 2: Studies with multiple attractant gradients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strauss, I.; Frymier, P.D.; Hahn, C.M.
1995-02-01
Many motile bacteria exhibit chemotaxis, the ability to bias their random motion toward or away from increasing concentrations of chemical substances which benefit or inhibit their survival, respectively. Since bacteria encounter numerous chemical concentration gradients simultaneously in natural surroundings, it is necessary to know quantitatively how a bacterial population responds in the presence of more than one chemical stimulus to develop predictive mathematical models describing bacterial migration in natural systems. This work evaluates three hypothetical models describing the integration of chemical signals from multiple stimuli: high sensitivity, maximum signal, and simple additivity. An expression for the tumbling probability for individual stimuli is modified according to the proposed models and incorporated into the cell balance equation for a 1-D attractant gradient. Random motility and chemotactic sensitivity coefficients, required input parameters for the model, are measured for single stimulus responses. Theoretical predictions with the three signal integration models are compared to the net chemotactic response of Escherichia coli to co- and antidirectional gradients of D-fucose and α-methylaspartate in the stopped-flow diffusion chamber assay. Results eliminate the high-sensitivity model and favor simple additivity over the maximum signal. None of the simple models, however, accurately predicts the observed behavior, suggesting a more complex model with more steps in the signal processing mechanism is required to predict responses to multiple stimuli.
A minimal model for multiple epidemics and immunity spreading.
Sneppen, Kim; Trusina, Ala; Jensen, Mogens H; Bornholdt, Stefan
2010-10-18
Pathogens and parasites are ubiquitous in the living world, being limited only by availability of suitable hosts. The ability to transmit a particular disease depends on competing infections as well as on the status of host immunity. Multiple diseases compete for the same resource and their fates are coupled to each other. Such couplings have many facets, for example cross-immunization between related influenza strains, mutual inhibition by killing the host, or possibly even a mutual catalytic effect if host immunity is impaired. We here introduce a minimal model for an unlimited number of unrelated pathogens whose interaction is simplified to simple mutual exclusion. The model incorporates an ongoing development of host immunity to past diseases, while leaving the system open for the emergence of new diseases. The model exhibits a rich dynamical behavior with interacting infection waves, leaving broad trails of immunization in the host population. The resulting immunization pattern depends only on the system size and on the mutation rate that initiates new diseases.
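A minimal agent-based sketch of the model's ingredients: hosts on a ring, mutual exclusion (one infection per host), lifelong immunity to past diseases, and new diseases seeded at a mutation rate. The lattice choice and all rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T, mu = 200, 20000, 0.002        # hosts on a ring, update steps, mutation rate
state = -np.ones(N, dtype=int)      # -1 = healthy, d >= 0 = infected with disease d
immune = [set() for _ in range(N)]  # lifelong immunity to past diseases
next_disease = 0

for t in range(T):
    i = rng.integers(N)
    if state[i] >= 0:
        d = state[i]
        j = (i + rng.choice([-1, 1])) % N        # a ring neighbor
        # mutual exclusion: only healthy, non-immune hosts can be infected
        if state[j] < 0 and d not in immune[j]:
            state[j] = d
        if rng.random() < 0.1:                   # recovery grants immunity
            immune[i].add(d); state[i] = -1
    elif rng.random() < mu:                      # a new disease emerges here
        state[i] = next_disease; next_disease += 1

print(next_disease, np.mean([len(s) for s in immune]))  # diseases seen; mean immunity
```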
DeepFix: A Fully Convolutional Neural Network for Predicting Human Eye Fixations.
Kruthiventi, Srinivas S S; Ayush, Kumar; Babu, R Venkatesh
2017-09-01
Understanding and predicting the human visual attention mechanism is an active area of research in the fields of neuroscience and computer vision. In this paper, we propose DeepFix, a fully convolutional neural network, which models the bottom-up mechanism of visual attention via saliency prediction. Unlike classical works, which characterize the saliency map using various hand-crafted features, our model automatically learns features in a hierarchical fashion and predicts the saliency map in an end-to-end manner. DeepFix is designed to capture semantics at multiple scales while taking global context into account, by using network layers with very large receptive fields. Generally, fully convolutional nets are spatially invariant, which prevents them from modeling location-dependent patterns (e.g., centre-bias). Our network handles this by incorporating a novel location-biased convolutional layer. We evaluate our model on multiple challenging saliency data sets and show that it achieves state-of-the-art results.
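One plausible realization of a location-biased convolution, sketched in PyTorch: fixed normalized coordinate maps are concatenated to the input so the learned filters can express location-dependent patterns such as centre-bias. This is a hedged sketch in the spirit of the layer described above, not DeepFix's exact formulation.

```python
import torch
import torch.nn as nn

class LocationBiasedConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # two extra input channels carry the (y, x) coordinate maps
        self.conv = nn.Conv2d(in_ch + 2, out_ch, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([gy, gx]).expand(b, 2, h, w)
        return self.conv(torch.cat([x, coords], dim=1))

feat = torch.randn(1, 64, 32, 32)        # features from earlier layers
sal = LocationBiasedConv(64, 1)(feat)    # location-aware saliency logits
```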
ERIC Educational Resources Information Center
Nelson, Lin M.
A project was undertaken to increase retention in a health education telecourse by incorporating a competency-based orientation to distance learning and learner-centered instructional strategies into the telecourse, and by using multiple media for content delivery and interaction. A general orientation to distance learning was developed that…
Optimal Multiple Surface Segmentation With Shape and Context Priors
Bai, Junjie; Garvin, Mona K.; Sonka, Milan; Buatti, John M.; Wu, Xiaodong
2014-01-01
Segmentation of multiple surfaces in medical images is a challenging problem, further complicated by the frequent presence of weak boundary evidence, large object deformations, and mutual influence between adjacent objects. This paper reports a novel approach to multi-object segmentation that incorporates both shape and context prior knowledge in a 3-D graph-theoretic framework to help overcome the stated challenges. We employ an arc-based graph representation to incorporate a wide spectrum of prior information through pair-wise energy terms. In particular, a shape-prior term is used to penalize local shape changes and a context-prior term is used to penalize local surface-distance changes from a model of the expected shape and surface distances, respectively. The globally optimal solution for multiple surfaces is obtained by computing a maximum flow in low-order polynomial time. The proposed method was validated on intraretinal layer segmentation of optical coherence tomography images and demonstrated statistically significant improvement of segmentation accuracy compared to our earlier graph-search method that did not utilize shape and context priors. The mean unsigned surface positioning error obtained by the conventional graph-search approach (6.30 ± 1.58 μm) was improved to 5.14 ± 0.99 μm when employing our new method with shape and context priors. PMID:23193309
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmadi, Rouhollah, E-mail: rouhollahahmadi@yahoo.com; Khamehchi, Ehsan
Conditioning stochastic simulations is very important in many geostatistical applications that call for the introduction of nonlinear and multiple-point data in reservoir modeling. Here, a new methodology is proposed for the incorporation of different data types into multiple-point statistics (MPS) simulation frameworks. Unlike the previous techniques that call for an approximate forward model (filter) for integration of secondary data into geologically constructed models, the proposed approach develops an intermediate space onto which all the primary and secondary data are easily mapped. Definition of the intermediate space, as may be achieved via application of artificial intelligence tools like neural networks and fuzzy inference systems, eliminates the need for using filters as in previous techniques. The applicability of the proposed approach in conditioning MPS simulations to static and geologic data is verified by modeling a real example of discrete fracture networks using conventional well-log data. The training patterns are well reproduced in the realizations, while the model is also consistent with the map of secondary data.
Arab, A.; Wildhaber, M.L.; Wikle, C.K.; Gentry, C.N.
2008-01-01
Fisheries studies often employ multiple gears that result in large percentages of zero values. We considered a zero-inflated Poisson (ZIP) model with random effects to address these excessive zeros. By employing a Bayesian ZIP model that simultaneously incorporates data from multiple gears to analyze data from the Missouri River, we were able to compare gears and make more year, segment, and macrohabitat comparisons than did the original data analysis. For channel catfish Ictalurus punctatus, our results rank (highest to lowest) the mean catch per unit area (CPUA) for gears (beach seine, benthic trawl, electrofishing, and drifting trammel net); years (1998 and 1997); macrohabitats (tributary mouth, connected secondary channel, nonconnected secondary channel, and bend); and river segment zones (channelized, inter-reservoir, and least-altered). For shovelnose sturgeon Scaphirhynchus platorynchus, the mean CPUA was significantly higher for benthic trawls and drifting trammel nets; 1998 and 1997; tributary mouths, bends, and connected secondary channels; and some channelized or least-altered inter-reservoir segments. One important advantage of our approach is the ability to reliably infer patterns of relative abundance by means of multiple gears without using gear efficiencies. © Copyright by the American Fisheries Society 2008.
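A minimal sketch of the zero-inflated Poisson idea: zeros arise either structurally (the gear/site yields nothing) or by Poisson chance, so P(0) = pi + (1 - pi)e^(-lambda). The mixing probability and mean below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(5)
pi, lam = 0.6, 2.5          # excess-zero probability and Poisson mean

def zip_pmf(y, pi, lam):
    # counts are Poisson; zeros mix structural and Poisson zeros
    p = (1 - pi) * poisson.pmf(y, lam)
    return np.where(y == 0, pi + p, p)

# Simulated catch data from one gear: structural zeros plus Poisson counts
structural = rng.random(1000) < pi
counts = np.where(structural, 0, rng.poisson(lam, size=1000))
print(np.mean(counts == 0), zip_pmf(np.array([0, 1, 2]), pi, lam))
```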
A minimal kinetic model for a viral DNA packaging machine.
Yang, Qin; Catalano, Carlos Enrique
2004-01-20
Terminase enzymes are common to both eukaryotic and prokaryotic double-stranded DNA viruses. These enzymes possess ATPase and nuclease activities that work in concert to "package" a viral genome into an empty procapsid, and it is likely that terminase enzymes from disparate viruses utilize a common packaging mechanism. Bacteriophage lambda terminase possesses a site-specific nuclease activity, a so-called helicase activity, a DNA translocase activity, and multiple ATPase catalytic sites that function to package viral DNA. Allosteric interactions between the multiple catalytic sites have been reported. This study probes these catalytic interactions using enzyme kinetic, photoaffinity labeling, and vanadate inhibition studies. The ensemble of data forms the basis for a minimal kinetic model for lambda terminase. The model incorporates an ADP-driven conformational reorganization of the terminase subunits assembled on viral DNA, which is central to the activation of a catalytically competent packaging machine. The proposed model provides a unifying mechanism for allosteric interaction between the multiple catalytic sites of the holoenzyme and explains much of the kinetic data in the literature. Given that similar packaging mechanisms have been proposed for viruses as dissimilar as lambda and the herpes viruses, the model may find general utility in our global understanding of the enzymology of virus assembly.
Koerner, Tess K; Zhang, Yang
2017-02-27
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate both the advantages of mixed-effects models and the necessity of applying them to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
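A minimal sketch of the LME approach using statsmodels: a random intercept per subject absorbs between-subject baseline differences that a plain Pearson correlation across pooled conditions would ignore. The data frame, variable names, and effect sizes are synthetic assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
subjects, conditions = 20, 3
df = pd.DataFrame({
    "subject": np.repeat(np.arange(subjects), conditions),
    "condition": np.tile(["quiet", "low_noise", "high_noise"], subjects),
    "neural": rng.normal(0.0, 1.0, subjects * conditions),
})
# synthetic speech-in-noise scores with a per-subject baseline shift
baseline = rng.normal(0.0, 1.0, subjects)
df["behavior"] = 0.8 * df["neural"] + baseline[df["subject"]] + \
                 rng.normal(0.0, 0.5, len(df))

# Fixed effects: neural measure and listening condition;
# random effect: per-subject intercept (repeated measures)
model = smf.mixedlm("behavior ~ neural + condition", df,
                    groups=df["subject"]).fit()
print(model.summary())
```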
A microcontroller-based simulation of dural venous sinus injury for neurosurgical training.
Cleary, Daniel R; Siler, Dominic A; Whitney, Nathaniel; Selden, Nathan R
2018-05-01
OBJECTIVE Surgical simulation has the potential to supplement and enhance traditional resident training. However, the high cost of equipment and limited number of available scenarios have inhibited wider integration of simulation in neurosurgical education. In this study the authors provide initial validation of a novel, low-cost simulation platform that recreates the stress of surgery using a combination of hands-on, model-based, and computer elements. Trainee skill was quantified using multiple time and performance measures. The simulation was initially validated using trainees at the start of their intern year. METHODS The simulation recreates intraoperative superior sagittal sinus injury complicated by air embolism. The simulator model consists of 2 components: a reusable base and a disposable craniotomy pack. The simulator software is flexible and modular to allow adjustments in difficulty or the creation of entirely new clinical scenarios. The reusable simulator base incorporates a powerful microcomputer and multiple sensors and actuators to provide continuous feedback to the software controller, which in turn adjusts both the screen output and physical elements of the model. The disposable craniotomy pack incorporates 3D-printed sections of model skull and brain, as well as artificial dura that incorporates a model sagittal sinus. RESULTS Twelve participants at the 2015 Western Region Society of Neurological Surgeons postgraduate year 1 resident course ("boot camp") provided informed consent and enrolled in a study testing the prototype device. Each trainee was required to successfully create a bilateral parasagittal craniotomy, repair a dural sinus tear, and recognize and correct an air embolus. Participant stress was measured using a heart rate wrist monitor. After participation, each resident completed a 13-question categorical survey. CONCLUSIONS All trainee participants experienced tachycardia during the simulation, although the point in the simulation at which they experienced tachycardia varied. Survey results indicated that participants agreed the simulation was realistic, created stress, and was a useful tool in training neurosurgical residents. This simulator represents a novel, low-cost approach for hands-on training that effectively teaches and tests residents without risk of patient injury.
Identification of Boolean Network Models From Time Series Data Incorporating Prior Knowledge.
Leifeld, Thomas; Zhang, Zhihua; Zhang, Ping
2018-01-01
Motivation: Mathematical models take an important place in science and engineering. A model can help scientists to explain the dynamic behavior of a system and to understand the functionality of system components. Since the length of a time series and the number of replicates are limited by the cost of experiments, Boolean networks, as a structurally simple and parameter-free logical model for gene regulatory networks, have attracted the interest of many scientists. In order to fit into the biological context and to lower the data requirements, biological prior knowledge is taken into consideration during the inference procedure. In the literature, the existing identification approaches can only deal with a subset of possible types of prior knowledge. Results: We propose a new approach to identify Boolean networks from time series data incorporating prior knowledge, such as partial network structure, canalizing property, and positive and negative unateness. Using the vector form of Boolean variables and applying a generalized matrix multiplication called the semi-tensor product (STP), each Boolean function can be equivalently converted into a matrix expression. Based on this, the identification problem is reformulated as an integer linear programming problem to reveal the system matrix of the Boolean model in a computationally efficient way, whose dynamics are consistent with the important dynamics captured in the data. By using prior knowledge, the number of candidate functions can be reduced during the inference. Hence, identification incorporating prior knowledge is especially suitable for the case of small-size time series data and data without sufficient stimuli. The proposed approach is illustrated with the help of a biological model of the network of oxidative stress response. Conclusions: The combination of an efficient reformulation of the identification problem with the possibility to incorporate various types of prior knowledge enables the application of computational model inference to systems with a limited amount of time series data. The general applicability of this methodological approach makes it suitable for a variety of biological systems and of general interest for biological and medical research.
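A brute-force toy version of the identification idea for a single gene with two regulators: enumerate all candidate truth tables, keep those consistent with the observed transitions, and use an unateness prior to pin down rows the data never visit. The transitions and the specific prior are illustrative assumptions; the paper's STP/ILP formulation scales this idea up.

```python
from itertools import product

# Observed transitions (x1, x2) -> next state; the input (0, 0) never occurs,
# mimicking time series data without sufficient stimuli.
transitions = [((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]

def positively_unate_in_x2(table):
    # prior knowledge: regulator x2 is an activator, so flipping x2 from
    # 0 to 1 must never decrease the output
    return all(table[(x1, 0)] <= table[(x1, 1)] for x1 in (0, 1))

candidates = []
for outputs in product((0, 1), repeat=4):        # all 16 Boolean functions
    table = dict(zip(product((0, 1), repeat=2), outputs))
    if all(table[inp] == out for inp, out in transitions) \
            and positively_unate_in_x2(table):
        candidates.append(table)

# The unateness prior forces the unobserved row (0, 0) to 0, leaving a
# single consistent function despite the incomplete data.
print(candidates)
```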
75 FR 17737 - Industrial Economics, Incorporated; Transfer of Data
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-07
... ENVIRONMENTAL PROTECTION AGENCY [EPA-HQ-OPP-2010-0194; FRL-8820-2] Industrial Economics... (CBI) by the submitter, will be transferred to Industrial Economics, Incorporated in accordance with 40 CFR 2.307(h)(3) and 2.308(i)(2). Industrial Economics, Incorporated has been awarded multiple...
Design Approaches to Support Preservice Teachers in Scientific Modeling
NASA Astrophysics Data System (ADS)
Kenyon, Lisa; Davis, Elizabeth A.; Hug, Barbara
2011-02-01
Engaging children in scientific practices is hard for beginning teachers. One such scientific practice with which beginning teachers may have limited experience is scientific modeling. We have iteratively designed preservice teacher learning experiences and materials intended to help teachers achieve learning goals associated with scientific modeling. Our work has taken place across multiple years at three university sites, with preservice teachers focused on early childhood, elementary, and middle school teaching. Based on results from our empirical studies supporting these design decisions, we discuss design features of our modeling instruction in each iteration. Our results suggest some successes in supporting preservice teachers in engaging students in modeling practice. We propose design principles that can guide science teacher educators in incorporating modeling in teacher education.
Multi-focused geospatial analysis using probes.
Butkiewicz, Thomas; Dou, Wenwen; Wartell, Zachary; Ribarsky, William; Chang, Remco
2008-01-01
Traditional geospatial information visualizations often present views that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in for local inspection, spatial awareness and comparison between regions become limited. In our model, coordinated visualizations are integrated within individual probe interfaces, which depict the local data in user-defined regions-of-interest. Our probe concept can be incorporated into a variety of geospatial visualizations to empower users with the ability to observe, coordinate, and compare data across multiple local regions. It is especially useful when dealing with complex simulations or analyses where behavior in various localities differs from other localities and from the system as a whole. We illustrate the effectiveness of our technique over traditional interfaces by incorporating it within three existing geospatial visualization systems: an agent-based social simulation, a census data exploration tool, and a 3D GIS environment for analyzing urban change over time. In each case, the probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users.
Liang, H; Shi, B C; Guo, Z L; Chai, Z H
2014-05-01
In this paper, a phase-field-based multiple-relaxation-time lattice Boltzmann (LB) model is proposed for incompressible multiphase flow systems. In this model, one distribution function is used to solve the Cahn-Hilliard equation and the other is adopted to solve the Navier-Stokes equations. Unlike previous phase-field-based LB models, a proper source term is incorporated in the interfacial evolution equation such that the Cahn-Hilliard equation can be derived exactly, and a pressure distribution is also designed to recover the correct hydrodynamic equations. Furthermore, the pressure and velocity fields can be calculated explicitly. A series of numerical tests, including Zalesak's disk rotation, a single vortex, a deformation field, and a static droplet, have been performed to test the accuracy and stability of the present model. The results show that, compared with the previous models, the present model is more stable and achieves an overall improvement in the accuracy of capturing the interface. In addition, compared to the single-relaxation-time LB model, the present model can effectively reduce the spurious velocity and fluctuation of the kinetic energy. Finally, as an application, the Rayleigh-Taylor instability at high Reynolds numbers is investigated.
NASA Technical Reports Server (NTRS)
Mcdougal, David S. (Editor)
1990-01-01
FIRE (First ISCCP Regional Experiment) is a U.S. cloud-radiation research program formed in 1984 to increase the basic understanding of cirrus and marine stratocumulus cloud systems, to develop realistic parameterizations for these systems, and to validate and improve ISCCP cloud product retrievals. Presentations of results culminating the first 5 years of FIRE research activities were highlighted. The 1986 Cirrus Intensive Field Observations (IFO), the 1987 Marine Stratocumulus IFO, the Extended Time Observations (ETO), and modeling activities are described. Collaborative efforts involving the comparison of multiple data sets, incorporation of data measurements into modeling activities, validation of ISCCP cloud parameters, and development of parameterization schemes for General Circulation Models (GCMs) are described.
F100(3) parallel compressor computer code and user's manual
NASA Technical Reports Server (NTRS)
Mazzawy, R. S.; Fulkerson, D. A.; Haddad, D. E.; Clark, T. A.
1978-01-01
The Pratt & Whitney Aircraft multiple segment parallel compressor model has been modified to include the influence of variable compressor vane geometry on the sensitivity to circumferential flow distortion. Further, performance characteristics of the F100 (3) compression system have been incorporated into the model on a blade row basis. In this modified form, the distortion's circumferential location is referenced relative to the variable vane controlling sensors of the F100 (3) engine so that the proper solution can be obtained regardless of distortion orientation. This feature is particularly important for the analysis of inlet temperature distortion. Compatibility with fixed geometry compressor applications has been maintained in the model.
Modeling the Impact of Motivation, Personality, and Emotion on Social Behavior
NASA Astrophysics Data System (ADS)
Miller, Lynn C.; Read, Stephen J.; Zachary, Wayne; Rosoff, Andrew
Models seeking to predict human social behavior must contend with multiple sources of individual and group variability that underlie social behavior. One set of interrelated factors that strongly contribute to that variability - motivations, personality, and emotions - has been only minimally incorporated in previous computational models of social behavior. The Personality, Affect, Culture (PAC) framework is a theory-based computational model that addresses this gap. PAC is used to simulate social agents whose social behavior varies according to their personalities and emotions, which, in turn, vary according to their motivations and underlying motive control parameters. Examples involving disease spread and counter-insurgency operations show how PAC can be used to study behavioral variability in different social contexts.
Walsh, Daniel P.; Norton, Andrew S.; Storm, Daniel J.; Van Deelen, Timothy R.; Heisy, Dennis M.
2018-01-01
Implicit and explicit use of expert knowledge to inform ecological analyses is becoming increasingly common because it often represents the sole source of information in many circumstances. Thus, there is a need to develop statistical methods that explicitly incorporate expert knowledge, and can successfully leverage this information while properly accounting for associated uncertainty during analysis. Studies of cause-specific mortality provide an example of implicit use of expert knowledge when causes-of-death are uncertain and assigned based on the observer's knowledge of the most likely cause. To explicitly incorporate this use of expert knowledge and the associated uncertainty, we developed a statistical model for estimating cause-specific mortality using a data augmentation approach within a Bayesian hierarchical framework. Specifically, for each mortality event, we elicited the observer's belief of cause-of-death by having them specify the probability that the death was due to each potential cause. These probabilities were then used as prior predictive values within our framework. This hierarchical framework permitted a simple and rigorous estimation method that was easily modified to include covariate effects and regularizing terms. Although applied to survival analysis, this method can be extended to any event-time analysis with multiple event types, for which there is uncertainty regarding the true outcome. We conducted simulations to determine how our framework compared to traditional approaches that use expert knowledge implicitly and assume that cause-of-death is specified accurately. Simulation results supported the inclusion of observer uncertainty in cause-of-death assignment in modeling of cause-specific mortality to improve model performance and inference. Finally, we applied the statistical model we developed and a traditional method to cause-specific survival data for white-tailed deer, and compared results. We demonstrate that model selection results changed between the two approaches, and incorporating observer knowledge in cause-of-death increased the variability associated with parameter estimates when compared to the traditional approach. These differences between the two approaches can impact reported results, and therefore, it is critical to explicitly incorporate expert knowledge in statistical methods to ensure rigorous inference.
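The data-augmentation step at the heart of this framework is easy to sketch. The following Python fragment is illustrative only (the elicited probabilities and parameter values are invented): it samples each latent cause-of-death from the product of the observer's elicited probabilities and the current model-based cause-specific probabilities, as one MCMC iteration would.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: for each mortality, the observer's elicited probability
# that death was due to each of three causes (rows sum to 1).
elicited = np.array([[0.7, 0.2, 0.1],
                     [0.1, 0.8, 0.1],
                     [0.4, 0.4, 0.2]])

# Current model-based cause-specific mortality probabilities (assumed values).
theta = np.array([0.5, 0.3, 0.2])

# One data-augmentation step: sample each latent cause-of-death from the
# elicited prior combined with the current parameter values.
post = elicited * theta
post /= post.sum(axis=1, keepdims=True)
latent_causes = np.array([rng.choice(3, p=row) for row in post])
print(latent_causes)
```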
Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems
NASA Technical Reports Server (NTRS)
Koch, Patrick N.
1997-01-01
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and Noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method, as well as the associated approaches, is illustrated through application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial, regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified.
The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
Maduko, C O; Akoh, C C; Park, Y W
2007-05-01
Infant milk fat analogs resembling human milk fat were synthesized by an enzymatic interesterification between tripalmitin, coconut oil, safflower oil, and soybean oil in hexane. A commercially immobilized 1,3-specific lipase, Lipozyme RM IM, obtained from Rhizomucor miehei was used as a biocatalyst. The effects of substrate molar ratio, reaction time, and incubation temperature on the incorporation of palmitic acid at the sn-2 position of the triacylglycerols were investigated. A central composite design with 5 levels and 3 factors consisting of substrate ratio, reaction temperature, and incubation time was used to model and optimize the reaction conditions using response surface methodology. A quadratic model using multiple regression was then obtained for the incorporation of palmitic acid at the sn-2 positions of glycerols as the response. The coefficient of determination (R2) value for the model was 0.845. The incorporation of palmitic acid appeared to increase with the decrease in substrate molar ratio and increase in reaction temperature, and optimum incubation time occurred at 18 h. The optimal conditions generated from the model for the targeted 40% palmitic acid incorporation at the sn-2 position were 3 mol/mol, 14.4 h, and 55 degrees C; and 2.8 mol/mol, 19.6 h, and 55 degrees C for substrate ratio (moles of total fatty acid/moles of tripalmitin), time, and temperature, respectively. Infant milk fat containing fatty acid composition and sn-2 fatty acid profile similar to human milk fat was successfully produced. The fat analogs produced under optimal conditions had total and sn-2 positional palmitic acid levels comparable to those of human milk fat.
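The response-surface step can be illustrated with generic tools. The sketch below is not the study's code; the design points and the response are synthetic stand-ins. It fits a full quadratic model in the three factors, the same model family used to locate the optimum.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical design points: substrate ratio (mol/mol), time (h), temp (C)
X = rng.uniform([2, 6, 45], [6, 24, 65], size=(20, 3))

# Synthetic stand-in for measured sn-2 palmitic acid incorporation (%)
y = (40 - 2 * (X[:, 0] - 3) ** 2 - 0.05 * (X[:, 1] - 18) ** 2
     + 0.2 * X[:, 2] + rng.normal(0, 1, 20))

# Full quadratic response surface: linear, interaction, and squared terms
quad = PolynomialFeatures(degree=2, include_bias=False)
rsm = LinearRegression().fit(quad.fit_transform(X), y)
print("R^2:", rsm.score(quad.fit_transform(X), y))
```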
NASA Astrophysics Data System (ADS)
Andre, R.; Carlsson, J.; Gorelenkova, M.; Jardin, S.; Kaye, S.; Poli, F.; Yuan, X.
2016-10-01
TRANSP is an integrated interpretive and predictive transport analysis tool that incorporates state of the art heating/current drive sources and transport models. The treatments and transport solvers are becoming increasingly sophisticated and comprehensive. For instance, the ISOLVER component provides a free boundary equilibrium solution, while the PT-SOLVER transport solver is especially suited for stiff transport models such as TGLF. TRANSP incorporates high fidelity heating and current drive source models, such as NUBEAM for neutral beam injection, the beam tracing code TORBEAM for EC, TORIC for ICRF, and the ray tracing codes TORAY and GENRAY for EC. The implementation of selected components makes efficient use of MPI for speed up of code calculations. Recently the GENRAY-CQL3D solver for modeling of LH heating and current drive has been implemented and is currently being extended to multiple antennas, to allow modeling of EAST discharges. Also, GENRAY+CQL3D is being extended to the use of EC/EBW and of HHFW for NSTX-U. This poster will describe present uses of the code worldwide, as well as plans for upgrading the physics modules and code framework. Work supported by the US Department of Energy under DE-AC02-CH0911466.
Conn, P.B.; Kendall, W.L.; Samuel, M.D.
2004-01-01
Estimates of waterfowl demographic parameters often come from resighting studies where birds fit with individually identifiable neck collars are resighted at a distance. Concerns have been raised about the effects of collar loss on parameter estimates, and the reliability of extrapolating from collared individuals to the population. Models previously proposed to account for collar loss do not allow survival or harvest parameters to depend on neck collar presence or absence. Also, few models have incorporated recent advances in mark-recapture theory that allow for multiple states or auxiliary encounters such as band recoveries. We propose a multistate model for tag loss in which the presence or absence of a collar is considered as a state variable. In this framework, demographic parameters are corrected for tag loss and questions related to collar effects on survival and recovery rates can be addressed. Encounters of individuals between closed sampling periods also can be incorporated in the analysis. We discuss data requirements for answering questions related to tag loss and sampling designs that lend themselves to this purpose. We illustrate the application of our model using a study of lesser snow geese (Chen caerulescens caerulescens).
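The core idea, treating collar presence as a state, can be sketched with a simple transition matrix. The Python example below uses assumed survival and collar-retention probabilities; it is a toy illustration of how tag loss enters as a state transition, not the authors' estimation model.

```python
import numpy as np

S, r = 0.85, 0.92   # assumed annual survival and collar-retention probabilities

# States: 0 = alive with collar, 1 = alive without collar, 2 = dead.
# Collar loss is a state transition, so survival estimates are corrected
# for tag loss rather than confounded with it.
P = np.array([
    [S * r, S * (1 - r), 1 - S],   # collared birds may lose the collar
    [0.0,   S,           1 - S],   # collars are not regained
    [0.0,   0.0,         1.0],     # dead is absorbing
])

state = np.array([1.0, 0.0, 0.0])   # cohort released with collars
for year in range(3):
    state = state @ P
    print(f"year {year + 1}: collared={state[0]:.3f}, "
          f"uncollared={state[1]:.3f}, dead={state[2]:.3f}")
```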
Godinez, William J; Rohr, Karl
2015-02-01
Tracking subcellular structures as well as viral structures displayed as 'particles' in fluorescence microscopy images yields quantitative information on the underlying dynamical processes. We have developed an approach for tracking multiple fluorescent particles based on probabilistic data association. The approach combines a localization scheme that uses a bottom-up strategy based on the spot-enhancing filter as well as a top-down strategy based on an ellipsoidal sampling scheme that uses the Gaussian probability distributions computed by a Kalman filter. The localization scheme yields multiple measurements that are incorporated into the Kalman filter via a combined innovation, where the association probabilities are interpreted as weights calculated using an image likelihood. To track objects in close proximity, we compute the support of each image position relative to the neighboring objects of a tracked object and use this support to recalculate the weights. To cope with multiple motion models, we integrated the interacting multiple model algorithm. The approach has been successfully applied to synthetic 2-D and 3-D images as well as to real 2-D and 3-D microscopy images, and the performance has been quantified. In addition, the approach was successfully applied to the 2-D and 3-D image data of the recent Particle Tracking Challenge at the IEEE International Symposium on Biomedical Imaging (ISBI) 2012.
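The combined-innovation update can be sketched compactly. The following Python example is a simplified illustration with a one-dimensional state and association weights assumed given; the full probabilistic data association filter also carries a missed-detection term and a covariance correction for the spread of innovations.

```python
import numpy as np

# One Kalman update with a combined innovation over several candidate
# measurements, weighted by association probabilities (here taken as given;
# in the paper they come from an image likelihood).
x = np.array([10.0, 5.0])          # state: position, velocity
P = np.diag([4.0, 1.0])            # state covariance
H = np.array([[1.0, 0.0]])         # we observe position only
R = np.array([[2.0]])              # measurement noise

measurements = np.array([11.2, 9.5, 10.4])   # candidate detections
weights = np.array([0.5, 0.1, 0.4])          # association probabilities

S = H @ P @ H.T + R                # innovation covariance
K = P @ H.T @ np.linalg.inv(S)     # Kalman gain

# Combined innovation: probability-weighted sum of individual innovations
nu = sum(w * (z - H @ x) for w, z in zip(weights, measurements))
x = x + (K @ nu)
print("updated state:", x)
```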
Thermal Modeling Method Improvements for SAGE III on ISS
NASA Technical Reports Server (NTRS)
Liles, Kaitlin; Amundsen, Ruth; Davis, Warren; McLeod, Shawn
2015-01-01
The Stratospheric Aerosol and Gas Experiment III (SAGE III) instrument is the fifth in a series of instruments developed for monitoring aerosols and gaseous constituents in the stratosphere and troposphere. SAGE III will be delivered to the International Space Station (ISS) via the SpaceX Dragon vehicle. A detailed thermal model of the SAGE III payload, which consists of multiple subsystems, has been developed in Thermal Desktop (TD). Many innovative analysis methods have been used in developing this model; these will be described in the paper. This paper builds on a paper presented at TFAWS 2013, which described some of the initial developments of efficient methods for SAGE III. The current paper describes additional improvements that have been made since that time. To expedite the correlation of the model to thermal vacuum (TVAC) testing, the chambers and GSE for both TVAC chambers at Langley used to test the payload were incorporated within the thermal model. This allowed TVAC predictions and correlations to be run within the flight model, thus eliminating the need for separate models for TVAC. In one TVAC test, radiant lamps were used, which necessitated shooting rays from the lamps and running in both solar and IR wavebands. A new Dragon model was incorporated, which entailed a change in orientation; that change was made using an assembly, so that any additional new Dragon orbits could be added in the future without modification of the model. The Earth orbit parameters such as albedo and Earth infrared flux were incorporated as time-varying values that change over the course of the orbit; despite being required in one of the ISS documents, this had not been done before by any previous payload. All parameters such as initial temperature, heater voltage, and location of the payload are defined based on the case definition. For one component, testing was performed in both air and vacuum; incorporating the air convection in a submodel that was only built for the in-air cases allowed correlation of all testing to be done in a single model. These modeling improvements and more will be described and illustrated in the paper.
A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking
Shafiee, Mohammad Javad; Azimifar, Zohreh; Wong, Alexander
2015-01-01
In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that can change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering. PMID:26313943
Kuan, Hui-Shun; Betterton, Meredith D.
2016-01-01
Motor protein motion on biopolymers can be described by models related to the totally asymmetric simple exclusion process (TASEP). Inspired by experiments on the motion of kinesin-4 motors on antiparallel microtubule overlaps, we analyze a model incorporating the TASEP on two antiparallel lanes with binding kinetics and lane switching. We determine the steady-state motor density profiles using phase-plane analysis of the steady-state mean field equations and kinetic Monte Carlo simulations. We focus on the density-density phase plane, where we find an analytic solution to the mean field model. By studying the phase-space flows, we determine the model’s fixed points and their changes with parameters. Phases previously identified for the single-lane model occur for low switching rate between lanes. We predict a multiple coexistence phase due to additional fixed points that appear as the switching rate increases: switching moves motors from the higher-density to the lower-density lane, causing local jamming and creating multiple domain walls. We determine the phase diagram of the model for both symmetric and general boundary conditions. PMID:27627345
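A kinetic Monte Carlo version of this two-lane model is straightforward to sketch. The Python fragment below is a toy implementation with invented rates and closed boundaries, showing the three elementary moves: hopping with exclusion, lane switching into empty sites, and binding/unbinding.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 200                                 # lattice sites per lane
lanes = np.zeros((2, L), dtype=int)     # lane 0 moves right, lane 1 moves left
k_on, k_off, k_switch = 0.002, 0.001, 0.01   # assumed per-attempt probabilities

for step in range(200000):
    lane, i = rng.integers(2), rng.integers(L)
    target = i + 1 if lane == 0 else i - 1
    u = rng.random()
    if lanes[lane, i] == 1:
        if u < k_off:                                   # unbinding
            lanes[lane, i] = 0
        elif u < k_off + k_switch and lanes[1 - lane, i] == 0:
            lanes[lane, i], lanes[1 - lane, i] = 0, 1   # lane switch
        elif 0 <= target < L and lanes[lane, target] == 0:
            lanes[lane, i], lanes[lane, target] = 0, 1  # hop with exclusion
    elif u < k_on:                                      # binding from solution
        lanes[lane, i] = 1

print("mean densities per lane:", lanes.mean(axis=1))
```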
He, Zihuai; Xu, Bin; Lee, Seunggeun; Ionita-Laza, Iuliana
2017-09-07
Substantial progress has been made in the functional annotation of genetic variation in the human genome. Integrative analysis that incorporates such functional annotations into sequencing studies can aid the discovery of disease-associated genetic variants, especially those with unknown function and located outside protein-coding regions. Direct incorporation of one functional annotation as weight in existing dispersion and burden tests can suffer substantial loss of power when the functional annotation is not predictive of the risk status of a variant. Here, we have developed unified tests that can utilize multiple functional annotations simultaneously for integrative association analysis with efficient computational techniques. We show that the proposed tests significantly improve power when variant risk status can be predicted by functional annotations. Importantly, when functional annotations are not predictive of risk status, the proposed tests incur only minimal loss of power in relation to existing dispersion and burden tests, and under certain circumstances they can even have improved power by learning a weight that better approximates the underlying disease model in a data-adaptive manner. The tests can be constructed with summary statistics of existing dispersion and burden tests for sequencing data, therefore allowing meta-analysis of multiple studies without sharing individual-level data. We applied the proposed tests to a meta-analysis of noncoding rare variants in Metabochip data on 12,281 individuals from eight studies for lipid traits. By incorporating the Eigen functional score, we detected significant associations between noncoding rare variants in SLC22A3 and low-density lipoprotein and total cholesterol, associations that are missed by standard dispersion and burden tests. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
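The weighting idea behind the burden component can be illustrated in a few lines. The sketch below uses synthetic genotypes and invented annotation weights, with a simple correlation test standing in for the score test; it shows how functional annotations enter as weights on a collapsed rare-variant burden.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, m = 1000, 12                         # subjects, rare variants in a region
G = rng.binomial(2, 0.01, size=(n, m))  # genotypes (0/1/2 copies)
w = rng.uniform(0, 1, size=m)           # functional-annotation weights

# Burden statistic: annotation-weighted sum of rare alleles per subject
burden = G @ w

# Simulate a phenotype that truly depends on the weighted burden
y = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 1.5 * burden))))

# Score-type test approximated here by a simple correlation test
r, p = stats.pearsonr(burden, y)
print(f"toy burden association: r={r:.3f}, p={p:.3g}")
```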
An omnibus test for family-based association studies with multiple SNPs and multiple phenotypes.
Lasky-Su, Jessica; Murphy, Amy; McQueen, Matthew B; Weiss, Scott; Lange, Christoph
2010-06-01
We propose an omnibus family-based association test (MFBAT) that can be applied to multiple markers and multiple phenotypes and that has only one degree of freedom. The proposed test statistic extends current FBAT methodology to incorporate multiple markers as well as multiple phenotypes. Using simulation studies, power estimates for the proposed methodology are compared with the standard methodologies. On the basis of these simulations, we find that MFBAT substantially outperforms other methods, including haplotypic approaches and performing multiple tests with individual single-nucleotide polymorphisms (SNPs) and single phenotypes. The practical relevance of the approach is illustrated by an application to asthma in which SNP/phenotype combinations are identified and reach overall significance that would not have been identified using other approaches. This methodology is directly applicable to cases in which there are multiple SNPs, such as candidate gene studies, cases in which there are multiple phenotypes, such as expression data, and cases in which there are multiple phenotypes and genotypes, such as genome-wide association studies that incorporate expression profiles as phenotypes. This program is available in the PBAT analysis package.
Low Dose Radiation Cancer Risks: Epidemiological and Toxicological Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
David G. Hoel, PhD
2012-04-19
The basic purpose of this one-year research grant was to extend the two-stage clonal expansion model (TSCE) of carcinogenesis to exposures other than the usual single acute exposure. The two-stage clonal expansion model of carcinogenesis incorporates the biological process of carcinogenesis, which involves two mutations and the clonal proliferation of the intermediate cells, in a stochastic, mathematical way. The current TSCE model serves the general purpose of acute exposure models but requires numerical computation of both the survival and hazard functions. The primary objective of this research project was to develop the analytical expressions for the survival function and the hazard function of the occurrence of the first cancer cell for acute, continuous, and multiple exposure cases within the framework of the piece-wise constant parameter two-stage clonal expansion model of carcinogenesis. For a single acute exposure and for series of acute exposures, either only the first mutation rate is allowed to vary with the dose, or all the parameters are dose dependent; for multiple continuous exposures, all the parameters are allowed to vary with the dose. With these analytical functions, it becomes easy to evaluate the risks of cancer and to deal with the various exposure patterns in cancer risk assessment. A second objective was to apply the TSCE model with varying continuous exposures to the cancer studies of inhaled plutonium in beagle dogs. Using step functions to estimate the retention functions of the pulmonary exposure of plutonium, the multiple-exposure version of the TSCE model was to be used to estimate the beagle dog lung cancer risks. The mathematical equations of the multiple exposure versions of the TSCE model were developed. A draft manuscript, which is attached, provides the results of this mathematical work. The application work using the beagle dog data from plutonium exposure has not been completed because the research project did not continue beyond its first year.
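The stochastic process underlying the TSCE model can also be simulated directly. The following Python sketch uses illustrative parameter values and a Gillespie simulation rather than the analytical solutions developed in the project; it returns the time at which the first malignant cell arises.

```python
import numpy as np

def tsce_first_cancer_time(mu1, alpha, beta, mu2, X=1e7, t_max=100.0, seed=0):
    """Gillespie simulation of the two-stage clonal expansion model; returns
    the time the first malignant cell appears, or None if none by t_max."""
    rng = np.random.default_rng(seed)
    t, I = 0.0, 0                         # time and intermediate-cell count
    while t < t_max:
        rates = np.array([mu1 * X,        # initiation: normal -> intermediate
                          alpha * I,      # intermediate cell division
                          beta * I,       # intermediate cell death
                          mu2 * I])       # malignant conversion
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event in (0, 1):
            I += 1
        elif event == 2:
            I -= 1
        else:
            return t                      # first cancer cell has appeared
    return None

print(tsce_first_cancer_time(mu1=1e-7, alpha=0.1, beta=0.09, mu2=1e-6))
```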
Borchers, D L; Langrock, R
2015-12-01
We develop maximum likelihood methods for line transect surveys in which animals go undetected at distance zero, either because they are stochastically unavailable while within view or because they are missed when they are available. These incorporate a Markov-modulated Poisson process model for animal availability, allowing more clustered availability events than is possible with Poisson availability models. They include a mark-recapture component arising from the independent-observer survey, leading to more accurate estimation of detection probability given availability. We develop models for situations in which (a) multiple detections of the same individual are possible and (b) some or all of the availability process parameters are estimated from the line transect survey itself, rather than from independent data. We investigate estimator performance by simulation, and compare the multiple-detection estimators with estimators that use only initial detections of individuals, and with a single-observer estimator. Simultaneous estimation of detection function parameters and availability model parameters is shown to be feasible from the line transect survey alone with multiple detections and double-observer data but not with single-observer data. Recording multiple detections of individuals improves estimator precision substantially when estimating the availability model parameters from survey data, and we recommend that these data be gathered. We apply the methods to estimate detection probability from a double-observer survey of North Atlantic minke whales, and find that double-observer data greatly improve estimator precision here too. © 2015 The Authors Biometrics published by Wiley Periodicals, Inc. on behalf of International Biometric Society.
PFLOTRAN: Reactive Flow & Transport Code for Use on Laptops to Leadership-Class Supercomputers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Glenn E.; Lichtner, Peter C.; Lu, Chuan
PFLOTRAN, a next-generation reactive flow and transport code for modeling subsurface processes, has been designed from the ground up to run efficiently on machines ranging from leadership-class supercomputers to laptops. Based on an object-oriented design, the code is easily extensible to incorporate additional processes. It can interface seamlessly with Fortran 9X, C and C++ codes. Domain decomposition parallelism is employed, with the PETSc parallel framework used to manage parallel solvers, data structures and communication. Features of the code include a modular input file, implementation of high-performance I/O using parallel HDF5, ability to perform multiple realization simulations with multiple processors per realization in a seamless manner, and multiple modes for multiphase flow and multicomponent geochemical transport. Chemical reactions currently implemented in the code include homogeneous aqueous complexing reactions and heterogeneous mineral precipitation/dissolution, ion exchange, surface complexation and a multirate kinetic sorption model. PFLOTRAN has demonstrated petascale performance using 2^17 processor cores with over 2 billion degrees of freedom. Accomplishments achieved to date include applications to the Hanford 300 Area and modeling CO2 sequestration in deep geologic formations.
NASA Technical Reports Server (NTRS)
Parker, Linda Neergaard; Zank, Gary P.
2013-01-01
We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models (Protheroe and Stanev, 1998; Moraal and Axford, 1983; Ball and Kirk, 1992; Drury et al., 1999) in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box (Melrose and Pope, 1993; Zank et al., 2000). We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000) where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
System Dynamics Modeling for Public Health: Background and Opportunities
Homer, Jack B.; Hirsch, Gary B.
2006-01-01
The systems modeling methodology of system dynamics is well suited to address the dynamic complexity that characterizes many public health issues. The system dynamics approach involves the development of computer simulation models that portray processes of accumulation and feedback and that may be tested systematically to find effective policies for overcoming policy resistance. System dynamics modeling of chronic disease prevention should seek to incorporate all the basic elements of a modern ecological approach, including disease outcomes, health and risk behaviors, environmental factors, and health-related resources and delivery systems. System dynamics shows promise as a means of modeling multiple interacting diseases and risks, the interaction of delivery systems and diseased populations, and matters of national and state policy. PMID:16449591
Integrated Main Propulsion System Performance Reconstruction Process/Models
NASA Technical Reports Server (NTRS)
Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael
2013-01-01
The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.
Crop status evaluations and yield predictions
NASA Technical Reports Server (NTRS)
Haun, J. R.
1975-01-01
A model was developed for predicting the day 50 percent of the wheat crop is planted in North Dakota. This model incorporates location as an independent variable. The Julian date when 50 percent of the crop was planted for the nine divisions of North Dakota for seven years was regressed on the 49 variables through the step-down multiple regression procedure. This procedure begins with all of the independent variables and sequentially removes variables that are below a predetermined level of significance after each step. The prediction equation was tested on daily data. The accuracy of the model is considered satisfactory for finding the historic dates on which to initiate the yield prediction model. Growth prediction models were also developed for spring wheat.
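Step-down (backward elimination) regression is simple to sketch. The Python example below uses synthetic data standing in for the 49 candidate variables; it repeatedly drops the least significant predictor until all remaining p-values clear a threshold.

```python
import numpy as np
import statsmodels.api as sm

def step_down(X, y, alpha=0.05):
    """Backward elimination: start with all predictors and repeatedly drop
    the least significant one until every p-value is below alpha."""
    cols = list(range(X.shape[1]))
    while cols:
        fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
        pvals = fit.pvalues[1:]            # skip the intercept
        worst = np.argmax(pvals)
        if pvals[worst] < alpha:
            return cols, fit
        cols.pop(worst)
    return cols, None

rng = np.random.default_rng(3)
X = rng.normal(size=(63, 10))   # 63 = 9 divisions x 7 years, toy predictors
y = 2 * X[:, 0] - X[:, 3] + rng.normal(size=63)
kept, fit = step_down(X, y)
print("retained predictors:", kept)
```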
Mesoscopic Modeling of Blood Clotting: Coagulation Cascade and Platelets Adhesion
NASA Astrophysics Data System (ADS)
Yazdani, Alireza; Li, Zhen; Karniadakis, George
2015-11-01
The process of clot formation and growth at a site on a blood vessel wall involves a number of simultaneous multi-scale processes, including multiple chemical reactions in the coagulation cascade, species transport, and flow. To model these processes we have incorporated advection-diffusion-reaction (ADR) of multiple species into an extended version of the Dissipative Particle Dynamics (DPD) method, which can be considered a coarse-grained Molecular Dynamics method. At the continuum level this is equivalent to the Navier-Stokes equations plus one advection-diffusion equation for each species. The chemistry of clot formation is now understood to be determined by mechanisms involving reactions among many species in dilute solution, where reaction rate constants and species diffusion coefficients in plasma are known. The role of blood particulates, i.e. red cells and platelets, in the clotting process is studied by including them separately and together in the simulations. An agonist-induced platelet activation mechanism is presented, while platelet adhesive dynamics based on a stochastic bond formation/dissociation process is included in the model.
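The per-species transport has a familiar continuum form. As a minimal illustration (a 1-D explicit finite-difference scheme with invented coefficients and periodic boundaries, not the DPD implementation), one advection-diffusion-reaction update looks like this:

```python
import numpy as np

# Explicit finite-difference step for one species of an advection-diffusion-
# reaction equation, the continuum counterpart of the per-species transport
# solved in the DPD model (toy coefficients, periodic boundaries).
nx, dx, dt = 200, 0.1, 0.001
u, D, k = 1.0, 0.5, 2.0           # advection speed, diffusivity, reaction rate
c = np.exp(-((np.arange(nx) * dx - 5) ** 2))   # initial concentration pulse

for _ in range(1000):
    adv = -u * (c - np.roll(c, 1)) / dx                          # upwind advection
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2   # diffusion
    c = c + dt * (adv + dif - k * c)                             # first-order decay

print("total remaining species:", c.sum() * dx)
```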
Bright and compact macromolecular probes for bioimaging applications
NASA Astrophysics Data System (ADS)
Thapaliya, Ek Raj; Zhang, Yang; Dhakal, Pravat; Brown, Adrienne S.; Wilson, James N.; Collins, Kevin M.; Raymo, Françisco M.
2018-02-01
Amphiphilic macromolecules with multiple borondipyrromethene (BODIPY) chromophores appended to a common poly(methacrylate) backbone were synthesized by the random co-polymerization of appropriate methacrylate monomers. The resulting polymers also incorporate hydrophilic oligo(ethylene glycol) and hydrophobic decyl side chains, designed to impose aqueous solubility and to insulate the chromophoric components from each other, respectively. The presence of multiple chromophores translates into a significant enhancement in molar absorption coefficient, relative to a model BODIPY monomer. The effective insulation of the fluorophores minimizes interchromophoric interactions and mitigates depressive effects on the fluorescence quantum yield. The overall result is a 6-fold enhancement in brightness, relative to the model monomer. These macromolecular probes can be injected into live Caenorhabditis elegans to allow their visualization with a 4-fold increase in signal intensity, relative to the model system. Furthermore, they can be conjugated to secondary antibodies, under standard amide-coupling conditions, with negligible influence on the binding affinity of the biomolecules to allow the implementation of immunolabeling protocols.
Crase, Beth; Liedloff, Adam; Vesk, Peter A; Fukuda, Yusuke; Wintle, Brendan A
2014-08-01
Species distribution models (SDMs) are widely used to forecast changes in the spatial distributions of species and communities in response to climate change. However, spatial autocorrelation (SA) is rarely accounted for in these models, despite its ubiquity in broad-scale ecological data. While spatial autocorrelation in model residuals is known to result in biased parameter estimates and the inflation of type I errors, the influence of unmodeled SA on species' range forecasts is poorly understood. Here we quantify how accounting for SA in SDMs influences the magnitude of range shift forecasts produced by SDMs for multiple climate change scenarios. SDMs were fitted to simulated data with a known autocorrelation structure, and to field observations of three mangrove communities from northern Australia displaying strong spatial autocorrelation. Three modeling approaches were implemented: environment-only models (most frequently applied in species' range forecasts), and two approaches that incorporate SA; autologistic models and residuals autocovariate (RAC) models. Differences in forecasts among modeling approaches and climate scenarios were quantified. While all model predictions at the current time closely matched the actual current distribution of the mangrove communities, under the climate change scenarios environment-only models forecast substantially greater range shifts than models incorporating SA. Furthermore, the magnitude of these differences intensified with increasing increments of climate change across the scenarios. When models do not account for SA, forecasts of species' range shifts indicate more extreme impacts of climate change, compared to models that explicitly account for SA. Therefore, where biological or population processes induce substantial autocorrelation in the distribution of organisms, and this is not modeled, model predictions will be inaccurate. These results have global importance for conservation efforts as inaccurate forecasts lead to ineffective prioritization of conservation activities and potentially to avoidable species extinctions. © 2014 John Wiley & Sons Ltd.
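The residuals autocovariate (RAC) approach is easy to sketch: fit an environment-only model, spatially smooth its residuals, and refit with that autocovariate as an extra predictor. The Python fragment below is a toy version on a synthetic grid (wrap-around neighbours used for brevity).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 30                                      # grid side; cells are "sites"
env = rng.normal(size=(n, n))               # one environmental covariate
occ = (env + rng.normal(0, 1, (n, n)) > 0).astype(int)   # toy occurrences

X = env.reshape(-1, 1)
y = occ.ravel()

# Step 1: environment-only model
m1 = LogisticRegression().fit(X, y)
resid = (y - m1.predict_proba(X)[:, 1]).reshape(n, n)

# Step 2: autocovariate = mean residual of the 4 rook neighbours
rac = (np.roll(resid, 1, 0) + np.roll(resid, -1, 0) +
       np.roll(resid, 1, 1) + np.roll(resid, -1, 1)) / 4.0

# Step 3: refit with the residuals autocovariate as an extra predictor
m2 = LogisticRegression().fit(np.column_stack([X, rac.reshape(-1, 1)]), y)
print("RAC coefficient:", m2.coef_[0, 1])
```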
NASA Astrophysics Data System (ADS)
Mariano, Adrian V.; Grossmann, John M.
2010-11-01
Reflectance-domain methods convert hyperspectral data from radiance to reflectance using an atmospheric compensation model. Material detection and identification are performed by comparing the compensated data to target reflectance spectra. We introduce two radiance-domain approaches, Single atmosphere Adaptive Cosine Estimator (SACE) and Multiple atmosphere ACE (MACE) in which the target reflectance spectra are instead converted into sensor-reaching radiance using physics-based models. For SACE, known illumination and atmospheric conditions are incorporated in a single atmospheric model. For MACE the conditions are unknown so the algorithm uses many atmospheric models to cover the range of environmental variability, and it approximates the result using a subspace model. This approach is sometimes called the invariant method, and requires the choice of a subspace dimension for the model. We compare these two radiance-domain approaches to a Reflectance-domain ACE (RACE) approach on a HYDICE image featuring concealed materials. All three algorithms use the ACE detector, and all three techniques are able to detect most of the hidden materials in the imagery. For MACE we observe a strong dependence on the choice of the material subspace dimension. Increasing this value can lead to a decline in performance.
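The ACE statistic shared by all three algorithms has a compact closed form. A minimal Python sketch, with synthetic background statistics standing in for estimates from the imagery:

```python
import numpy as np

def ace(x, s, mean, cov_inv):
    """Adaptive Cosine Estimator for pixel spectrum x and target spectrum s,
    given the background mean and inverse covariance."""
    xc, sc = x - mean, s - mean
    num = (sc @ cov_inv @ xc) ** 2
    return num / ((sc @ cov_inv @ sc) * (xc @ cov_inv @ xc))

rng = np.random.default_rng(11)
bg = rng.normal(size=(5000, 50))            # toy background spectra (50 bands)
mean, cov_inv = bg.mean(0), np.linalg.inv(np.cov(bg.T))
target = rng.normal(size=50)

# A pixel aligned with the target direction scores 1; background scores low.
print(ace(mean + 0.5 * (target - mean), target, mean, cov_inv))
print(ace(bg[0], target, mean, cov_inv))
```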
Soneson, Charlotte; Fontes, Magnus
2012-01-01
Analysis of multivariate data sets from, for example, microarray studies frequently results in lists of genes which are associated with some response of interest. The biological interpretation is often complicated by the statistical instability of the obtained gene lists, which may partly be due to the functional redundancy among genes, implying that multiple genes can play exchangeable roles in the cell. In this paper, we use the concept of exchangeability of random variables to model this functional redundancy and thereby account for the instability. We present a flexible framework to incorporate the exchangeability into the representation of lists. The proposed framework supports straightforward comparison between any 2 lists. It can also be used to generate new more stable gene rankings incorporating more information from the experimental data. Using 2 microarray data sets, we show that the proposed method provides more robust gene rankings than existing methods with respect to sampling variations, without compromising the biological significance of the rankings.
A Novel Joint Problem of Routing, Scheduling, and Variable-Width Channel Allocation in WMNs
Liu, Wan-Yu; Chou, Chun-Hung
2014-01-01
This paper investigates a novel joint problem of routing, scheduling, and channel allocation for single-radio multichannel wireless mesh networks in which multiple channel widths can be adjusted dynamically through a new software technology, so that more concurrent transmissions can be achieved and overlapping-channel interference suppressed. Although previous works have studied this joint problem, their linear programming models did not incorporate some delicate constraints. As a result, this paper first constructs a linear programming model with more practical concerns and then proposes a simulated annealing approach with a novel encoding mechanism, in which the configurations of multiple time slots are devised to characterize the dynamic transmission process. Experimental results show that our approach can find the same or similar solutions as the optimal solutions for smaller-scale problems and can efficiently find good-quality solutions for a variety of larger-scale problems. PMID:24982990
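The simulated annealing core is generic and can be sketched independently of the encoding. The Python example below uses a toy continuous energy; in the paper's setting the configuration would encode routes, time slots, and channel widths, and the neighbour move would perturb one of them.

```python
import math
import random

def simulated_annealing(initial, energy, neighbour,
                        t0=10.0, cooling=0.995, steps=20000):
    """Generic simulated annealing; `energy` is minimized, `neighbour`
    proposes a modified configuration (e.g. reassigning a channel or slot)."""
    current, e_cur = initial, energy(initial)
    best, e_best = current, e_cur
    t = t0
    for _ in range(steps):
        cand = neighbour(current)
        e_cand = energy(cand)
        # Accept improvements always; accept worse moves with Boltzmann prob.
        if e_cand < e_cur or random.random() < math.exp((e_cur - e_cand) / t):
            current, e_cur = cand, e_cand
            if e_cur < e_best:
                best, e_best = current, e_cur
        t *= cooling
    return best, e_best

# Toy usage: minimize a quadratic over a 4-component configuration vector
best, e = simulated_annealing(
    [random.uniform(-5, 5) for _ in range(4)],
    energy=lambda c: sum(x * x for x in c),
    neighbour=lambda c: [x + random.gauss(0, 0.3) for x in c])
print(best, e)
```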
A Mechanistic Model for Cooperative Behavior of Co-transcribing RNA Polymerases
Heberling, Tamra; Davis, Lisa; Gedeon, Jakub; Morgan, Charles; Gedeon, Tomáš
2016-01-01
In fast-transcribing prokaryotic genes, such as an rrn gene in Escherichia coli, many RNA polymerases (RNAPs) transcribe the DNA simultaneously. Active elongation of RNAPs is often interrupted by pauses, which has been observed to cause RNAP traffic jams; yet some studies indicate that elongation seems to be faster in the presence of multiple RNAPs than elongation by a single RNAP. We propose that an interaction between RNAPs via the torque produced by RNAP motion on helically twisted DNA can explain this apparent paradox. We have incorporated the torque mechanism into a stochastic model and simulated transcription both with and without torque. Simulation results illustrate that the torque causes shorter pause durations and fewer collisions between polymerases. Our results suggest that the torsional interaction of RNAPs is an important mechanism in maintaining fast transcription times, and that transcription should be viewed as a cooperative group effort by multiple polymerases. PMID:27517607
NASA Astrophysics Data System (ADS)
Cantrell, Jason T.
This document outlines in detail the research performed by applying shape memory polymers in a generic unimorph actuator configuration. A set of experiments designed to investigate the influence of transverse curvature, the relative widths of shape memory polymer and composite substrates, and shape memory polymer thickness on actuator recoverability after multiple thermo-mechanical cycles is presented in detail. A theoretical model of the moment required to maintain shape fixity with minimal shape retention loss was developed and experimentally validated for unimorph composite actuators of varying cross-sectional areas. Theoretical models were also developed and evaluated to determine the relationship between the materials' neutral axes and thermal stability during a thermo-mechanical cycle. Research was conducted on the incorporation of shape memory polymers on micro air vehicle wings to maximize shape fixity and shape recoverability while minimizing the volume of shape memory polymer on the wing surface. Applications-based research also included experimentally evaluating the feasibility of shape memory polymers on deployable satellite antenna ribs, both with and without resistance heaters, which could be utilized to assist in antenna deployment.
Unsupervised Domain Adaptation with Multiple Acoustic Models
2010-12-01
Discriminative MAP Adaptation. Standard ML-MAP has been extended to incorporate discriminative training criteria such as MMI and MPE [10]. Discriminative MAP...smoothing variable $\tau^I$. For example, the MMI-MAP mean is given by
$$\hat{\mu}_{jm}^{(\text{mmi-map})} = \frac{\{\theta_{jm}^{\text{num}}(O) - \theta_{jm}^{\text{den}}(O)\} + D_{jm}\hat{\mu}_{jm} + \tau^I \mu_{jm}^{(\text{ml-map})}}{\{\gamma_{jm}^{\text{num}} - \gamma_{jm}^{\text{den}}\} + D_{jm} + \tau^I}$$
where $\gamma_{jm}^{\text{num}}$ and $\gamma_{jm}^{\text{den}}$ denote occupancies from MMI training, and $D_{jm}$ is the Gaussian-dependent parameter for the extended Baum-Welch (EBW) algorithm. MMI-MAP has been successfully applied in
Informedia at TRECVID2014: MED and MER, Semantic Indexing, Surveillance Event Detection
2014-11-10
multiple ranked lists for a given system query. Our system incorporates various retrieval methods such as Vector Space Model, tf-idf, BM25, language...separable space before applying the linear classifier. As the EFM is an approximation, we run the risk of a slight drop in performance. Figure 4 shows...validation set are fused. • CMU_Run3: After removing junk shots (by the junk/black frame detectors), MultiModal Pseudo Relevance Feedback (MMPRF) [12
Modeling Uncertainty in Military Supply Chain Management Decisions
2014-06-23
a compound probability distribution (Eppen and Martin, 1988; Lau and Lau, 2003; Lin, 2008). This paper will incorporate the previously described...distribution with and is selected for the regular state and the N(0.27, 0.19) is chosen for state 2. The demand in each state for a given lead...supplier receives orders of size Q from the buyer and purchases inventory from its vendors in a quantity that is an integer multiple N of the buyer's
NASA Technical Reports Server (NTRS)
Ly, Uy-Loi; Schoemig, Ewald
1993-01-01
In the past few years, the mixed H2/H-infinity control problem has been the object of much research interest since it allows the incorporation of robust stability into the LQG framework. The general mixed H2/H-infinity design problem has yet to be solved analytically. Numerous schemes have considered upper bounds for the H2-performance criterion and/or imposed restrictive constraints on the class of systems under investigation. Furthermore, many modern control applications rely on dynamic models obtained from finite-element analysis and thus involve high-order plant models. Hence the capability to design low-order (fixed-order) controllers is of great importance. In this research a new design method was developed that optimizes the exact H2-norm of a certain subsystem subject to robust stability in terms of H-infinity constraints and a minimal number of system assumptions. The derived algorithm is based on a differentiable scalar time-domain penalty function to represent the H-infinity constraints in the overall optimization. The scheme is capable of handling multiple plant conditions, and hence multiple performance criteria and H-infinity constraints, and incorporates additional constraints such as fixed-order and/or fixed-structure controllers. The defined penalty function is applicable to any constraint that is expressible in the form of a real symmetric matrix inequality.
In Vitro Tumor Models: Advantages, Disadvantages, Variables, and Selecting the Right Platform.
Katt, Moriah E; Placone, Amanda L; Wong, Andrew D; Xu, Zinnia S; Searson, Peter C
2016-01-01
In vitro tumor models have provided important tools for cancer research and serve as low-cost screening platforms for drug therapies; however, cancer recurrence remains largely unchecked due to metastasis, which is the cause of the majority of cancer-related deaths. The need for an improved understanding of the progression and treatment of cancer has pushed for increased accuracy and physiological relevance of in vitro tumor models. As a result, in vitro tumor models have concurrently increased in complexity and their output parameters further diversified, since these models have progressed beyond simple proliferation, invasion, and cytotoxicity screens and have begun recapitulating critical steps in the metastatic cascade, such as intravasation, extravasation, angiogenesis, matrix remodeling, and tumor cell dormancy. Advances in tumor cell biology, 3D cell culture, tissue engineering, biomaterials, microfabrication, and microfluidics have enabled rapid development of new in vitro tumor models that often incorporate multiple cell types, extracellular matrix materials, and spatial and temporal introduction of soluble factors. Other innovations include the incorporation of perfusable microvessels to simulate the tumor vasculature and model intravasation and extravasation. The drive toward precision medicine has increased interest in adapting in vitro tumor models for patient-specific therapies, clinical management, and assessment of metastatic potential. Here, we review the wide range of current in vitro tumor models and summarize their advantages, disadvantages, and suitability in modeling specific aspects of the metastatic cascade and drug treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruszka, T.P.
1987-01-01
Starting from the dynamic equations of electromagnetics we derive mutual impedance formulas that include the effects of induced polarization (IP) and electromagnetic (EM) coupling. The mutual impedance formulas are given for four geometries: a fullspace, a cylinder in a fullspace, a halfspace, and a layer over a halfspace. IP effects are characterized by a Cole-Cole model, the properties of which are fully investigated. From the general mutual impedance formulas specific limiting forms are defined to characterize the IP and EM effects. Using these limiting forms a framework is developed to justify the addition or multiplication of the two effects. The additive and multiplicative models are compared in the cylinder and layer geometries with the conclusion that the additive model proves to be more accurate over a wider range of frequencies than the multiplicative model. The nature of the IP and EM effects is illustrated in all four geometries showing the effects of relevant parameters. In all cases it is shown that the real part of the mutual impedance contains important IP information that is less influenced by EM effects. Finally the effects of boundaries are illustrated by the cylinder and layer geometries and a theory is developed to incorporate EM effects and IP effects from multiple regions which utilizes frequency dependent real dilution factors. The author also included a brief review of some EM removal schemes and dilution theory approximations.
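The Cole-Cole model referred to here has a standard closed form. A minimal Python sketch with illustrative parameter values (rho0 the DC resistivity, m the chargeability, tau the time constant, c the frequency exponent):

```python
import numpy as np

def cole_cole(omega, rho0=100.0, m=0.2, tau=0.1, c=0.5):
    """Cole-Cole complex resistivity as a function of angular frequency
    (parameter values here are illustrative, not fitted)."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

freqs = np.logspace(-2, 4, 7)                # Hz
z = cole_cole(2 * np.pi * freqs)
for f, zi in zip(freqs, z):
    print(f"{f:8.2f} Hz: Re={zi.real:7.2f}, -Im={-zi.imag:6.2f}")
```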
Troeller, A; Soehn, M; Yan, D
2012-06-01
Introducing an extended, phenomenological, generalized equivalent uniform dose (eEUD) that incorporates multiple volume-effect parameters for different dose-ranges. The generalized EUD (gEUD) was introduced as an estimate of the EUD that incorporates a single, tissue-specific parameter - the volume-effect parameter (VEP) 'a'. As a purely phenomenological concept, its radio-biological equivalency to a given inhomogeneous dose distribution is not a priori clear, and mechanistic models based on radio-biological parameters are assumed to better resemble the underlying biology. However, for normal organs mechanistic models are hard to derive, since the structural organization of the tissue plays a significant role. Consequently, phenomenological approaches might be especially useful in order to describe dose-response for normal tissues. Yet the single parameter used to estimate the gEUD may not suffice to accurately represent more complex biological effects that have been discussed in the literature. For instance, radio-biological parameters and hence the effects of fractionation are known to be dose-range dependent. Therefore, we propose an extended phenomenological eEUD formula that incorporates multiple VEPs accounting for dose-range dependency. The eEUD introduced is a piecewise polynomial expansion of the gEUD formula. In general, it allows for an arbitrary number of VEPs, each valid for a certain dose-range. We proved that the formula fulfills required mathematical and physical criteria such as invertibility of the underlying dose-effect and continuity in dose. Furthermore, it contains the gEUD as a special case, if all VEPs are equal to 'a' from the gEUD model. The eEUD is a concept that expands the gEUD such that it can theoretically represent dose-range dependent effects. Its practicality, however, remains to be shown. As a next step, this will be done by estimating the eEUD from patient data using maximum-likelihood based NTCP modelling in the same way it is commonly done for the gEUD. © 2012 American Association of Physicists in Medicine.
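For reference, the gEUD that the eEUD generalizes is a one-line computation. The sketch below uses a toy differential DVH; the proposed eEUD would apply different VEPs over different dose ranges rather than the single exponent shown here.

```python
import numpy as np

def gEUD(doses, volumes, a):
    """Generalized EUD: (sum_i v_i * d_i^a)^(1/a), with v_i the fractional
    volume receiving dose d_i (from a differential DVH)."""
    v = np.asarray(volumes) / np.sum(volumes)
    return (v * np.asarray(doses, float) ** a).sum() ** (1.0 / a)

doses = [10, 30, 50, 70]          # Gy, toy differential DVH bins
vols = [0.4, 0.3, 0.2, 0.1]
for a in (1, 4, 12):              # a=1 -> mean dose; large a -> near-max dose
    print(f"a={a:2d}: gEUD = {gEUD(doses, vols, a):.1f} Gy")
```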
Bujkiewicz, Sylwia; Thompson, John R; Riley, Richard D; Abrams, Keith R
2016-03-30
A number of meta-analytical methods have been proposed that aim to evaluate surrogate endpoints. Bivariate meta-analytical methods can be used to predict the treatment effect for the final outcome from the treatment effect estimate measured on the surrogate endpoint, while taking into account the uncertainty around the effect estimate for the surrogate endpoint. In this paper, extensions to multivariate models are developed aiming to include multiple surrogate endpoints, with the potential benefit of reducing the uncertainty when making predictions. In this Bayesian multivariate meta-analytic framework, the between-study variability is modelled in a formulation of a product of normal univariate distributions. This formulation is particularly convenient for including multiple surrogate endpoints and flexible for modelling the outcomes, which can be surrogate endpoints to the final outcome and potentially to one another. Two models are proposed: the first uses an unstructured between-study covariance matrix, assuming the treatment effects on all outcomes are correlated; the second uses a structured between-study covariance matrix, assuming treatment effects on some of the outcomes are conditionally independent. While the two models are developed for the summary data on a study level, the individual-level association is taken into account by the use of Prentice's criteria (obtained from individual patient data) to inform the within-study correlations in the models. The modelling techniques are investigated using an example in relapsing remitting multiple sclerosis where the disability worsening is the final outcome, while relapse rate and MRI lesions are potential surrogates to the disability progression. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Near-wall turbulence model and its application to fully developed turbulent channel and pipe flows
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1990-01-01
A near-wall turbulence model and its incorporation into a multiple-timescale turbulence model are presented. The near-wall turbulence model is obtained from a k-equation turbulence model and a near-wall analysis. In the method, the equations for the conservation of mass, momentum, and turbulent kinetic energy are integrated up to the wall, and the energy transfer and the dissipation rates inside the near-wall layer are obtained from algebraic equations. Fully developed turbulent channel and pipe flows are solved using a finite element method. The computational results compare favorably with experimental data. It is also shown that the turbulence model can resolve the overshoot phenomena of the turbulent kinetic energy and the dissipation rate in the region very close to the wall.
Modelling Wolbachia infection in a sex-structured mosquito population carrying West Nile virus.
Farkas, József Z; Gourley, Stephen A; Liu, Rongsong; Yakubu, Abdul-Aziz
2017-09-01
Wolbachia is possibly the most studied reproductive parasite of arthropod species. It appears to be a promising candidate for biocontrol of some mosquito-borne diseases. We begin by developing a sex-structured model for a Wolbachia infected mosquito population. Our model incorporates the key effects of Wolbachia infection including cytoplasmic incompatibility and male killing. We also allow the possibility of reduced reproductive output, incomplete maternal transmission, and different mortality rates for uninfected/infected male/female individuals. We study the existence and local stability of equilibria, including the biologically relevant and interesting boundary equilibria. For some biologically relevant parameter regimes there may be multiple coexistence steady states including, very importantly, a coexistence steady state in which Wolbachia infected individuals dominate. We also extend the model to incorporate West Nile virus (WNv) dynamics, using an SEI modelling approach. Recent evidence suggests that a particular strain of Wolbachia infection significantly reduces WNv replication in Aedes aegypti. We model this via increased time spent in the WNv-exposed compartment for Wolbachia infected female mosquitoes. A basic reproduction number R0 is computed for the WNv infection. Our results suggest that, if the mosquito population consists mainly of Wolbachia infected individuals, WNv eradication is likely if WNv replication in Wolbachia infected individuals is sufficiently reduced.
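To make the SEI extension concrete, here is a minimal, hypothetical sketch in Python of the mechanism described above: Wolbachia infection is represented solely as a slower progression out of the WNv-exposed class. It is not the authors' full sex-structured model, and all parameter values are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two female mosquito strains, Wolbachia-uninfected (u) and -infected (w),
# with Wolbachia's effect on WNv represented only by a slower E->I rate.
beta = 0.3      # transmission rate into the exposed class (hypothetical)
sigma_u = 0.10  # E->I progression rate, uninfected mosquitoes (1/day)
sigma_w = 0.01  # E->I progression rate, Wolbachia-infected (longer exposed stage)
mu = 0.05       # per-capita birth and death rate (keeps strain sizes constant)

def rhs(t, y):
    Su, Eu, Iu, Sw, Ew, Iw = y
    force = beta * (Iu + Iw)                  # simplistic force of infection
    Nu, Nw = Su + Eu + Iu, Sw + Ew + Iw
    return [mu*Nu - force*Su - mu*Su,
            force*Su - (sigma_u + mu)*Eu,
            sigma_u*Eu - mu*Iu,
            mu*Nw - force*Sw - mu*Sw,
            force*Sw - (sigma_w + mu)*Ew,
            sigma_w*Ew - mu*Iw]

sol = solve_ivp(rhs, (0, 365), [0.5, 0.0, 0.001, 0.5, 0.0, 0.0])
print(sol.y[2, -1], sol.y[5, -1])             # final infectious fractions by strain
```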
Bayesian GGE biplot models applied to maize multi-environments trials.
de Oliveira, L A; da Silva, C P; Nuvunga, J J; da Silva, A Q; Balestre, M
2016-06-17
The additive main effects and multiplicative interaction (AMMI) and the genotype main effects and genotype x environment interaction (GGE) models stand out among the linear-bilinear models used in genotype x environment interaction studies. Despite the advantages of their use to describe genotype x environment (AMMI) or genotype and genotype x environment (GGE) interactions, these methods have known limitations that are inherent to fixed effects models, including difficulty in treating variance heterogeneity and missing data. Traditional biplots include no measure of uncertainty regarding the principal components. The present study aimed to apply the Bayesian approach to GGE biplot models and assess the implications for selecting stable and adapted genotypes. Our results demonstrated that the Bayesian approach applied to GGE models with non-informative priors was consistent with the traditional GGE biplot analysis, although the credible regions incorporated into the biplot enabled the performance of genotypes and their relationships with the environments to be distinguished on the basis of probability. Those regions also enabled the identification of groups of genotypes and environments with similar effects in terms of adaptability and stability. The relative position of genotypes and environments in biplots is highly affected by the experimental accuracy. Thus, incorporation of uncertainty in biplots is a key tool for breeders to make decisions regarding stability selection and adaptability and the definition of mega-environments.
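For orientation, the bilinear decomposition underlying a GGE biplot can be sketched as follows (standard notation; the Bayesian version assigns priors to these terms rather than treating them as fixed):

\[
\bar{y}_{ij} - \mu - \beta_j = \sum_{k=1}^{t} \lambda_k\, \alpha_{ik}\, \gamma_{jk} + \varepsilon_{ij},
\]

where \(\beta_j\) is the environment main effect, \(\lambda_k\) are the singular values, and \(\alpha_{ik}\), \(\gamma_{jk}\) are the genotype and environment scores plotted in the biplot; the credible regions described above are posterior regions for these scores.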
You, Seng Chan; Lee, Seongwon; Cho, Soo-Yeon; Park, Hojun; Jung, Sungjae; Cho, Jaehyeong; Yoon, Dukyong; Park, Rae Woong
2017-01-01
It is increasingly necessary to generate medical evidence applicable to Asian people compared to those in Western countries. Observational Health Data Sciences and Informatics (OHDSI) is an international collaborative which aims to facilitate generating high-quality evidence via creating and applying open-source data analytic solutions to a large network of health databases across countries. We aimed to incorporate Korean nationwide cohort data into the OHDSI network by converting the national sample cohort into the Observational Medical Outcomes Partnership-Common Data Model (OMOP-CDM). The data of 1.13 million subjects were converted to OMOP-CDM, with an average conversion rate of 99.1%. ACHILLES, an open-source OMOP-CDM-based data profiling tool, was run on the converted database to visualize a data-driven characterization and assess the quality of the data. The OMOP-CDM version of the National Health Insurance Service-National Sample Cohort (NHIS-NSC) can be a valuable tool for multiple aspects of medical research through its incorporation into the OHDSI research network.
Optimizing Irrigation Water Allocation under Multiple Sources of Uncertainty in an Arid River Basin
NASA Astrophysics Data System (ADS)
Wei, Y.; Tang, D.; Gao, H.; Ding, Y.
2015-12-01
Population growth and climate change add additional pressures affecting water resources management strategies for meeting demands from different economic sectors. This is especially challenging in arid regions where fresh water is limited. For instance, in the Tailanhe River Basin (Xinjiang, China), a compromise must be made between water suppliers and users during drought years. This study presents a multi-objective irrigation water allocation model to cope with water scarcity in arid river basins. To deal with the uncertainties from multiple sources in the water allocation system (e.g., variations of available water amount, crop yield, crop prices, and water price), the model employs an interval linear programming approach. The multi-objective optimization model developed in this study is characterized by integrating ecosystem service theory into water-saving measures. For evaluation purposes, the model is used to construct an optimal allocation system for irrigation areas fed by the Tailan River (Xinjiang Province, China). The objective functions to be optimized are formulated based on these irrigation areas' economic, social, and ecological benefits. The optimal irrigation water allocation plans are made under different hydroclimate conditions (wet year, normal year, and dry year), with multiple sources of uncertainty represented. The modeling tool and results are valuable for advising decision making by the local water authority and the agricultural community, especially on measures for coping with water scarcity (by incorporating uncertain factors associated with crop production planning).
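The interval linear programming idea can be sketched by solving two deterministic sub-problems, one at the optimistic bounds of the interval parameters and one at the pessimistic bounds; the Python sketch below uses invented numbers and a single water constraint, not the study's actual formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize net benefit c x subject to water availability, where the benefit
# coefficients and the available water are known only as intervals [lo, hi].
c_lo, c_hi = np.array([2.0, 3.0]), np.array([2.5, 3.6])  # benefit per unit area
water_lo, water_hi = 80.0, 100.0                         # available water (1e6 m3)
use = np.array([1.0, 1.2])                               # water use per unit area

def solve(c, water):
    # linprog minimizes, so negate the benefit coefficients
    res = linprog(-c, A_ub=[use], b_ub=[water], bounds=[(0, 60), (0, 60)])
    return res.x, -res.fun

x_opt, f_opt = solve(c_hi, water_hi)   # optimistic sub-problem (upper bound)
x_pes, f_pes = solve(c_lo, water_lo)   # pessimistic sub-problem (lower bound)
print(f"net benefit interval: [{f_pes:.1f}, {f_opt:.1f}]")
```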
Koerner, Tess K.; Zhang, Yang
2017-01-01
Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, as well as the necessity of applying, mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers. PMID:28264422
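As an illustration of the LME approach described above, a minimal sketch using the statsmodels formula interface; the file and column names are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per subject x listening condition, with a
# behavioral score and a neural measure (column names are illustrative only).
df = pd.read_csv("auditory_measures.csv")

# A random intercept per subject handles repeated measures across conditions;
# condition enters as a categorical fixed effect alongside the neural predictor.
model = smf.mixedlm("speech_in_noise ~ neural_response + C(condition)",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```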
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, N. O.
The goal of this proposal is to demonstrate that co-localization of protein subunit antigens and adjuvants on nanolipoprotein particles (NLPs) can increase the protective efficacy of recombinant subunit antigens from Burkholderia spp. and Francisella tularensis against an aerosol challenge. NLPs are biocompatible, high-density lipoprotein mimetics that are amenable to the incorporation of multiple, chemically-disparate adjuvant and antigen molecules. We hypothesize that the ability to co-localize optimized adjuvant formulations with subunit antigens within a single particle will enhance the stimulation and activation of key immune effector cells, increasing the protective efficacy of subunit antigen-based vaccines. While Burkholderia spp. and F. tularensis subunit antigens are the focus of this proposal, we anticipate that this approach is applicable to a wide range of DOD-relevant biothreat agents. The F344 rat aerosol challenge model for F. tularensis has been successfully established at Battelle under this contract, and Year 3 efficacy studies performed at Battelle demonstrated that an NLP vaccine formulation was able to enhance survival of female F344 rats relative to naïve animals. In addition, Year 3 focused on the incorporation of multiple Burkholderia antigens (both polysaccharides and proteins) onto adjuvanted NLPs, with immunological analysis poised to begin in the next quarter.
Bayesian Networks Improve Causal Environmental Assessments for Evidence-Based Policy.
Carriger, John F; Barron, Mace G; Newman, Michael C
2016-12-20
Rule-based weight of evidence approaches to ecological risk assessment may not account for uncertainties and generally lack probabilistic integration of lines of evidence. Bayesian networks allow causal inferences to be made from evidence by including causal knowledge about the problem, using this knowledge with probabilistic calculus to combine multiple lines of evidence, and minimizing biases in predicting or diagnosing causal relationships. Too often, conventional weight of evidence approaches ignore sources of uncertainty that could be accounted for with Bayesian networks. Specifying and propagating uncertainties improves the ability of models to incorporate the strength of the evidence in the risk management phase of an assessment. Probabilistic inference from a Bayesian network allows evaluation of changes in uncertainty for variables from the evidence. The network structure and probabilistic framework of a Bayesian approach provide advantages over qualitative approaches in weight of evidence for capturing the impacts of multiple sources of quantifiable uncertainty on predictions of ecological risk. Bayesian networks can facilitate the development of evidence-based policy under conditions of uncertainty by incorporating analytical inaccuracies or the implications of imperfect information, structuring and communicating causal issues through qualitative directed graph formulations, and quantitatively comparing the causal power of multiple stressors on valued ecological resources. These aspects are demonstrated through hypothetical problem scenarios that explore some major benefits of using Bayesian networks for reasoning and making inferences in evidence-based policy.
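A toy sketch of the kind of probabilistic evidence combination described above, assuming the pgmpy library; the network structure and all conditional probability values are invented for illustration.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two candidate causes of an observed ecological impact; all probabilities
# below are invented purely to illustrate the mechanics.
model = BayesianNetwork([("Chemical", "Impact"), ("Habitat", "Impact")])
cpd_chem = TabularCPD("Chemical", 2, [[0.7], [0.3]])  # P(stressor present) = 0.3
cpd_hab = TabularCPD("Habitat", 2, [[0.8], [0.2]])
cpd_imp = TabularCPD(
    "Impact", 2,
    # columns: (Chemical, Habitat) = (0,0), (0,1), (1,0), (1,1)
    values=[[0.95, 0.50, 0.60, 0.10],   # P(Impact = absent | parents)
            [0.05, 0.50, 0.40, 0.90]],  # P(Impact = present | parents)
    evidence=["Chemical", "Habitat"], evidence_card=[2, 2])
model.add_cpds(cpd_chem, cpd_hab, cpd_imp)

# Diagnostic query: how much does observing an impact raise belief in the
# chemical stressor as a cause?
infer = VariableElimination(model)
print(infer.query(["Chemical"], evidence={"Impact": 1}))
```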
Simulink-Based Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV)
NASA Technical Reports Server (NTRS)
Christhilf, David M.; Bacon, Barton J.
2006-01-01
The Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV) is a Simulink-based approach to providing an engineering-quality desktop simulation capability for finding trim solutions, extracting linear models for vehicle analysis and control law development, and generating open-loop and closed-loop time history responses for control system evaluation. It represents a useful level of maturity rather than a finished product. The layout is hierarchical and supports concurrent component development and validation, with support from the Concurrent Versions System (CVS) software management tool. Real Time Workshop (RTW) is used to generate pre-compiled code for substantial component modules, and templates permit switching seamlessly between original Simulink and code compiled for various platforms. Two previous limitations are addressed. Turnaround time for incorporating tabular model components was improved through auto-generation of required Simulink diagrams based on data received in XML format. The layout was modified to exploit a Simulink "compile once, evaluate multiple times" capability for zero elapsed time for use in trimming and linearizing. Trim is achieved through a Graphical User Interface (GUI) with a narrow, script-definable interface to the vehicle model which facilitates incorporating new models.
Economic and environmental optimization of waste treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Münster, M.; Ravn, H.; Hedegaard, K.
2015-04-15
Highlights:
• Optimizing waste treatment by incorporating LCA methodology.
• Applying different objectives (minimizing costs or GHG emissions).
• Prioritizing multiple objectives given different weights.
• Optimum depends on objective and assumed displaced electricity production.
Abstract: This article presents the new systems engineering optimization model, OptiWaste, which incorporates a life cycle assessment (LCA) methodology and captures important characteristics of waste management systems. As part of the optimization, the model identifies the most attractive waste management options. The model renders it possible to apply different optimization objectives such as minimizing costs or greenhouse gas emissions or to prioritize several objectives given different weights. A simple illustrative case is analysed, covering alternative treatments of one tonne of residual household waste: incineration of the full amount or sorting out organic waste for biogas production for either combined heat and power generation or as fuel in vehicles. The case study illustrates that the optimal solution depends on the objective and assumptions regarding the background system, illustrated with different assumptions regarding displaced electricity production. The article shows that it is feasible to combine LCA methodology with optimization. Furthermore, it highlights the need for including the integrated waste and energy system into the model.
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as from parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.
The weighted priors approach for combining expert opinions in logistic regression experiments
Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.
2017-04-24
When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.
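The weighted-criterion idea can be sketched as follows: score each candidate design by a Monte Carlo estimate of the Bayesian D-criterion (expected log-determinant of the logistic-model Fisher information) under each expert's prior, then average the scores with the expert weights. Everything below (priors, weights, candidate designs) is an invented stand-in, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two experts' priors on logistic parameters (intercept, slope vs temperature);
# the normal forms and weights here are hypothetical stand-ins for elicitation.
priors = [dict(mean=[2.0, -0.05], cov=np.diag([0.5, 1e-4])),
          dict(mean=[0.5, -0.02], cov=np.diag([0.5, 1e-4]))]
weights = [0.5, 0.5]

def d_criterion(design, prior, n_draws=2000):
    # Monte Carlo average of log det Fisher information for a logistic model
    thetas = rng.multivariate_normal(prior["mean"], prior["cov"], n_draws)
    X = np.column_stack([np.ones_like(design), design])
    total = 0.0
    for th in thetas:
        p = 1.0 / (1.0 + np.exp(-X @ th))
        W = p * (1 - p)
        info = X.T @ (W[:, None] * X)
        total += np.linalg.slogdet(info)[1]
    return total / n_draws

candidates = [np.array([20., 40., 60., 80.]), np.array([30., 50., 70., 90.])]
scores = [sum(w * d_criterion(d, pr) for w, pr in zip(weights, priors))
          for d in candidates]
print(candidates[int(np.argmax(scores))])  # design best on the weighted criterion
```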
Violent video games and delinquent behavior in adolescents: A risk factor perspective.
Exelmans, Liese; Custers, Kathleen; Van den Bulck, Jan
2015-05-01
Over the years, criminological research has identified a number of risk factors that contribute to the development of aggressive and delinquent behavior. Although studies have identified media violence in general and violent video gaming in particular as significant predictors of aggressive behavior, exposure to violent video games has been largely omitted from the risk factor literature on delinquent behavior. This cross-sectional study therefore investigates the relationship between violent video game play and adolescents' delinquent behavior using a risk factor approach. An online survey was completed by 3,372 Flemish adolescents, aged 12-18 years old. Data were analyzed by means of negative binomial regression modelling. Results indicated a significant contribution of violent video games in delinquent behavior over and beyond multiple known risk variables (peer delinquency, sensation seeking, prior victimization, and alienation). Moreover, the final model that incorporated the gaming genres proved to be significantly better than the model without the gaming genres. Results provided support for a cumulative and multiplicative risk model for delinquent behavior. Aggr. Behav. 41:267-279, 2015. © 2015 Wiley Periodicals, Inc.
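A minimal sketch of the kind of negative binomial regression described above, using statsmodels; the data file and variable names are hypothetical stand-ins for the study's codebook.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical survey data: a count of delinquent acts plus the risk factors
# named in the abstract (all column names are illustrative).
df = pd.read_csv("survey.csv")

model = smf.glm("delinquent_acts ~ violent_game_hours + peer_delinquency"
                " + sensation_seeking + prior_victimization + alienation",
                data=df, family=sm.families.NegativeBinomial())
print(model.fit().summary())
```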
Spindel, J E; Begum, H; Akdemir, D; Collard, B; Redoña, E; Jannink, J-L; McCouch, S
2016-01-01
To address the multiple challenges to food security posed by global climate change, population growth and rising incomes, plant breeders are developing new crop varieties that can enhance both agricultural productivity and environmental sustainability. Current breeding practices, however, are unable to keep pace with demand. Genomic selection (GS) is a new technique that helps accelerate the rate of genetic gain in breeding by using whole-genome data to predict the breeding value of offspring. Here, we describe a new GS model that combines RR-BLUP with markers fit as fixed effects selected from the results of a genome-wide association study (GWAS) on the RR-BLUP training data. We term this model GS + de novo GWAS. In a breeding population of tropical rice, GS + de novo GWAS outperformed six other models for a variety of traits and in multiple environments. On the basis of these results, we propose an extended, two-part breeding design that can be used to efficiently integrate novel variation into elite breeding populations, thus expanding genetic diversity and enhancing the potential for sustainable productivity gains. PMID:26860200
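A compact sketch of the GS + de novo GWAS idea on synthetic data: markers most associated with the phenotype in the training data are promoted to fixed effects, and the remainder enter as random ridge (RR-BLUP) effects via Henderson's mixed model equations. The marker-selection rule and the ridge parameter below are simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 300, 1000                                    # lines, markers (synthetic)
M = rng.integers(0, 3, size=(n, m)).astype(float)   # genotypes coded 0/1/2
y = M[:, :5] @ rng.normal(1, 0.2, 5) + rng.normal(0, 1, n)  # toy phenotype

# Step 1 (stand-in for GWAS on the training data): rank markers by marginal
# correlation with the phenotype and promote the top few to fixed effects.
r = np.abs([np.corrcoef(M[:, j], y)[0, 1] for j in range(m)])
fixed = np.argsort(r)[-3:]
X = np.column_stack([np.ones(n), M[:, fixed]])      # fixed effects
Z = np.delete(M, fixed, axis=1)                     # random ridge effects

# Step 2: solve Henderson's mixed model equations for RR-BLUP; the ridge
# parameter lam = sigma_e^2 / sigma_u^2 is set arbitrarily here.
lam = Z.shape[1] / 2.0
C = np.block([[X.T @ X, X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * np.eye(Z.shape[1])]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(C, rhs)
beta, u = sol[:X.shape[1]], sol[X.shape[1]:]
gebv = X @ beta + Z @ u                             # predicted breeding values
```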
Morris, Melody K; Shriver, Zachary; Sasisekharan, Ram; Lauffenburger, Douglas A
2012-03-01
Mathematical models have substantially improved our ability to predict the response of a complex biological system to perturbation, but their use is typically limited by difficulties in specifying model topology and parameter values. Additionally, incorporating entities across different biological scales ranging from molecular to organismal in the same model is not trivial. Here, we present a framework called "querying quantitative logic models" (Q2LM) for building and asking questions of constrained fuzzy logic (cFL) models. cFL is a recently developed modeling formalism that uses logic gates to describe influences among entities, with transfer functions to describe quantitative dependencies. Q2LM does not rely on dedicated data to train the parameters of the transfer functions, and it permits straightforward incorporation of entities at multiple biological scales. The Q2LM framework can be employed to ask questions such as: Which therapeutic perturbations accomplish a designated goal, and under what environmental conditions will these perturbations be effective? We demonstrate the utility of this framework for generating testable hypotheses in two examples: (i) an intracellular signaling network model; and (ii) a model for pharmacokinetics and pharmacodynamics of cell-cytokine interactions; in the latter, we validate hypotheses concerning the molecular design of granulocyte colony-stimulating factor. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
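A small sketch of the cFL building blocks: a normalized Hill transfer function for the quantitative dependencies and min/max logic gates, which is one common convention for fuzzy AND/OR; details of the authors' implementation may differ.

```python
import numpy as np

def norm_hill(x, ec50=0.5, n=3.0):
    """Normalized Hill transfer function mapping [0,1] -> [0,1], so that
    f(0) = 0 and f(1) = 1 (a common cFL parameterization)."""
    num = x**n / (ec50**n + x**n)
    return num * (ec50**n + 1.0)   # normalization so f(1) == 1

def gate_and(*inputs):
    return min(inputs)   # fuzzy AND: minimum of incoming activities

def gate_or(*inputs):
    return max(inputs)   # fuzzy OR: maximum of incoming activities

# Toy two-input node: activity of C driven by A AND B through Hill transforms
a, b = 0.9, 0.4
c = gate_and(norm_hill(a), norm_hill(b, ec50=0.3, n=2.0))
print(round(c, 3))
```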
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1983-01-01
Multiple altitude TM thermal infrared images were analyzed and the observed radiance values were computed. The data obtained represent an experimental relation between received radiance and altitude. A LOWTRAN approach was tested which incorporates a modification to the path radiance model. This modification assumes that the scattering out of the optical path is equal in magnitude and direction to the scattering into the path. The radiance observed at altitude by an aircraft sensor was used as input to the model. Expected radiance as a function of altitude was then computed down to the ground. The results were not very satisfactory because of somewhat large errors in temperature and because of the difference in the shape of the modeled and experimental curves.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA Input into an XML file that is used as input to different VERA codes.
The topographical model of multiple sclerosis
Cook, Karin; De Nino, Scott; Fletcher, Madhuri
2016-01-01
Relapses and progression contribute to multiple sclerosis (MS) disease course, but neither the relationship between them nor the spectrum of clinical heterogeneity has been fully characterized. A hypothesis-driven, biologically informed model could build on the clinical phenotypes to encompass the dynamic admixture of factors underlying MS disease course. In this medical hypothesis, we put forth a dynamic model of MS disease course that incorporates localization and other drivers of disability to propose a clinical manifestation framework that visualizes MS in a clinically individualized way. The topographical model encapsulates 5 factors (localization of relapses and causative lesions; relapse frequency, severity, and recovery; and progression rate), visualized utilizing dynamic 3-dimensional renderings. The central hypothesis is that, like symptom recrudescence in Uhthoff phenomenon and pseudoexacerbations, progression clinically recapitulates prior relapse symptoms and unmasks previously silent lesions, incrementally revealing underlying lesion topography. The model uses real-time simulation software to depict disease course archetypes and illuminate several well-described but poorly reconciled phenomena including the clinical/MRI paradox and prognostic significance of lesion location and burden on disease outcomes. Utilization of this model could allow for earlier and more clinically precise identification of progressive MS and predictive implications can be empirically tested. PMID:27648465
NASA Astrophysics Data System (ADS)
Terando, A. J.; Reich, B. J.; Pacifici, K.
2013-12-01
Fire is an important disturbance process in many coupled natural-human systems. Changes in the frequency and severity of fires due to anthropogenic climate change could have significant costs to society and the plant and animal communities that are adapted to a particular fire regime. Planning for these changes requires a robust model of the relationship between climate and fire that accounts for multiple sources of uncertainty that are present when simulating ecological and climatological processes. Here we model how anthropogenic climate change could affect the wildfire regime for a region in the Southeast US whose natural ecosystems are dependent on frequent, low-intensity fires while humans are at risk from large catastrophic fires. We develop a modeling framework that incorporates three major sources of uncertainty: (1) uncertainty in the ecological drivers of expected monthly area burned, (2) uncertainty in the environmental drivers influencing the probability of an extreme fire event, and (3) structural uncertainty in different downscaled climate models. In addition we use two policy-relevant emission scenarios (climate stabilization and 'business-as-usual') to characterize the uncertainty in future greenhouse gas forcings. We use a Bayesian framework to incorporate different sources of uncertainty, including simulation of predictive errors and Stochastic Search Variable Selection. Our results suggest that although the mean process remains stationary, the probability of extreme fires declines through time, owing to the persistence of high atmospheric moisture content during the peak fire season that dampens the effect of increasing temperatures. Including multiple sources of uncertainty leads to wide prediction intervals, but is potentially more useful for decision-makers who will require adaptation strategies that are robust to rapid but uncertain climate and ecological change.
Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.
Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei
2015-08-01
In this brief, the utilization of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed by incorporating the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.
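The brief's neural-network QP solver is specialized machinery; as a generic illustration of the underlying receding-horizon constrained QP, here is a sketch using cvxpy with an invented double-integrator engagement axis, weights, and acceleration bound.

```python
import cvxpy as cp
import numpy as np

# Minimal receding-horizon QP sketch for one engagement axis (dynamics,
# weights, and bounds are hypothetical, not the paper's model).
dt, N = 0.1, 20
A = np.array([[1, dt], [0, 1]])        # state: [relative position, velocity]
B = np.array([[0.5 * dt**2], [dt]])    # input: commanded acceleration
x0 = np.array([100.0, -30.0])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, cons = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], np.diag([1.0, 0.1])) + 0.01 * cp.square(u[0, k])
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.abs(u[0, k]) <= 30.0]   # lateral-acceleration limit
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()
print(u.value[0, 0])   # only the first move is applied, then the horizon recedes
```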
MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Pan, X; Stayman, J
2014-06-15
Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater; for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a "task-based imaging" approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications.
Learning Objectives:
• Learn the general methodologies associated with model-based 3D image reconstruction.
• Learn the potential advantages in image quality and dose associated with model-based image reconstruction.
• Learn the challenges associated with computational load and image quality assessment for such reconstruction methods.
• Learn how the imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques.
• Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.
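As a generic illustration of model-based reconstruction (not any speaker's specific method), the sketch below solves a tiny penalized weighted least squares problem, minimize ||Ax - y||_W^2 + beta ||Dx||^2, by gradient descent; the forward model, weights, and regularizer are toy stand-ins for a real CT system.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas = 64, 96
A = rng.random((n_meas, n_pix)) / n_pix           # toy forward (system) model
x_true = rng.random(n_pix)
y = A @ x_true + rng.normal(0, 0.001, n_meas)     # noisy measurements
W = np.eye(n_meas)                                # statistical weights
D = np.diff(np.eye(n_pix), axis=0)                # finite-difference roughness
beta, step = 0.1, 0.5

x = np.zeros(n_pix)
for _ in range(500):
    grad = A.T @ W @ (A @ x - y) + beta * (D.T @ D @ x)
    x -= step * grad
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```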
Chen, Yi; Fisher, Kate J.; Lloyd, Mark; Wood, Elizabeth R.; Coppola, Domenico; Siegel, Erin; Shibata, David; Chen, Yian A.; Koomen, John M.
2017-01-01
Quantitative evaluation of protein expression across multiple cancer-related signaling pathways (e.g. Wnt/β-catenin, TGF-β, receptor tyrosine kinases (RTK), MAP kinases, NF-κB, and apoptosis) in tumor tissues may enable the development of a molecular profile for each individual tumor that can aid in the selection of appropriate targeted cancer therapies. Here, we describe the development of a broadly applicable protocol to develop and implement quantitative mass spectrometry assays using cell line models and frozen tissue specimens from colon cancer patients. Cell lines are used to develop peptide-based assays for protein quantification, which are incorporated into a method based on SDS-PAGE protein fractionation, in-gel digestion, and liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM/MS). This analytical platform is then applied to frozen tumor tissues. This protocol can be broadly applied to the study of human disease using multiplexed LC-MRM assays. PMID:28808993
Correlation between diffusion kurtosis and NODDI metrics in neonates and young children
NASA Astrophysics Data System (ADS)
Ahmed, Shaheen; Wang, Zhiyue J.; Chia, Jonathan M.; Rollins, Nancy K.
2016-03-01
Diffusion Tensor Imaging (DTI) uses a single-shell gradient encoding scheme for studying brain tissue diffusion. NODDI (Neurite Orientation Dispersion and Density Imaging) incorporates a gradient scheme with multiple b-values which is used to characterize neurite density and the coherence of neuron fiber orientations. Similarly, diffusion kurtosis imaging (DKI) also uses a multiple-shell scheme to quantify non-Gaussian diffusion but does not assume a tissue model like NODDI. In this study we investigate the connection between metrics derived by NODDI and DKI in children with ages from 46 weeks to 6 years. We correlate the NODDI metrics and kurtosis measures from the same ROIs in multiple brain regions. We compare the range of these metrics between neonates (46 - 47 weeks), infants (2 - 10 months) and young children (2 - 6 years). We find that there exists strong correlation between neurite density vs. mean kurtosis, and orientation dispersion vs. kurtosis fractional anisotropy (FA) in pediatric brain imaging.
Phased-array-fed antenna configuration study, volume 2
NASA Technical Reports Server (NTRS)
Sorbello, R. M.; Zaghloul, A. I.; Lee, B. S.; Siddiqi, S.; Geller, B. D.
1983-01-01
Increased capacity in future satellite systems can be achieved through antenna systems which provide a multiplicity of frequency reuses at Ka-band. A number of antenna configurations which can provide multiple fixed spot beams and multiple independent spot scanning beams at 20 GHz are addressed. Each design incorporates a phased array with distributed MMIC amplifiers and phase shifters feeding a two-reflector optical system. The tradeoffs required for the design of these systems and the corresponding performances are presented. Five final designs are studied. In so doing, a type of MMIC/waveguide transition is described, and measured results of the breadboard model are presented. Other hardware components developed are described. This includes a square orthomode transducer, a subarray fed with a beamforming network to measure scanning performance, and another subarray used to study mutual coupling considerations. Discussions of the advantages and disadvantages of the final design are included.
[Adoption of new technologies by health services: the challenge of analyzing relevant factors].
Trindade, Evelinda
2008-05-01
The exponential increase in the incorporation of health technologies has been considered a key factor in increased expenditures by the health sector. Such decisions involve multiple levels and stakeholders. Decentralization has multiplied the decision-making levels, with numerous difficult choices and limited resources. The interrelationship between stakeholders is complex, in creative systems with multiple determinants and confounders. The current review discusses the interaction between the factors influencing the decisions to incorporate technologies by health services, and proposes a structure for their analysis. The application and intensity of these factors in decision-making and the incorporation of products and programs by health services shapes the installed capacity of local and regional networks and modifies the health system. Empirical observation of decision-making and technology incorporation in Brazilian health services poses an important challenge. The structured recognition and measurement of these variables can assist proactive planning of health services.
Packet communications in satellites with multiple-beam antennas and signal processing
NASA Technical Reports Server (NTRS)
Davies, R.; Chethik, F.; Penick, M.
1980-01-01
A communication satellite with a multiple-beam antenna and onboard signal processing is considered for use in a 'message-switched' data relay system. The signal processor may incorporate demodulation, routing, storage, and remodulation of the data. A system user model is established and key functional elements for the signal processing are identified. With the throughput and delay requirements as the controlled variables, the hardware complexity, operational discipline, occupied bandwidth, and overall user end-to-end cost are estimated for (1) random-access packet switching; and (2) reservation-access packet switching. Other aspects of this network (e.g., the adaptability to channel-switched traffic requirements) are examined. For the given requirements and constraints, the reservation system appears to be the most attractive protocol.
Bayesian rationality in evaluating multiple testimonies: incorporating the role of coherence.
Harris, Adam J L; Hahn, Ulrike
2009-09-01
Routinely in day-to-day life, as well as in formal settings such as the courtroom, people must aggregate information they receive from different sources. One intuitively important but underresearched factor in this context is the degree to which the reports from different sources fit together, that is, their coherence. The authors examine a version of Bayes' theorem that not only includes factors such as prior beliefs and witness reliability, as do other models of information aggregation, but also makes transparent the effect of the coherence of multiple testimonies on the believability of the information. The results suggest that participants are sensitive to all the normatively relevant factors when assessing the believability of a set of witness testimonies. (c) 2009 APA, all rights reserved.
Simic, Vladimir
2016-06-01
As the number of end-of-life vehicles (ELVs) is estimated to increase to 79.3 million units per year by 2020 (e.g., 40 million units were generated in 2010), there is strong motivation to effectively manage this fast-growing waste flow. Intensive work on the management of ELVs is necessary in order to more successfully tackle this important environmental challenge. This paper proposes an interval-parameter chance-constraint programming model for end-of-life vehicle management under rigorous environmental regulations. The proposed model can incorporate various uncertainty information in the modeling process. The complex relationships between different ELV management sub-systems are successfully addressed. Particularly, the formulated model can help identify optimal patterns of procurement from multiple sources of ELV supply, production and inventory planning in multiple vehicle recycling factories, and allocation of sorted material flows to multiple final destinations under rigorous environmental regulations. A case study is conducted in order to demonstrate the potentials and applicability of the proposed model. Various constraint-violation probability levels are examined in detail. Influences of parameter uncertainty on model solutions are thoroughly investigated. Useful solutions for the management of ELVs are obtained under different probabilities of violating system constraints. The formulated model is able to tackle a hard ELV management problem in which uncertainty is pervasive. The presented model has advantages in providing a basis for determining long-term ELV management plans with desired compromises between the economic efficiency of the vehicle recycling system and system-reliability considerations. The results are helpful for supporting the generation and improvement of ELV management plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
Using Genotype Abundance to Improve Phylogenetic Inference
Mesin, Luka; Victora, Gabriel D; Minin, Vladimir N; Matsen, Frederick A
2018-01-01
Modern biological techniques enable very dense genetic sampling of unfolding evolutionary histories, and thus frequently sample some genotypes multiple times. This motivates strategies to incorporate genotype abundance information in phylogenetic inference. In this article, we synthesize a stochastic process model with standard sequence-based phylogenetic optimality, and show that tree estimation is substantially improved by doing so. Our method is validated with extensive simulations and an experimental single-cell lineage tracing study of germinal center B cell receptor affinity maturation. PMID:29474671
Determinants of customer satisfaction with hospitals: a managerial model.
Andaleeb, S S
1998-01-01
States that rapid changes in the environment have exerted significant pressures on hospitals to incorporate patient satisfaction in their strategic stance and quest for market share and long-term viability. This study proposes and tests a five-factor model that explains considerable variation in customer satisfaction with hospitals. These factors include communication with patients, competence of the staff, their demeanour, quality of the facilities, and perceived costs; they also represent strategic concepts that managers can address in their bid to remain competitive. A probability sample was selected and a multiple regression model used to test the hypotheses. The results indicate that all five variables were significant in the model and explained 62 per cent of the variation in the dependent variable. Managerial implications of the proposed model are discussed.
Ocaña-Peinado, Francisco M; Valderrama, Mariano J; Bouzas, Paula R
2013-05-01
The problem of developing a two-week-ahead forecast of atmospheric cypress pollen levels is addressed in this paper by developing a principal component multiple regression model involving several climatic variables. The efficacy of the proposed model is validated by means of an application to real data of Cupressaceae pollen concentration in the city of Granada (southeast of Spain). The model was applied to data from 11 consecutive years (1995-2005), with 2006 being used to validate the forecasts. Based on the work of different authors, factors such as temperature, humidity, hours of sun and wind speed were incorporated in the model. This methodology explains approximately 75-80% of the variability in the airborne Cupressaceae pollen concentration.
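A minimal sketch of principal component regression with scikit-learn, shifting the pollen series to create a two-week-ahead target; the file and column names are hypothetical.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical daily records: climatic predictors and the pollen level to
# forecast two weeks ahead (column names are illustrative).
df = pd.read_csv("granada_pollen.csv")
X = df[["temperature", "humidity", "sun_hours", "wind_speed"]]
y = df["cupressaceae_pollen"].shift(-14).dropna()   # 2-week-ahead target
X = X.iloc[: len(y)]

# Principal component regression: standardize, reduce to orthogonal
# components, then regress the future pollen level on them.
pcr = make_pipeline(StandardScaler(), PCA(n_components=2), LinearRegression())
pcr.fit(X, y)
print(pcr.score(X, y))   # in-sample R^2
```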
Effect of Multiple Delays in an Eco-Epidemiological Model with Strong Allee Effect
NASA Astrophysics Data System (ADS)
Ghosh, Kakali; Biswas, Santanu; Samanta, Sudip; Tiwari, Pankaj Kumar; Alshomrani, Ali Saleh; Chattopadhyay, Joydev
In the present article, we make an attempt to investigate the effect of two time delays, logistic delay and gestation delay, on an eco-epidemiological model. In the proposed model, a strong Allee effect is considered in the growth term of the prey population. We incorporate two time lags and inspect elementary mathematical characteristics of the proposed model such as boundedness, uniform persistence, stability and Hopf-bifurcation for all possible combinations of both delays at the interior equilibrium point of the system. We observe that an increase in gestation delay leads to chaotic solutions through the limit cycle. We also observe that the Allee effect plays a major role in controlling the chaos. We execute several numerical simulations to illustrate the proposed mathematical model and our analytical findings.
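For concreteness, a prey growth term with a strong Allee effect and a logistic delay can be written in a standard form such as the following (an illustrative form, not necessarily the authors' exact system):

\[
\frac{dN}{dt} = r\,N(t)\left(1 - \frac{N(t-\tau_1)}{K}\right)\left(\frac{N(t)}{A} - 1\right) - f\big(N(t)\big)\,P(t),
\qquad 0 < A < K,
\]

where \(A\) is the Allee threshold (growth is negative below it), \(\tau_1\) is the logistic delay, and the gestation delay \(\tau_2\) enters the predator equation through the conversion of the predation term \(f(N)P\).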
Nagahama, Ryoji; Matoba, Tetsuya; Nakano, Kaku; Kim-Mitsuyama, Shokei; Sunagawa, Kenji; Egashira, Kensuke
2012-10-01
Critical limb ischemia is a severe form of peripheral artery disease (PAD) for which neither surgical revascularization nor endovascular therapy nor current medicinal therapy has sufficient therapeutic effects. Peroxisome proliferator activated receptor-γ agonists present angiogenic activity in vitro; however, systemic administration of peroxisome proliferator-activated receptor-γ agonists is hampered by its side effects, including heart failure. Here, we demonstrate that the nanoparticle (NP)-mediated delivery of the peroxisome proliferator activated receptor-γ agonist pioglitazone enhances its therapeutic efficacy on ischemia-induced neovascularization in a murine model. In a nondiabetic murine model of hindlimb ischemia, a single intramuscular injection of pioglitazone-incorporated NP (1 µg/kg) into ischemic muscles significantly improved the blood flow recovery in the ischemic limbs, significantly increasing the number of CD31-positive capillaries and α-smooth muscle actin-positive arterioles. The therapeutic effects of pioglitazone-incorporated NP were diminished by the peroxisome proliferator activated receptor-γ antagonist GW9662 and were not observed in endothelial NO synthase-deficient mice. Pioglitazone-incorporated NP induced endothelial NO synthase phosphorylation, as demonstrated by Western blot analysis, as well as expression of multiple angiogenic growth factors in vivo, including vascular endothelial growth factor-A, vascular endothelial growth factor-B, and fibroblast growth factor-1, as demonstrated by real-time polymerase chain reaction. Intramuscular injection of pioglitazone (1 µg/kg) was ineffective, and oral administration necessitated a >500 μg/kg per day dose to produce therapeutic effects equivalent to those of pioglitazone-incorporated NP. NP-mediated drug delivery is a novel modality that may enhance the effectiveness of therapeutic neovascularization, surpassing the effectiveness of current treatments for peripheral artery disease with critical limb ischemia.
Mohd Yusof, Mohd Yusmiaidil Putera; Cauwels, Rita; Deschepper, Ellen; Martens, Luc
2015-08-01
The third molar development (TMD) has been widely utilized as one of the radiographic methods for dental age estimation. By using the same radiograph of the same individual, third molar eruption (TME) information can be incorporated into the TMD regression model. This study aims to evaluate the performance of dental age estimation in the individual method models and the combined model (TMD and TME) based on the classic regressions of multiple linear and principal component analysis. A sample of 705 digital panoramic radiographs of Malay sub-adults aged between 14.1 and 23.8 years was collected. The techniques described by Gleiser and Hunt (modified by Kohler) and Olze were employed to stage the TMD and TME, respectively. The data were divided to develop three respective models based on the two regressions of multiple linear and principal component analysis. The trained models were then validated on the test sample and the accuracy of age prediction was compared between each model. The coefficient of determination (R²) and root mean square error (RMSE) were calculated. In both genders, adjusted R² yielded an increment in the linear regressions of the combined model as compared with the individual models. An overall decrease in RMSE was detected in the combined model as compared with TMD (0.03-0.06) and TME (0.2-0.8). In principal component regression, the combined model exhibited low adjusted R² and high RMSE, except in males. Dental age estimation is thus better predicted using the combined model in multiple linear regression. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.
Lv, Jie; Havlak, Paul; Putnam, Nicholas H
2011-10-05
Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]"), and is available as open source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to constrain genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
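A toy sketch of the constrained-rearrangement idea in Python. Real DCJ operates on an adjacency-graph representation; here translocations on a simple list-of-chromosomes encoding are rejected whenever they would split a constrained gene set, which captures the spirit of the linkage constraint but is not the paper's implementation.

```python
import random

random.seed(0)
genome = [list(range(0, 50)), list(range(50, 100))]
# gene sets that must stay linked on a single chromosome (invented)
constrained = [set(range(0, 4)), set(range(50, 54))]

def chromosome_sets(g):
    return [set(ch) for ch in g]

def try_translocation(g):
    a, b = random.sample(range(len(g)), 2)
    i, j = random.randrange(len(g[a])), random.randrange(len(g[b]))
    new = [ch[:] for ch in g]
    new[a], new[b] = g[a][:i] + g[b][j:], g[b][:j] + g[a][i:]
    # reject the move if any constrained gene set is split across chromosomes
    for cset in constrained:
        if not any(cset <= chset for chset in chromosome_sets(new)):
            return g, False
    return new, True

accepted = 0
for _ in range(1000):
    genome, ok = try_translocation(genome)
    accepted += ok
print(f"{accepted} of 1000 translocations kept the constrained genes linked")
```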
Lappi, Otto; Mole, Callum
2018-06-11
The authors present an approach to the coordination of eye movements and locomotion in naturalistic steering tasks. It is based on recent empirical research, in particular, on driver eye movements, that poses challenges for existing accounts of how we visually steer a course. They first analyze how the ideas of feedback and feedforward processes and internal models are treated in control theoretical steering models within vision science and engineering, which share an underlying architecture but have historically developed in very separate ways. The authors then show how these traditions can be naturally (re)integrated with each other and with contemporary neuroscience, to better understand the skill and gaze strategies involved. They then propose a conceptual model that (a) gives a unified account of the coordination of gaze and steering control, (b) incorporates higher-level path planning, and (c) draws on the literature on paired forward and inverse models in predictive control. Although each of these (a-c) has been considered before (also in the context of driving), integrating them into a single framework and the authors' multiple waypoint identification hypothesis within that framework are novel. The proposed hypothesis is relevant to all forms of visually guided locomotion. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
A model for the rapid assessment of the impact of aviation noise near airports.
Torija, Antonio J; Self, Rod H; Flindell, Ian H
2017-02-01
This paper introduces a simplified model [Rapid Aviation Noise Evaluator (RANE)] for the calculation of aviation noise within the context of multi-disciplinary strategic environmental assessment, where input data are both limited and constrained by compatibility requirements against other disciplines. RANE relies upon the concept of noise cylinders around defined flight-tracks, with the noise radius determined from publicly available Noise-Power-Distance (NPD) curves rather than the computationally intensive multiple point-to-point grid calculation with subsequent iso-contour interpolation methods adopted in the FAA's Integrated Noise Model (INM) and similar models. Preliminary results indicate that for simple single-runway scenarios, changes in airport noise contour areas can be estimated with minimal uncertainty compared against grid-point calculation methods such as INM. In situations where such outputs are all that is required for preliminary strategic environmental assessment, there are considerable benefits in reduced input data and computation requirements. Further development of the noise-cylinder-based model (such as the incorporation of lateral attenuation, engine installation effects or horizontal track dispersion via the assumption of more complex noise surfaces formed around the flight-track) will allow for more complex assessment to be carried out. RANE is intended to be incorporated into technology evaluators for the noise impact assessment of novel aircraft concepts.
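The noise-cylinder construction can be sketched by interpolating an NPD curve to find the slant distance at which a threshold level is reached; the NPD values below are invented for illustration.

```python
import numpy as np

# Toy Noise-Power-Distance data at a fixed engine power setting (invented).
distances = np.array([200, 400, 800, 1600, 3200, 6400])   # slant distance (m)
levels = np.array([94.0, 88.5, 82.0, 75.0, 67.5, 59.0])   # noise level (dB)

def noise_radius(threshold_db):
    # levels decrease with distance, so interpolate on the reversed arrays
    return float(np.interp(threshold_db, levels[::-1], distances[::-1]))

print(noise_radius(70.0))   # radius of the 70 dB cylinder around the track
```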
NASA Astrophysics Data System (ADS)
Pratama, Cecep; Ito, Takeo; Sasajima, Ryohei; Tabei, Takao; Kimata, Fumiaki; Gunawan, Endra; Ohta, Yusaku; Yamashina, Tadashi; Ismail, Nazli; Nurdin, Irwandi; Sugiyanto, Didik; Muksin, Umar; Meilano, Irwan
2017-10-01
Postseismic motion in the middle-field (100-500 km from the epicenter) geodetic data resulting from the 2012 Indian Ocean earthquake exhibited rapid change during the two months following the rupture. This pattern probably indicates multiple postseismic deformation mechanisms and might have been controlled by transient rheology. Therefore, the relative contributions of transient rheology in the oceanic asthenosphere and afterslip in the oceanic lithosphere should be incorporated to explain short- and long-term transitional features of postseismic signals. In this study, using two years of post-earthquake geodetic data from northern Sumatra, a three-dimensional spherical-earth finite-element model was constructed based on a heterogeneous structure and incorporating transient rheology. A rheology model combined with stress-driven afterslip was estimated. Our best-fit model suggests an oceanic lithosphere thickness of 75 km with oceanic asthenosphere viscosity values of 1 × 10^17 Pa s and 2 × 10^18 Pa s for the Kelvin and Maxwell viscosity models, respectively. The model results indicate that horizontal landward motion and vertical uplift in northern Sumatra require viscoelastic relaxation of the oceanic asthenosphere coupled with afterslip in the lithosphere. The present study demonstrates that transient rheology is essential for reproducing the rapidly changing motion of postseismic deformation in the middle-field area.
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background: Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data is generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias.
Purpose: In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest.
Methods: We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations.
Results: When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. And in all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases.
Limitations: The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different than those considered here need further study.
Conclusions: In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
Impact of Multiple Environmental Stresses on Wetland Vegetation Dynamics
NASA Astrophysics Data System (ADS)
Muneepeerakul, C. P.; Tamea, S.; Muneepeerakul, R.; Miralles-Wilhelm, F. R.; Rinaldo, A.; Rodriguez-Iturbe, I.
2009-12-01
This research quantifies the impacts of climate change on the dynamics of wetland vegetation under the effect of multiple stresses, such as drought, water-logging, shade and nutrients. The effects of these stresses are investigated through a mechanistic model that captures the co-evolving nature of marsh emergent plant species and their resources (water, nitrogen, light, and oxygen). The model explicitly considers the feedback mechanisms between vegetation, light and nitrogen dynamics as well as the specific dynamics of plant leaves, rhizomes, and roots. Each plant species is characterized by three independent traits, namely leaf nitrogen (N) content, specific leaf area, and allometric carbon (C) allocation to rhizome storage, which govern the ability to gain and maintain resources as well as to survive in a particular multi-stressed environment. The modeling of plant growth incorporates C and N into the construction of leaves and roots, whose amount of new biomass is determined by the dynamic plant allocation scheme. Nitrogen is internally recycled between pools of plants, litter, humus, microbes, and mineral N. The N dynamics are modeled using a parallel scheme, with the major modifications being the calculation of the aerobic and anoxic periods and the incorporation of the anaerobic processes. A simple hydrologic model with stochastic rainfall is used to describe the water level dynamics and the soil moisture profile. Soil water balance is evaluated at the daily time scale and includes rainfall, evapotranspiration and lateral flow to/from an external water body, with evapotranspiration loss equal to the potential value, governed by the daily average condition of atmospheric water demand. The resulting feedback dynamics of the coupled plant-soil-microbe system are studied in detail, and species' fitnesses in the 3-D trait space are compared across various rainfall patterns with different means and fluctuations. The model results are then compared with those from experiments and field studies reported in the literature, providing insights about the physiological features that enable plants to thrive in different wetland environments and climate regimes.
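The hydrologic component described here reduces, in its simplest form, to a daily bucket balance driven by stochastic rainfall. The sketch below shows that skeleton only (rainfall in, potential evapotranspiration out, linear lateral exchange with an external water body); all parameter values are invented and none of the vegetation-nutrient coupling is included.

```python
import numpy as np

rng = np.random.default_rng(1)
days = 365
# Marked stochastic rainfall: wet days ~30% of the time, exponential depths
rain = rng.exponential(8.0, days) * rng.binomial(1, 0.3, days)   # mm/day
et_pot = 4.0    # potential evapotranspiration, mm/day (assumed constant)
k_lat = 0.05    # lateral exchange coefficient, 1/day (assumed)
h_ext = 50.0    # external water-body level, mm (assumed)

h = np.empty(days)
h[0] = 60.0     # initial water level, mm
for t in range(1, days):
    lateral = k_lat * (h_ext - h[t - 1])     # flow to/from external water body
    h[t] = max(h[t - 1] + rain[t] - et_pot + lateral, 0.0)

print(f"mean {h.mean():.1f} mm, min {h.min():.1f} mm, max {h.max():.1f} mm")
```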
Competitive advantage for multiple-memory strategies in an artificial market
NASA Astrophysics Data System (ADS)
Mitman, Kurt E.; Choe, Sehyo C.; Johnson, Neil F.
2005-05-01
We consider a simple binary market model containing N competitive agents. The novel feature of our model is that it incorporates the tendency shown by traders to look for patterns in past price movements over multiple time scales, i.e. multiple memory-lengths. In the regime where these memory-lengths are all small, the average winnings per agent exceed those obtained for either (1) a pure population where all agents have equal memory-length, or (2) a mixed population comprising sub-populations of equal-memory agents with each sub-population having a different memory-length. Agents who consistently play strategies of a given memory-length are found to win more on average: switching between strategies with different memory-lengths incurs an effective penalty, while switching between strategies of equal memory does not. Agents employing short-memory strategies can outperform agents using long-memory strategies, even in the regime where an equal-memory system would have favored the use of long-memory strategies. Using the many-body 'Crowd-Anticrowd' theory, we obtain analytic expressions which are in good agreement with the observed numerical results. In the context of financial markets, our results suggest that multiple-memory agents have a better chance of identifying price patterns of unknown length and hence will typically have higher winnings.
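The model family described here is closely related to the minority game. As a rough illustration (not the authors' exact payoff or strategy-switching rules), the sketch below runs a population in which agents hold lookup-table strategies over binary histories of different lengths and score them against the minority outcome; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
N, S, T = 101, 2, 2000                 # agents, strategies per agent, rounds
mems = rng.choice([2, 3, 4], N)        # multiple memory-lengths across agents

# Each strategy is a random lookup table over the 2**m possible histories.
strat = [rng.choice([-1, 1], (S, 2 ** m)) for m in mems]
scores = np.zeros((N, S))              # virtual scores per strategy
history = rng.integers(0, 2, 10).tolist()
wins = np.zeros(N)

for _ in range(T):
    idx = [int("".join(map(str, history[-m:])), 2) for m in mems]
    best = scores.argmax(1)            # each agent plays its best strategy
    acts = np.array([strat[i][best[i], idx[i]] for i in range(N)])
    outcome = 1 if acts.sum() < 0 else 0      # 1 means the +1 side was minority
    win_act = 1 if outcome else -1
    wins += acts == win_act
    for i in range(N):                 # update virtual scores of all strategies
        scores[i] += strat[i][:, idx[i]] == win_act
    history.append(outcome)

print("mean winnings per agent per round:", wins.mean() / T)
```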
Improving Vector Evaluated Particle Swarm Optimisation Using Multiple Nondominated Leaders
Lim, Kian Sheng; Buyamin, Salinda; Ahmad, Anita; Shapiai, Mohd Ibrahim; Naim, Faradila; Mubin, Marizan; Kim, Dong Hwa
2014-01-01
The vector evaluated particle swarm optimisation (VEPSO) algorithm was previously improved by incorporating nondominated solutions for solving multiobjective optimisation problems. However, the obtained solutions neither converged close to the Pareto front nor distributed evenly over it. Therefore, in this study, the concept of multiple nondominated leaders is incorporated to further improve the VEPSO algorithm: multiple nondominated solutions that are best at a respective objective function are used to guide particles in finding optimal solutions. The improved VEPSO is evaluated by the number of nondominated solutions found, generational distance, spread, and hypervolume. The results from the conducted experiments show that the proposed VEPSO significantly improved on the existing VEPSO algorithms. PMID:24883386
GIS-Based Route Finding Using Ant Colony Optimization and Urban Traffic Data from Different Sources
NASA Astrophysics Data System (ADS)
Davoodi, M.; Mesgari, M. S.
2015-12-01
Nowadays, traffic data are obtained from multiple sources, including GPS, Video Vehicle Detectors (VVD), Automatic Number Plate Recognition (ANPR), Floating Car Data (FCD), VANETs, etc. All such data can be used for route finding. This paper proposes a model for finding the optimum route based on the integration of traffic data from different sources. Ant Colony Optimization is applied in this paper because the movement of ants in a network resembles the movement of cars in an urban road network. The results indicate that this model is capable of incorporating data from different sources, which may even be inconsistent.
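For readers unfamiliar with the method, a bare-bones ant colony route search over a toy road graph looks roughly as follows; the cost matrix stands in for fused multi-source travel times, and all tuning constants are arbitrary. This is a generic ACO sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy road network: edge costs fusing travel times from several sources;
# np.inf marks missing edges. Values are illustrative.
C = np.array([[np.inf, 4, 2, np.inf],
              [4, np.inf, 5, 10],
              [2, 5, np.inf, 3],
              [np.inf, 10, 3, np.inf]], float)
n, src, dst = 4, 0, 3
tau = np.ones((n, n))                  # pheromone levels
alpha, beta, rho, ants, iters = 1.0, 2.0, 0.5, 20, 50

best, best_cost = None, np.inf
for _ in range(iters):
    paths = []
    for _ in range(ants):
        node, path, cost = src, [src], 0.0
        while node != dst and len(path) <= n:
            cand = [j for j in range(n)
                    if np.isfinite(C[node, j]) and j not in path]
            if not cand:
                break                  # dead end; discard this ant
            w = np.array([tau[node, j] ** alpha * (1 / C[node, j]) ** beta
                          for j in cand])
            nxt = rng.choice(cand, p=w / w.sum())
            cost += C[node, nxt]; path.append(nxt); node = nxt
        if node == dst:
            paths.append((path, cost))
            if cost < best_cost:
                best, best_cost = path, cost
    tau *= (1 - rho)                   # pheromone evaporation
    for path, cost in paths:           # deposit pheromone along used edges
        for a, b in zip(path, path[1:]):
            tau[a, b] += 1.0 / cost

print("best route:", best, "cost:", best_cost)
```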
An integrative psychotherapist's account of his focus when treating self-critical patients.
Shahar, Golan
2013-09-01
This article presents the factors on which I focus as an integrative psychotherapist when treating self-critical patients. I first describe my personal version of psychotherapy integration. Drawing principally from Wachtel's cyclical psychodynamic model, I also incorporate existential and neurocognitive elements highlighting patients' future-oriented thinking and goal-directed action. I then relate this integrative model to the vexing clinical problem of self-criticism. Finally, I outline three types of interventions I attempt to implement in each session: (1) Multiple-Selves Analysis (MSA); (2) Behavioral Activation (BA), conceptualized integratively; and (3) use of therapist's presence.
The Impact of Prophage on the Equilibria and Stability of Phage and Host
NASA Astrophysics Data System (ADS)
Yu, Pei; Nadeem, Alina; Wahl, Lindi M.
2017-06-01
In this paper, we present a bacteriophage model that includes prophage, that is, phage genomes that are incorporated into the host cell genome. The general model is described by an 18-dimensional system of ordinary differential equations. This study focuses on asymptotic behaviour of the model, and thus the system is reduced to a simple six-dimensional model, involving uninfected host cells, infected host cells and phage. We use dynamical system theory to explore the dynamic behaviour of the model, studying in particular the impact of prophage on the equilibria and stability of phage and host. We employ bifurcation and stability theory, centre manifold and normal form theory to show that the system has multiple equilibrium solutions which undergo a series of bifurcations, finally leading to oscillating motions. Numerical simulations are presented to illustrate and confirm the analytical predictions. The results of this study indicate that in some parameter regimes, the host cell population may drive the phage to extinction through diversification, that is, if multiple types of host emerge; this prediction holds even if the phage population is likewise diverse. This parameter regime is restricted, however, if infecting phage are able to recombine with prophage sequences in the host cell genome.
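The full 18-dimensional system is beyond a snippet, but the qualitative structure (susceptible hosts, lysogens carrying prophage, and free phage) can be caricatured in three ODEs. The sketch below is a generic lysis-lysogeny toy model, not the authors' reduced six-dimensional system, and every rate constant is invented.

```python
from scipy.integrate import solve_ivp

# Illustrative parameters only, not taken from the paper.
r, K = 1.0, 1e7          # host growth rate, carrying capacity
phi = 1e-8               # adsorption rate
p_lys = 0.1              # probability an infection becomes a lysogen (prophage)
burst, delta = 50.0, 0.5 # burst size, free-phage decay rate
induc = 0.01             # prophage induction rate

def rhs(t, y):
    s, l, v = y          # susceptible hosts, lysogens, free phage
    inf = phi * s * v
    ds = r * s * (1 - (s + l) / K) - inf
    dl = r * l * (1 - (s + l) / K) + p_lys * inf - induc * l
    dv = burst * ((1 - p_lys) * inf + induc * l) - delta * v - inf
    return [ds, dl, dv]

sol = solve_ivp(rhs, (0, 200), [1e6, 0.0, 1e4], rtol=1e-8)
print(sol.y[:, -1])      # long-time state of hosts, lysogens, phage
```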
A Privacy Preservation Model for Health-Related Social Networking Sites.
Li, Jingquan
2015-07-08
The increasing use of social networking sites (SNS) in health care has resulted in a growing number of individuals posting personal health information online. These sites may disclose users' health information to many different individuals and organizations and mine it for a variety of commercial and research purposes, yet the revelation of personal health information to unauthorized individuals or entities brings a concomitant concern of greater risk for loss of privacy among users. Many users join multiple social networks for different purposes and enter personal and other specific information covering social, professional, and health domains into other websites. Integration of multiple online and real social networks makes the users vulnerable to unintentional and intentional security threats and misuse. This paper analyzes the privacy and security characteristics of leading health-related SNS. It presents a threat model and identifies the most important threats to users and SNS providers. Building on threat analysis and modeling, this paper presents a privacy preservation model that incorporates individual self-protection and privacy-by-design approaches and uses the model to develop principles and countermeasures to protect user privacy. This study paves the way for analysis and design of privacy-preserving mechanisms on health-related SNS. PMID:26155953
Rolland-Lagan, Anne-Gaëlle; Paquette, Mathieu; Tweedle, Valerie; Akimenko, Marie-Andrée
2012-03-01
The fact that some organisms are able to regenerate organs of the correct shape and size following amputation is particularly fascinating, but the mechanism by which this occurs remains poorly understood. The zebrafish (Danio rerio) caudal fin has emerged as a model system for the study of bone development and regeneration. The fin comprises 16 to 18 bony rays, each containing multiple joints along its proximodistal axis that give rise to segments. Experimental observations on fin ray growth, regeneration and joint formation have been described, but no unified theory has yet been put forward to explain how growth and joint patterns are controlled. We present a model for the control of fin ray growth during development and regeneration, integrated with a model for joint pattern formation, which is in agreement with published, as well as new, experimental data. We propose that fin ray growth and joint patterning are coordinated through the interaction of three morphogens. When the model is extended to incorporate multiple rays across the fin, it also accounts for how the caudal fin acquires its shape during development, and regains its correct size and shape following amputation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, Brian T.; Scherzinger, William M.
2017-01-19
Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
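To make the trust-region idea concrete, here is a one-dimensional return-mapping sketch in Python that solves the plastic consistency condition with SciPy's trust-region reflective solver instead of plain Newton-Raphson. The paper works with multi-axial Hosford surfaces; this scalar, power-law-hardening toy problem only illustrates the solver substitution, and every material constant is invented.

```python
import numpy as np
from scipy.optimize import least_squares

# One-dimensional elastic-plastic return mapping with power-law hardening,
# solved with a trust-region method ('trf') rather than plain Newton.
E, sig_y, H, m = 200e3, 250.0, 1e3, 0.3   # MPa units, illustrative
eps_trial = 0.01                          # trial total strain increment
sig_trial = E * eps_trial                 # elastic trial stress

def residual(x):
    dgamma = x[0]                         # plastic multiplier increment
    sig = sig_trial - E * dgamma          # stress after plastic correction
    hardened = sig_y + H * dgamma ** m if dgamma > 0 else sig_y
    return [sig - hardened]               # consistency condition f = 0

sol = least_squares(residual, x0=[1e-6], bounds=([0.0], [np.inf]), method="trf")
dgamma = sol.x[0]
print("plastic increment:", dgamma, "updated stress:", sig_trial - E * dgamma)
```

The bound constraint (dgamma >= 0) comes along for free with the trust-region solver, which is one practical attraction of this class of methods over an unconstrained Newton iteration.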
Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William
2014-01-01
Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853
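The additive-plus-dominance design matrix at the heart of this approach is easy to construct: an allele-count column and a heterozygote-indicator column per SNP. The sketch below uses plain cross-validated LASSO from scikit-learn as a simplified stand-in for the paper's group LASSO with resampling; all data are simulated.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n, p = 500, 50
geno = rng.integers(0, 3, (n, p))        # genotypes coded 0/1/2 minor alleles

A = geno.astype(float)                   # additive coding: allele count
D = (geno == 1).astype(float)            # dominance coding: heterozygote flag
X = np.hstack([A, D])

# Simulated phenotype with one additive and one dominance signal.
y = 0.8 * A[:, 3] + 1.0 * D[:, 7] + rng.normal(0, 1, n)

fit = LassoCV(cv=5).fit(X, y)
coef_a, coef_d = fit.coef_[:p], fit.coef_[p:]
print("top additive SNP:", np.abs(coef_a).argmax(),
      "top dominance SNP:", np.abs(coef_d).argmax())
```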
A Hierarchical Analysis of Tree Growth and Environmental Drivers Across Eastern US Temperate Forests
NASA Astrophysics Data System (ADS)
Mantooth, J.; Dietze, M.
2014-12-01
Improving predictions of how forests in the eastern United States will respond to future global change requires a better understanding of the drivers of variability in tree growth rates. Current inventory data lack the temporal resolution to characterize interannual variability, while existing growth records lack the extent required to assess spatial scales of variability. Therefore, we established a network of forest inventory plots at ten sites across the eastern US and measured growth in adult trees using increment cores. Sites were chosen to maximize the climate space explored, while within sites, plots were spread across primary environmental gradients to explore landscape-level variability in growth. Using the annual growth record available from tree cores, we explored the responses of trees to multiple environmental covariates over multiple spatial and temporal scales. We hypothesized that growth rates vary among species within and across sites, and that intraspecific growth rates increase with temperature along a species' range. We also hypothesized that trees show synchrony in growth responses to landscape-scale climatic changes. Initial analyses of growth increments indicate that across sites, trees with intermediate shade tolerance, e.g. Red Oak (Quercus rubra), tend to have the highest growth rates. At the site level, there is evidence for synchrony in response to large-scale climatic events (e.g. prolonged drought and above-average temperatures); however, growth responses to climate at the landscape scale have yet to be detected. Our current analysis utilizes hierarchical Bayesian state-space modeling to focus on growth responses of adult trees to environmental covariates at multiple spatial and temporal scales. This predictive model of tree growth currently incorporates observed effects at the individual, plot, site, and landscape scale. Current analysis using this model shows a potential slowing of growth in the past decade for two sites in the northeastern US (Harvard Forest and Bartlett Experimental Forest); however, more work is required to determine the robustness of this trend. Finally, these observations are being incorporated into ecosystem models using the Brown Dog informatics tools and the Predictive Ecosystem Analyzer (PEcAn) data assimilation workflow.
Cost-Effectiveness of POC Coagulation Testing Using Multiple Electrode Aggregometry.
Straub, Niels; Bauer, Ekaterina; Agarwal, Seema; Meybohm, Patrick; Zacharowski, Kai; Hanke, Alexander A; Weber, Christian F
2016-01-01
The economic effects of Point-of-Care (POC) coagulation testing including Multiple Electrode Aggregometry (MEA) with the Multiplate device have not been examined. A health economic model with associated clinical endpoints was developed to calculate the effectiveness and estimated costs of coagulation analyses based on standard laboratory testing (SLT) or POC testing, the latter offering the possibility of assessing platelet dysfunction using aggregometric measures. Cost estimates included pre- and perioperative costs of hemotherapy, intra- and post-operative coagulation testing costs, and hospitalization costs, including the costs of transfusion-related complications. Our model calculation, using a simulated true-to-life cohort of 10,000 cardiac surgery patients assigned to each testing alternative, demonstrated that there were 950 fewer patients in the POC branch who required any transfusion of red blood cells. The subsequent numbers of massive transfusions and patients with transfusion-related complications were reduced with POC testing by 284 and 126, respectively. The average expected total cost in the POC branch was 288 Euro lower per treated patient than in the SLT branch. Incorporating aggregometric analyses using MEA into hemotherapy algorithms improved medical outcomes in cardiac surgery patients in the presented health economic model. There was an overall better economic outcome associated with POC testing compared with SLT testing despite the higher costs of testing.
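The skeleton of such a decision-analytic comparison is a pair of expected-cost calculations, one per testing branch. The sketch below shows the arithmetic shape only; every probability and unit cost is an invented placeholder, not a value from the published model.

```python
# Expected cost per patient in one testing branch: test cost, baseline stay,
# plus probability-weighted transfusion and complication costs.
def expected_cost(p_transfusion, p_complication, c_test,
                  c_transfusion, c_complication, c_stay):
    return (c_test + c_stay
            + p_transfusion * c_transfusion
            + p_complication * c_complication)

# Illustrative numbers: POC has a dearer test but fewer transfusions.
slt = expected_cost(0.60, 0.050, 50.0, 900.0, 8000.0, 6000.0)
poc = expected_cost(0.51, 0.037, 150.0, 900.0, 8000.0, 6000.0)
print("SLT:", slt, "POC:", poc, "saving per patient:", slt - poc)
```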
Intelligence-aided multitarget tracking for urban operations - a case study: counter terrorism
NASA Astrophysics Data System (ADS)
Sathyan, T.; Bharadwaj, K.; Sinha, A.; Kirubarajan, T.
2006-05-01
In this paper, we present a framework for tracking multiple mobile targets in an urban environment based on data from multiple sources of information, and for evaluating the threat these targets pose to assets of interest (AOI). The motivating scenario is one where we have to track many targets, each with different (unknown) destinations and/or intents. The tracking algorithm is aided by information about the urban environment (e.g., road maps, buildings, hideouts), and strategic and intelligence data. The tracking algorithm needs to be dynamic in that it has to handle a time-varying number of targets and the ever-changing urban environment depending on the locations of the moving objects and AOI. Our solution uses the variable structure interacting multiple model (VS-IMM) estimator, which has been shown to be effective in tracking targets based on road map information. Intelligence information is represented as target class information and incorporated through a combined likelihood calculation within the VS-IMM estimator. In addition, we develop a model to calculate the probability that a particular target can attack a given AOI. This model for the calculation of the probability of attack is based on the target kinematic and class information. Simulation results are presented to demonstrate the operation of the proposed framework on a representative scenario.
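The combined likelihood can be sketched as the product of a Gaussian kinematic likelihood of the filter innovation and a class-match probability supplied by intelligence reports. A toy version with invented numbers follows; the actual VS-IMM machinery (mode sets varying with the road network) is not reproduced here.

```python
import numpy as np

# Combined likelihood for data association: a Gaussian kinematic term from
# the filter innovation, multiplied by a class-match probability.
def combined_likelihood(innovation, S, p_class_match):
    d2 = innovation @ np.linalg.solve(S, innovation)   # squared Mahalanobis
    kin = np.exp(-0.5 * d2) / np.sqrt(np.linalg.det(2 * np.pi * S))
    return kin * p_class_match

S = np.diag([25.0, 25.0])       # innovation covariance (m^2), illustrative
nu = np.array([3.0, -2.0])      # measurement minus prediction (m)
print(combined_likelihood(nu, S, p_class_match=0.8))
```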
NASA Astrophysics Data System (ADS)
Ma, Chuang; Bao, Zhong-Kui; Zhang, Hai-Feng
2017-10-01
So far, many network-structure-based link prediction methods have been proposed. However, these methods each highlight only one or two structural features of a network and are then applied to predict missing links in arbitrary networks. Their performance is therefore not always satisfactory, since each network has its own underlying structural features; indeed, even within a single network, local structural features can differ markedly. More structural features should therefore be considered, but because structural features differ so much between networks, the contribution of each feature is hard to fix in advance. Inspired by these facts, an adaptive fusion model for link prediction is proposed to incorporate multiple structural features. In the model, a logistic function combining multiple structural features is defined, and the weight of each feature in the logistic function is adaptively determined by exploiting the known structure information. Last, we use the "learnt" logistic function to predict the connection probabilities of missing links. According to our experimental results, the performance of our adaptive fusion model is better than that of many similarity indices.
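A minimal version of this recipe, using networkx and scikit-learn, computes several standard structural features (common neighbors, Jaccard coefficient, preferential attachment), learns their weights with logistic regression on the known structure, and scores non-edges. A proper evaluation would hold out a set of edges; this sketch skips that step and uses a built-in toy graph.

```python
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()                  # stand-in for a real network
pairs = [(u, v) for u in G for v in G if u < v]

# Three classical structural features per node pair.
cn = np.array([len(list(nx.common_neighbors(G, u, v))) for u, v in pairs])
jac = np.array([j for _, _, j in nx.jaccard_coefficient(G, pairs)])
pa = np.array([p for _, _, p in nx.preferential_attachment(G, pairs)])
X = np.column_stack([cn, jac, pa])
y = np.array([G.has_edge(u, v) for u, v in pairs], int)

# Learn feature weights from the known structure, then score non-edges.
clf = LogisticRegression(max_iter=1000).fit(X, y)
scores = clf.predict_proba(X)[:, 1]
top = [(pairs[i], round(scores[i], 3))
       for i in np.argsort(-scores) if y[i] == 0][:5]
print("learnt weights:", clf.coef_[0])
print("top predicted missing links:", top)
```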
NASA Astrophysics Data System (ADS)
Parker, L. N.; Zank, G. P.
2013-12-01
Successful forecasting of energetic particle events in space weather models requires algorithms for correctly predicting the spectrum of ions accelerated from a background population of charged particles. We present preliminary results from a model that diffusively accelerates particles at multiple shocks. Our basic approach is related to box models (Protheroe and Stanev, 1998; Moraal and Axford, 1983; Ball and Kirk, 1992; Drury et al., 1999) in which a distribution of particles is diffusively accelerated inside the box while simultaneously experiencing decompression through adiabatic expansion and losses from the convection and diffusion of particles outside the box (Melrose and Pope, 1993; Zank et al., 2000). We adiabatically decompress the accelerated particle distribution between each shock by either the method explored in Melrose and Pope (1993) and Pope and Melrose (1994) or by the approach set forth in Zank et al. (2000), where we solve the transport equation by a method analogous to operator splitting. The second method incorporates the additional loss terms of convection and diffusion and allows for the use of a variable time between shocks. We use a maximum injection energy (Emax) appropriate for quasi-parallel and quasi-perpendicular shocks (Zank et al., 2000, 2006; Dosch and Shalchi, 2010) and provide a preliminary application of the diffusive acceleration of particles by multiple shocks with frequencies appropriate for solar maximum (i.e., a non-Markovian process).
Genetic transformation in citrus: Thinking outside the box
USDA-ARS?s Scientific Manuscript database
Conventional breeding methods to incorporate resistance into citrus are very slow, owing to the extended juvenility of seedling trees and the multiple generations needed to incorporate resistance from distant relatives. Use of transgenic methods may provide disease resistance in less time. Published protocols...
NASA Astrophysics Data System (ADS)
Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.
2017-12-01
Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are used and documented in hundreds of peer-reviewed publications each year, and likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus improved data and enhanced assumptions, on model outcomes and thus, ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.
A Bayesian network modeling approach to forecasting the 21st century worldwide status of polar bears
NASA Astrophysics Data System (ADS)
Amstrup, Steven C.; Marcot, Bruce G.; Douglas, David C.
To inform the U.S. Fish and Wildlife Service decision, whether or not to list polar bears as threatened under the Endangered Species Act (ESA), we projected the status of the world's polar bears (Ursus maritimus) for decades centered on future years 2025, 2050, 2075, and 2095. We defined four ecoregions based on current and projected sea ice conditions: seasonal ice, Canadian Archipelago, polar basin divergent, and polar basin convergent ecoregions. We incorporated general circulation model projections of future sea ice into a Bayesian network (BN) model structured around the factors considered in ESA decisions. This first-generation BN model combined empirical data, interpretations of data, and professional judgments of one polar bear expert into a probabilistic framework that identifies causal links between environmental stressors and polar bear responses. We provide guidance regarding steps necessary to refine the model, including adding inputs from other experts. The BN model projected extirpation of polar bears from the seasonal ice and polar basin divergent ecoregions, where ≈2/3 of the world's polar bears currently occur, by mid century. Projections were less dire in other ecoregions. Decline in ice habitat was the overriding factor driving the model outcomes. Although this is a first-generation model, the dependence of polar bears on sea ice is universally accepted, and the observed sea ice decline is faster than models suggest. Therefore, incorporating judgments of multiple experts in a final model is not expected to fundamentally alter the outlook for polar bears described here.
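Structurally, a BN of this kind chains conditional probability tables from stressors to responses. The toy below compresses the model to a sea ice -> habitat -> outcome chain with invented probabilities, just to show how a marginal outcome probability falls out of the tables; the actual model has many more nodes and expert-elicited entries.

```python
# Invented conditional probability tables for a three-node chain.
p_ice = {"declining": 0.8, "stable": 0.2}
p_habitat = {"declining": {"poor": 0.9, "adequate": 0.1},
             "stable":    {"poor": 0.2, "adequate": 0.8}}
p_outcome = {"poor":     {"extirpated": 0.7, "persists": 0.3},
             "adequate": {"extirpated": 0.1, "persists": 0.9}}

# Marginal probability of extirpation: sum over the hidden states.
p_ext = sum(p_ice[i] * p_habitat[i][h] * p_outcome[h]["extirpated"]
            for i in p_ice for h in ("poor", "adequate"))
print("P(extirpated) =", p_ext)
```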
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faulds, James E.; Hinz, Nicholas H.; Coolbaugh, Mark F.
We have undertaken an integrated geologic, geochemical, and geophysical study of a broad 240-km-wide, 400-km-long transect stretching from west-central to eastern Nevada in the Great Basin region of the western USA. The main goal of this study is to produce a comprehensive geothermal potential map that incorporates up to 11 parameters and identifies geothermal play fairways that represent potential blind or hidden geothermal systems. Our new geothermal potential map incorporates: 1) heat flow; 2) geochemistry from springs and wells; 3) structural setting; 4) recency of faulting; 5) slip rates on Quaternary faults; 6) regional strain rate; 7) slip and dilation tendency on Quaternary faults; 8) seismologic data; 9) gravity data; 10) magnetotelluric data (where available); and 11) seismic reflection data (primarily from the Carson Sink and Steptoe basins). The transect is respectively anchored on its western and eastern ends by regional 3D modeling of the Carson Sink and Steptoe basins, which will provide more detailed geothermal potential maps of these two promising areas. To date, geological, geochemical, and geophysical data sets have been assembled into an ArcGIS platform and combined into a preliminary predictive geothermal play fairway model using various statistical techniques. The fairway model consists of the following components, each of which are represented in grid-cell format in ArcGIS and combined using specified weights and mathematical operators: 1) structural component of permeability; 2) regional-scale component of permeability; 3) combined permeability, and 4) heat source model. The preliminary model demonstrates that the multiple data sets can be successfully combined into a comprehensive favorability map. An initial evaluation using known geothermal systems as benchmarks to test interpretations indicates that the preliminary modeling has done a good job assigning relative ranks of geothermal potential. However, a major challenge is defining logical relative rankings of each parameter and how best to combine the multiple data sets into the geothermal potential/ permeability map. Ongoing feedback and data analysis are in use to revise the grouping and weighting of some parameters in order to develop a more robust, optimized, final model. The final product will incorporate more parameters into a geothermal potential map than any previous effort in the region and may serve as a prototype to develop comprehensive geothermal potential maps for other regions.
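Mechanically, a play-fairway map of this kind is a weighted overlay of co-registered grids. The sketch below combines three stand-in layers with assumed weights and bins the result into favorability classes; real workflows would rescale each input and tune the weights against benchmark systems, as described above.

```python
import numpy as np

rng = np.random.default_rng(5)
shape = (100, 100)                     # grid cells over the study transect
# Stand-ins for three gridded inputs, each already rescaled to [0, 1].
heat_flow   = rng.random(shape)
fault_perm  = rng.random(shape)        # structural component of permeability
strain_rate = rng.random(shape)

weights = {"heat": 0.4, "fault": 0.4, "strain": 0.2}   # assumed weights
favorability = (weights["heat"] * heat_flow
                + weights["fault"] * fault_perm
                + weights["strain"] * strain_rate)
# Rank cells into play-fairway classes by favorability quantile.
classes = np.digitize(favorability, np.quantile(favorability, [0.5, 0.8, 0.95]))
print("cells in highest class:", (classes == 3).sum())
```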
Animal and in silico models for the study of sarcomeric cardiomyopathies
Duncker, Dirk J.; Bakkers, Jeroen; Brundel, Bianca J.; Robbins, Jeff; Tardiff, Jil C.; Carrier, Lucie
2015-01-01
Over the past decade, our understanding of cardiomyopathies has improved dramatically, due to improvements in screening and detection of gene defects in the human genome as well as a variety of novel animal models (mouse, zebrafish, and drosophila) and in silico computational models. These novel experimental tools have created a platform that is highly complementary to the naturally occurring cardiomyopathies in cats and dogs that had been available for some time. A fully integrative approach, which incorporates all these modalities, is likely required for significant steps forward in understanding the molecular underpinnings and pathogenesis of cardiomyopathies. Finally, novel technologies, including CRISPR/Cas9, which have already been proved to work in zebrafish, are currently being employed to engineer sarcomeric cardiomyopathy in larger animals, including pigs and non-human primates. In the mouse, the increased speed with which these techniques can be employed to engineer precise ‘knock-in’ models that previously took years to make via multiple rounds of homologous recombination-based gene targeting promises multiple and precise models of human cardiac disease for future study. Such novel genetically engineered animal models recapitulating human sarcomeric protein defects will help bridging the gap to translate therapeutic targets from small animal and in silico models to the human patient with sarcomeric cardiomyopathy. PMID:25600962
Behavioural change models for infectious disease transmission: a systematic review (2010–2015)
2016-01-01
We review behavioural change models (BCMs) for infectious disease transmission in humans. Following the Cochrane collaboration guidelines and the PRISMA statement, our systematic search and selection yielded 178 papers covering the period 2010–2015. We observe an increasing trend in published BCMs, frequently coupled to (re)emergence events, and propose a categorization by distinguishing how information translates into preventive actions. Behaviour is usually captured by introducing information as a dynamic parameter (76/178) or by introducing an economic objective function, either with (26/178) or without (37/178) imitation. Approaches using information thresholds (29/178) and exogenous behaviour formation (16/178) are also popular. We further classify according to disease, prevention measure, transmission model (with 81/178 population, 6/178 metapopulation and 91/178 individual-level models) and the way prevention impacts transmission. We highlight the minority (15%) of studies that use any real-life data for parametrization or validation and note that BCMs increasingly use social media data and generally incorporate multiple sources of information (16/178), multiple types of information (17/178) or both (9/178). We conclude that individual-level models are increasingly used and useful to model behaviour changes. Despite recent advancements, we remain concerned that most models are purely theoretical and lack representative data and a validation process. PMID:28003528
NASA Astrophysics Data System (ADS)
Givens, J.; Padowski, J.; Malek, K.; Guzman, C.; Boll, J.; Adam, J. C.; Witinok-Huber, R.
2017-12-01
In the face of climate change and multi-scalar governance objectives, achieving resilience of food-energy-water (FEW) systems requires interdisciplinary approaches. Through coordinated modeling and management efforts, we study "Innovations in the Food-Energy-Water Nexus (INFEWS)" via a case study in the Columbia River Basin. Previous research on FEW system management and resilience includes some attention to social dynamics (e.g., economic, governance); however, more research is needed to better address social science perspectives. Decisions ultimately taken in this river basin would occur among stakeholders encompassing various institutional power structures, including multiple U.S. states, tribal lands, and sovereign nations. The social science lens draws attention to the incompatibility between the engineering definition of resilience (i.e., return to equilibrium or a singular stable state) and ecological and social system realities, which are more explicit in the ecological interpretation of resilience (i.e., the ability of a system to move into a different, possibly more resilient state). Social science perspectives include, but are not limited to, differing views on resilience as normative, system persistence versus transformation, and system boundary issues. To expand understanding of resilience and objectives for complex and dynamic systems, concepts related to inequality, heterogeneity, power, agency, trust, values, culture, history, conflict, and system feedbacks must be more tightly integrated into FEW research. We identify gaps in knowledge and data, and the value and complexity of incorporating social components and processes into systems models. We posit that socio-biophysical system resilience modeling would address important complex, dynamic social relationships, including non-linear dynamics of social interactions, to offer an improved understanding of sustainable management in FEW systems. The conceptual modeling presented in our study represents a starting point for a continued research agenda that incorporates social dynamics into FEW system resilience and management.
Banda, Kalyan; Gregg, Christopher J; Chow, Renee; Varki, Nissi M; Varki, Ajit
2012-08-17
Although N-acetyl groups are common in nature, N-glycolyl groups are rare. Mammals express two major sialic acids, N-acetylneuraminic acid and N-glycolylneuraminic acid (Neu5Gc). Although humans cannot produce Neu5Gc, it is detected in the epithelial lining of hollow organs, endothelial lining of the vasculature, fetal tissues, and carcinomas. This unexpected expression is hypothesized to result via metabolic incorporation of Neu5Gc from mammalian foods. This accumulation has relevance for diseases associated with such nutrients, via interaction with Neu5Gc-specific antibodies. Little is known about how ingested sialic acids in general and Neu5Gc in particular are metabolized in the gastrointestinal tract. We studied the gastrointestinal and systemic fate of Neu5Gc-containing glycoproteins (Neu5Gc-glycoproteins) or free Neu5Gc in the Neu5Gc-free Cmah(-/-) mouse model. Ingested free Neu5Gc showed rapid absorption into the circulation and urinary excretion. In contrast, ingestion of Neu5Gc-glycoproteins led to Neu5Gc incorporation into the small intestinal wall, appearance in circulation at a steady-state level for several hours, and metabolic incorporation into multiple peripheral tissue glycoproteins and glycolipids, thus conclusively proving that Neu5Gc can be metabolically incorporated from food. Feeding Neu5Gc-glycoproteins but not free Neu5Gc mimics the human condition, causing tissue incorporation into human-like sites in Cmah(-/-) fetal and adult tissues, as well as developing tumors. Thus, glycoproteins containing glycosidically linked Neu5Gc are the likely dietary source for human tissue accumulation, and not the free monosaccharide. This human-like model can be used to elucidate specific mechanisms of Neu5Gc delivery from the gut to tissues, as well as general mechanisms of metabolism of ingested sialic acids.
A Clinical Decision Support System for Breast Cancer Patients
NASA Astrophysics Data System (ADS)
Fernandes, Ana S.; Alves, Pedro; Jarman, Ian H.; Etchells, Terence A.; Fonseca, José M.; Lisboa, Paulo J. G.
This paper proposes a Web-based clinical decision support system for clinical oncologists and breast cancer patients making prognostic assessments, using the particular characteristics of the individual patient. The system comprises three different prognostic modelling methodologies: the clinically widely used Nottingham Prognostic Index (NPI); Cox regression modelling; and a partial logistic artificial neural network with automatic relevance determination (PLANN-ARD). All three models yield a different prognostic index that can be analysed together in order to obtain a more accurate prognostic assessment of the patient. Missing values, a common issue in medical data, are handled in all three models using multiple imputation techniques. Risk group assignments are also provided through a methodology based on regression trees, where Boolean rules can be obtained expressed in terms of patient characteristics.
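Of the three indices, the NPI is simple enough to state inline: it combines tumour size, lymph-node stage, and histological grade. A minimal calculator follows; the banding cut-points shown are the commonly quoted ones and should be checked against current clinical guidance before any real use.

```python
def nottingham_prognostic_index(size_cm: float, node_stage: int, grade: int) -> float:
    """NPI = 0.2 x tumour size (cm) + lymph-node stage (1-3) + grade (1-3)."""
    return 0.2 * size_cm + node_stage + grade

npi = nottingham_prognostic_index(size_cm=2.5, node_stage=1, grade=2)
# Commonly quoted bands: <= 3.4 good, 3.4-5.4 moderate, > 5.4 poor prognosis.
band = "good" if npi <= 3.4 else "moderate" if npi <= 5.4 else "poor"
print(npi, band)   # 3.5 -> moderate
```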
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne
Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
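In weighted least-squares terms, the idea is to fold the station-correction variance into the data weights alongside the measurement variance, so arrivals from stations with poorly determined corrections carry less influence. A toy Python sketch (all numbers invented) of a single linearized location update:

```python
import numpy as np

# G maps a location perturbation to travel-time residuals; residuals r are
# what remains after applying station corrections. Values are illustrative.
G = np.array([[0.8, 0.2], [0.5, 0.6], [0.1, 0.9], [0.7, 0.7]])
r = np.array([0.12, -0.05, 0.30, 0.08])          # residuals (s)
var_meas = np.full(4, 0.01)                      # pick variance (s^2)
var_corr = np.array([0.002, 0.05, 0.001, 0.02])  # station-correction variance

W = np.diag(1.0 / (var_meas + var_corr))         # inverse total variance
dx = np.linalg.solve(G.T @ W @ G, G.T @ W @ r)   # weighted LS update
print("location update:", dx)
```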
Liquid rocket combustor computer code development
NASA Technical Reports Server (NTRS)
Liang, P. Y.
1985-01-01
The Advanced Rocket Injector/Combustor Code (ARICC), developed to model the complete chemical/fluid/thermal processes occurring inside rocket combustion chambers, is highlighted. The code, derived from the CONCHAS-SPRAY code originally developed at Los Alamos National Laboratory, incorporates powerful features such as the ability to model complex injector combustion chamber geometries, Lagrangian tracking of droplets, full chemical equilibrium and kinetic reactions for multiple species, a fractional volume of fluid (VOF) description of liquid jet injection in addition to the gaseous phase fluid dynamics, and turbulent mass, energy, and momentum transport. Atomization and droplet dynamic models from earlier generation codes are transplanted into the present code. Currently, ARICC is specialized for liquid oxygen/hydrogen propellants, although other fuel/oxidizer pairs can be easily substituted.
Investigation of passive atmospheric sounding using millimeter and submillimeter wavelength channels
NASA Technical Reports Server (NTRS)
Gasiewski, Albin J.
1993-01-01
Presented in this study are the results of controlled partially polarimetric measurements of thermal emission at 91.65 GHz from a striated water surface, as corroborated by a geometrical optics (GO) radiative model. The measurements were obtained outdoors using a precision polarimetric radiometer which directly measured the first three modified Stokes parameters. Significant variations in these parameters as a function of azimuthal water wave angle were found, with peak-to-peak variations in T(sub u) of up to approximately 10 K. The measurements are well corroborated by the GO model over a range of observation angles from near nadir up to approximately 65 degrees from nadir. The model incorporates both multiple scattering and a realistic downwelling background brightness field.
Best Practices of Multiple-Time SmartWay Award Winners
This EPA presentation focuses on multiple-time SmartWay Excellence Award winners and their best practices in protecting the environment, incorporating sustainability, and reducing carbon pollution, along with the benefits of being a partner.
MULTIPLE SCALES FOR SUSTAINABLE RESULTS
This session will highlight recent research that incorporates the use of multiple scales and innovative environmental accounting to better inform decisions that affect sustainability, resilience, and vulnerability at all scales. Effective decision-making involves assessment at mu...
Geospatial optimization of siting large-scale solar projects
Macknick, Jordan; Quinby, Ted; Caulfield, Emmet; Gerritsen, Margot; Diffendorfer, James E.; Haines, Seth S.
2014-01-01
guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software
Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed model based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model based GWAS for an arbitrary number of traits, and demonstrate that different computational algorithms are optimal for the single-trait and multiple-trait scenarios. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
Chen, A.; Yarmush, M.L.; Maguire, T.
2014-01-01
There is a large emphasis within the pharmaceutical industry to provide tools that will allow early research and development groups to better predict dose ranges for, and metabolic responses of, candidate molecules in a high throughput manner, prior to entering clinical trials. These tools incorporate approaches ranging from PBPK, QSAR, and molecular dynamics simulations in the in silico realm, to micro cell culture analogues (CCAs) in the in vitro realm. This paper reviews these areas of high throughput predictive research and highlights hurdles and potential solutions. In particular we focus on CCAs, as their incorporation with PBPK modeling has the potential to replace animal testing with a more predictive assay that can combine multiple organ analogues on one microfluidic platform in physiologically correct volume ratios. While several advantages arise from the current embodiments of CCAs in a microfluidic format that can be exploited for realistic simulations of drug absorption, metabolism and action, we explore some of the concerns with these systems, and provide a potential path forward to realizing animal-free solutions. Furthermore we envision that, together with theoretical modeling, CCAs may produce reliable predictions of the efficacy of newly developed drugs. PMID:22571482
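As a flavour of the PBPK side of such tools, here is a minimal two-compartment (blood plus well-stirred liver) model in Python; the flow, volume, partition, and clearance values are illustrative placeholders, not parameters for any real compound or any specific CCA platform.

```python
from scipy.integrate import solve_ivp

# Two-compartment PBPK-style sketch: systemic blood and a well-stirred liver.
Q = 90.0            # liver blood flow, L/h (assumed)
V_b, V_l = 5.0, 1.5 # compartment volumes, L (assumed)
Kp = 2.0            # liver:blood partition coefficient (assumed)
CL_int = 30.0       # intrinsic hepatic clearance, L/h (assumed)
dose = 100.0        # IV bolus, mg

def rhs(t, y):
    c_b, c_l = y
    dcb = Q * (c_l / Kp - c_b) / V_b               # venous return minus inflow
    dcl = (Q * (c_b - c_l / Kp) - CL_int * c_l / Kp) / V_l   # uptake minus metabolism
    return [dcb, dcl]

sol = solve_ivp(rhs, (0, 24), [dose / V_b, 0.0], dense_output=True)
print("blood conc. at 1 h and 12 h:", sol.sol(1.0)[0], sol.sol(12.0)[0])
```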
Gordon, Andrew S; Marshall, Adele H; Cairns, Karen J
2016-09-20
The number of elderly patients requiring hospitalisation in Europe is rising. With a greater proportion of elderly people in the population comes a greater demand for health services and, in particular, hospital care. Thus, a growing number of elderly patients requiring hospitalisation competes with non-elderly patients for a fixed (and in some cases decreasing) number of hospital beds, resulting in much longer waiting times for patients, often with a less satisfactory hospital experience. However, if a better understanding of the recurring nature of elderly patient movements between the community and hospital can be developed, then it may be possible for alternative provisions of care in the community to be put in place and thus prevent readmission to hospital. The research in this paper aims to model the multiple patient transitions between hospital and community by utilising a mixture of conditional Coxian phase-type distributions that incorporates Bayes' theorem. For the purpose of demonstration, the results of a simulation study are presented and the model is applied to hospital readmission data from the Lombardy region of Italy.
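A Coxian phase-type distribution moves through latent phases in sequence, with absorption possible from each phase, which makes it a flexible fit for skewed durations such as length of stay. Below is a small simulation sketch with invented rates (not the fitted values from the paper, and without the conditional-mixture and Bayesian machinery the authors add):

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_coxian(lam, mu, size):
    """Sample a Coxian phase-type distribution: from phase i, leave at rate
    lam[i] + mu[i]; absorb with probability mu[i] / (lam[i] + mu[i]), else
    move to phase i + 1. The last entry of lam is 0 (final phase only absorbs)."""
    times = np.empty(size)
    for k in range(size):
        t, i = 0.0, 0
        while True:
            total = lam[i] + mu[i]
            t += rng.exponential(1.0 / total)
            if rng.random() < mu[i] / total or i == len(lam) - 1:
                break
            i += 1
        times[k] = t
    return times

lam = [0.5, 0.0]          # onward-transition rates (illustrative)
mu = [0.2, 0.4]           # absorption rates per phase (illustrative)
los = sample_coxian(lam, mu, 10000)   # e.g. simulated lengths of stay
print("mean duration:", los.mean())
```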
Praveen, Paurush; Fröhlich, Holger
2013-01-01
Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework to gain insights into biological systems. However, the inherent noise in experimental data coupled with a limited sample size reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal to noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods to incorporate various information sources into a probabilistic consensus structure prior to be used in graphical model inference. Our first model, called Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for using diverse information sources, such as pathway databases, GO terms, and protein domain data, and is flexible enough to integrate new sources, if available.
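The Noisy-OR combination is the easier of the two to write down: each source k lends support s_k(i, j) to an edge, discounted by a trust parameter q_k, and the consensus prior is one minus the probability that every source "misses". A toy sketch with random stand-in support matrices (the trust values and sources are invented, not the paper's calibrated ones):

```python
import numpy as np

def noisy_or(supports, trust):
    """Noisy-OR fusion of K support matrices into one edge-probability prior.
    supports[k][i, j] in [0, 1] is source k's support for edge i -> j;
    trust[k] in [0, 1] is how much that source is believed."""
    miss = 1.0
    for s, q in zip(supports, trust):
        miss = miss * (1.0 - q * s)     # chance the edge escapes every source
    return 1.0 - miss

rng = np.random.default_rng(7)
supports = [rng.random((5, 5)) for _ in range(3)]   # e.g. pathways, GO, domains
trust = [0.9, 0.6, 0.4]
prior = noisy_or(supports, trust)
print(prior.round(2))
```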
Comparison of three GIS-based models for predicting rockfall runout zones at a regional scale
NASA Astrophysics Data System (ADS)
Dorren, Luuk K. A.; Seijmonsbergen, Arie C.
2003-11-01
Site-specific information about the level of protection that mountain forests provide is often not available for large regions. Information regarding rockfalls is especially scarce. The most efficient way to obtain information about rockfall activity and the efficacy of protection forests at a regional scale is to use a simulation model. At present, it is still unknown which forest parameters could be incorporated best in such models. Therefore, the purpose of this study was to test and evaluate a model for rockfall assessment at a regional scale in which simple forest stand parameters, such as the number of trees per hectare and the diameter at breast height, are incorporated. Therefore, a newly developed Geographical Information System (GIS)-based distributed model is compared with two existing rockfall models. The developed model is the only model that calculates the rockfall velocity on the basis of energy loss due to collisions with trees and on the soil surface. The two existing models calculate energy loss over the distance between two cell centres, while the newly developed model is able to calculate multiple bounces within a pixel. The patterns of rockfall runout zones produced by the three models are compared with patterns of rockfall deposits derived from geomorphological field maps. Furthermore, the rockfall velocities modelled by the three models are compared. It is found that the models produced rockfall runout zone maps with rather similar accuracies. However, the developed model performs best on forested hillslopes and it also produces velocities that match best with field estimates on both forested and nonforested hillslopes irrespective of the slope gradient.
Evaluation of Movement Restriction Zone Sizes in Controlling Classical Swine Fever Outbreaks
Yadav, Shankar; Olynk Widmar, Nicole; Lay, Donald C.; Croney, Candace; Weng, Hsin-Yi
2017-01-01
The objective of this study was to compare the impacts of movement restriction zone sizes of 3, 5, 9, and 11 km with that of 7 km (the recommended zone size in the United States) in controlling a classical swine fever (CSF) outbreak. In addition to zone size, different compliance assumptions and outbreak types (single site and multiple site) were incorporated in the study. Three assumptions of compliance level were simulated: baseline, baseline ± 10%, and baseline ± 15%. The compliance level was held constant across all zone sizes in the baseline simulation. In the baseline ± 10% and baseline ± 15% simulations, the compliance level was increased for 3 and 5 km and decreased for 9 and 11 km from the baseline by the indicated percentages. The compliance level remained constant in all simulations for the 7-km zone size. Four single-site (i.e., with one index premises at the onset of outbreak) and four multiple-site (i.e., with more than one index premises at the onset of outbreak) CSF outbreak scenarios in Indiana were simulated incorporating various zone sizes and compliance assumptions using a stochastic between-premises disease spread model to estimate epidemic duration, percentage of infected, and preemptively culled swine premises. Furthermore, a risk assessment model that incorporated the results from the disease spread model was developed to estimate the number of swine premises under movement restrictions that would experience animal welfare outcomes of overcrowding or feed interruption during a CSF outbreak in Indiana. Compared with the 7-km zone size, the 3-km zone size resulted in a longer median epidemic duration, larger percentages of infected premises, and preemptively culled premises (P’s < 0.001) across all compliance assumptions and outbreak types. With the assumption of a higher compliance level, the 5-km zone size significantly (P < 0.001) reduced the epidemic duration and percentage of swine premises that would experience animal welfare outcomes in both outbreak types, whereas assumption of a lower compliance level for 9- and 11-km zone sizes significantly (P < 0.001) increased the epidemic duration and percentage of swine premises with animal welfare outcomes compared with the 7-km zone size. The magnitude of impact due to a zone size varied across the outbreak types (single site and multiple site). Overall, the 7-km zone size was found to be most effective in controlling CSF outbreaks, whereas the 5-km zone size was comparable to the 7-km zone size in some circumstances. PMID:28119920
Modeling emission lag after photoexcitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jensen, Kevin L.; Petillo, John J.; Ovtchinnikov, Serguei
2017-10-28
A theoretical model of delayed emission following photoexcitation from metals and semiconductors is given. Its numerical implementation is designed for beam optics codes used to model photocathodes in rf photoinjectors. The model extends the Moments approach for predicting photocurrent and mean transverse energy as moments of an emitted electron distribution by incorporating time of flight and scattering events that result in emission delay on a sub-picosecond level. The model accounts for a dynamic surface extraction field and changes in the energy distribution and time of emission as a consequence of the laser penetration depth and multiple scattering events during transport. Usage in the Particle-in-Cell code MICHELLE to predict the bunch shape and duration with or without laser jitter is given. The consequences of delayed emission effects for ultra-short pulses are discussed.
Auger-electron cascades, charge potential and microdosimetry of iodine-125.
Booz, J; Paretzke, H G; Pomplun, E; Olko, P
1987-01-01
This paper is a contribution to the microdosimetry of I-125. It shows microdosimetric spectra of individual and average disintegrations of I-125 for various target sizes and gives evidence for the relative contributions of energy-deposition events of low and high LET. It further presents information on the relative efficiencies of Auger electrons and multiple charges in terms of local energy deposition, e.g. to model targets of DNA, and discusses their radiobiological implications, e.g. the microdosimetric understanding of the different efficiencies of specific and random incorporations of I-125. When I-125 is specifically incorporated into DNA, most of the energy deposition events are very large, e.g. above 40 keV/micron for a simulated target volume of 20 nm diameter, regardless of the number and energy of Auger electrons emitted. Therefore it is not necessary, for the discussion of the radiobiological implications, to distinguish between different classes of disintegrations. For unspecific, homogeneous incorporation of I-125 somewhere into tissue, about 20% of the dose to critical targets of 25 nm diameter is made up by disintegrations that happen to occur within these targets. When assuming that other critical targets and target structures can be neglected, this part of the dose will be as effective as in the case of specific incorporation of I-125 into such target models. In addition, there are the normal, low-LET radiation effects from the remaining 80% fraction of the dose. With this information, for the biological systems and end points for which a short section of the elementary chromatin fiber can be taken as the relevant critical target, it is shown that the expected D37 value for homogeneous unspecific incorporation of I-125 can be estimated when the D37 for specific incorporation in DNA is known. In an example calculation, nonspecific, homogeneous incorporation of I-125 is estimated to be about half as effective as specifically incorporated I-125. Thus, the microdosimetric data of the present work show that a high efficiency of homogeneous incorporation of I-125 into the cell nucleus is not necessarily in contradiction with the idea of I-125 disintegrations inside the DNA being the main cause of radiation action.
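The D37 estimate described in this abstract can be reproduced as a simple two-component survival calculation. Only the 20% within-target dose fraction comes from the text; the low-LET D37 below is a hypothetical value chosen to show how a roughly two-fold difference in effectiveness can arise:

```python
# Worked sketch of the estimate above, assuming single-exponential survival and
# that inactivation rates per unit dose add across the two dose components.
d37_specific = 1.0   # D37 for I-125 fixed in DNA (normalized dose units)
d37_low_let = 2.7    # assumed D37 for the diffuse low-LET dose component

rate = 0.20 / d37_specific + 0.80 / d37_low_let   # per-unit-dose inactivation rate
d37_homogeneous = 1.0 / rate
print(f"D37(homogeneous) = {d37_homogeneous:.2f} x D37(specific)")
# -> about 2, i.e. homogeneous incorporation about half as effective, as in the text
```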
Doble, Brett; Tan, Marcus; Harris, Anthony; Lorgelly, Paula
2015-02-01
The successful use of a targeted therapy is intrinsically linked to the ability of a companion diagnostic to correctly identify patients most likely to benefit from treatment. The aim of this study was to review the characteristics of companion diagnostics that are of importance for inclusion in an economic evaluation. Approaches for including these characteristics in model-based economic evaluations are compared with the intent to describe best practice methods. Five databases and government agency websites were searched to identify model-based economic evaluations comparing a companion diagnostic and subsequent treatment strategy to another alternative treatment strategy with model parameters for the sensitivity and specificity of the companion diagnostic (primary synthesis). Economic evaluations that limited model parameters for the companion diagnostic to only its cost were also identified (secondary synthesis). Quality was assessed using the Quality of Health Economic Studies instrument. 30 studies were included in the review (primary synthesis n = 12; secondary synthesis n = 18). Incremental cost-effectiveness ratios may be lower when the only parameter for the companion diagnostic included in a model is the cost of testing. Incorporating the test's accuracy in addition to its cost may be a more appropriate methodological approach. Altering the prevalence of the genetic biomarker, specific population tested, type of test, test accuracy and timing/sequence of multiple tests can all impact overall model results. The impact of altering a test's threshold for positivity is unknown as it was not addressed in any of the included studies. Additional quality criteria as outlined in our methodological checklist should be considered due to the shortcomings of standard quality assessment tools in differentiating studies that incorporate important test-related characteristics and those that do not. There is a need to refine methods for incorporating the characteristics of companion diagnostics into model-based economic evaluations to ensure consistent and transparent reimbursement decisions are made.
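A minimal sketch of the review's central methodological point: folding a companion diagnostic's sensitivity and specificity into a decision model, rather than its cost alone, changes the ICER. All prices, QALY values, and the prevalence below are invented, and the comparator is simply standard care for everyone:

```python
# Hypothetical decision model: test-guided therapy vs. standard care for all.
# use_accuracy=False reproduces the "cost-only" treatment of the diagnostic,
# which implicitly assumes a perfect test.
prev, sens, spec = 0.30, 0.95, 0.90
c_test, c_tx, c_std = 300.0, 30_000.0, 8_000.0      # costs per patient
q_tx_pos, q_tx_neg, q_std = 1.20, 0.05, 0.40        # QALY gains by true status

def icer(use_accuracy: bool) -> float:
    if use_accuracy:
        tp = prev * sens                  # biomarker-positive, correctly treated
        fp = (1 - prev) * (1 - spec)      # biomarker-negative, treated anyway
    else:
        tp, fp = prev, 0.0                # perfect-test assumption
    treated = tp + fp
    cost = c_test + treated * c_tx + (1 - treated) * c_std
    qaly = tp * q_tx_pos + fp * q_tx_neg + (1 - treated) * q_std
    return (cost - c_std) / (qaly - q_std)   # vs. standard care for everyone

print(f"ICER, test cost only:    {icer(False):>9,.0f} per QALY")
print(f"ICER, accuracy included: {icer(True):>9,.0f} per QALY")
```

With these toy numbers the cost-only model understates the ICER (roughly 29,000 vs 40,000 per QALY), consistent with the review's observation that ICERs may look more favourable when test accuracy is ignored.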
Re-Analysis of the Solar Phase Curves of the Icy Galilean Satellites
NASA Technical Reports Server (NTRS)
Domingue, Deborah; Verbiscer, Anne
1997-01-01
Re-analysis of the solar phase curves of the icy Galilean satellites demonstrates that the quantitative results are dependent on the single particle scattering function incorporated into the photometric model; however, the qualitative properties are independent. The results presented here show that the general physical characteristics predicted by a Hapke model (B. Hapke, 1986, Icarus 67, 264-280) incorporating a two parameter double Henyey-Greenstein scattering function are similar to the predictions given by the same model incorporating a three parameter double Henyey-Greenstein scattering function as long as the data set being modeled has adequate coverage in phase angle. Conflicting results occur when the large phase angle coverage is inadequate. Analysis of the role of isotropic versus anisotropic multiple scattering shows that for surfaces as bright as Europa the two models predict very similar results over phase angles covered by the data. Differences arise only at those phase angles for which there are no data. The single particle scattering behavior between the leading and trailing hemispheres of Europa and Ganymede is commensurate with magnetospheric alterations of their surfaces. Ion bombardment will produce more forward scattering single scattering functions due to annealing of potential scattering centers within regolith particles (N. J. Sack et al., 1992, Icarus 100, 534-540). Both leading and trailing hemispheres of Europa are consistent with a high porosity model and commensurate with a frost surface. There are no strong differences in predicted porosity between the two hemispheres of Callisto, both are consistent with model porosities midway between that deduced for Europa and the Moon. Surface roughness model estimates predict that surface roughness increases with satellite distance from Jupiter, with lunar surface roughness values falling midway between those measured for Ganymede and Callisto. There is no obvious variation in predicted surface roughness with hemisphere for any of the Galilean satellites.
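For reference, a two-parameter double Henyey-Greenstein single-particle phase function of the kind fitted in such Hapke models can be written down in a few lines. Sign and lobe-weighting conventions vary between papers, so treat this as one common form rather than the exact function used in the study:

```python
# Two-parameter double Henyey-Greenstein phase function in phase angle g:
# a backscattering and a forward-scattering lobe of shared width b, mixed by c.
import numpy as np

def double_hg(g, b, c):
    """g: phase angle (radians); 0 < b < 1 lobe width; -1 < c < 1 lobe mixing."""
    cosg = np.cos(g)
    back = (1 - b**2) / (1 - 2 * b * cosg + b**2) ** 1.5  # peaks at opposition, g = 0
    fwd = (1 - b**2) / (1 + 2 * b * cosg + b**2) ** 1.5   # peaks at g = 180 degrees
    return 0.5 * (1 + c) * back + 0.5 * (1 - c) * fwd

g = np.radians([0, 30, 60, 90, 120, 150, 180])
print(np.round(double_hg(g, b=0.3, c=0.6), 3))
```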
Ettekal, Idean; Ladd, Gary W.
2015-01-01
Childhood aggression-disruptiveness, chronic peer rejection, and deviant friendships were examined as predictors of early-adolescent rule breaking behaviors. Using a sample of 383 children (193 girls and 190 boys) who were followed from ages 6 to 14, peer rejection trajectories were identified and incorporated into a series of alternative models to assess how chronic peer rejection and deviant friendships mediate the association between stable childhood aggression-disruptiveness and early-adolescent rule breaking. There were multiple mediated pathways to rule breaking that included both behavioral and relational risk factors and findings were consistent for boys and girls. Results have implications for better understanding the influence of multiple social processes in the continuity of antisocial behaviors from middle childhood to early adolescence. PMID:25403544
Treatment Planning and Image Guidance for Radiofrequency Ablations of Large Tumors
Ren, Hongliang; Campos-Nanez, Enrique; Yaniv, Ziv; Banovac, Filip; Abeledo, Hernan; Hata, Nobuhiko; Cleary, Kevin
2014-01-01
This article addresses the two key challenges in computer-assisted percutaneous tumor ablation: planning multiple overlapping ablations for large tumors while avoiding critical structures, and executing the prescribed plan. Towards semi-automatic treatment planning for image-guided surgical interventions, we develop a systematic approach to the needle-based ablation placement task, ranging from pre-operative planning algorithms to an intra-operative execution platform. The planning system incorporates clinical constraints on ablations and trajectories using a multiple objective optimization formulation, which consists of optimal path selection and ablation coverage optimization based on integer programming. The system implementation is presented and validated in phantom studies and on an animal model. The presented system can potentially be further extended for other ablation techniques such as cryotherapy. PMID:24235279
Incorporating pest management into the design of multiple goal-oriented cropping systems
USDA-ARS?s Scientific Manuscript database
Suggestions are offered to facilitate efforts to incorporate pest management goals into the design of crop production systems. The scope of research programs should be expanded to ensure broad multidisciplinary cooperation. Inclusion of farmers, production specialists and researchers from discipli...
Murrihy, Rachael C; Byrne, Mitchell K; Gonsalvez, Craig J
2009-02-01
Internationally, family doctors seeking to enhance their skills in evidence-based mental health treatment are attending brief training workshops, despite clear evidence in the literature that short-term, massed formats are not likely to improve skills in this complex area. Reviews of the educational literature suggest that an optimal model of training would incorporate distributed practice techniques; repeated practice over a lengthy time period, small-group interactive learning, mentoring relationships, skills-based training and an ongoing discussion of actual patients. This study investigates the potential role of group-based training incorporating multiple aspects of good pedagogy for training doctors in basic competencies in brief cognitive behaviour therapy (BCBT). Six groups of family doctors (n = 32) completed eight 2-hour sessions of BCBT group training over a 6-month period. A baseline control design was utilised with pre- and post-training measures of doctors' BCBT skills, knowledge and engagement in BCBT treatment. Family doctors' knowledge, skills in and actual use of BCBT with patients improved significantly over the course of training compared with the control period. This research demonstrates preliminary support for the efficacy of an empirically derived group training model for family doctors. Brief CBT group-based training could prove to be an effective and viable model for future doctor training.
Childs, Lauren M; Held, Nicole L; Young, Mark J; Whitaker, Rachel J; Weitz, Joshua S
2012-01-01
The CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) system is a recently discovered type of adaptive immune defense in bacteria and archaea that functions via directed incorporation of viral and plasmid DNA into host genomes. Here, we introduce a multiscale model of dynamic coevolution between hosts and viruses in an ecological context that incorporates CRISPR immunity principles. We analyze the model to test whether and how CRISPR immunity induces host and viral diversification and the maintenance of many coexisting strains. We show that hosts and viruses coevolve to form highly diverse communities. We observe the punctuated replacement of existent strains, such that populations have very low similarity compared over the long term. However, in the short term, we observe evolutionary dynamics consistent with both incomplete selective sweeps of novel strains (as single strains and coalitions) and the recurrence of previously rare strains. Coalitions of multiple dominant host strains are predicted to arise because host strains can have nearly identical immune phenotypes mediated by CRISPR defense albeit with different genotypes. We close by discussing how our explicit eco-evolutionary model of CRISPR immunity can help guide efforts to understand the drivers of diversity seen in microbial communities where CRISPR systems are active. PMID:22759281
Zhang, Guosheng; Huang, Kuan-Chieh; Xu, Zheng; Tzeng, Jung-Ying; Conneely, Karen N; Guan, Weihua; Kang, Jian; Li, Yun
2016-05-01
DNA methylation is a key epigenetic mark involved in both normal development and disease progression. Recent advances in high-throughput technologies have enabled genome-wide profiling of DNA methylation. However, DNA methylation profiling often employs different designs and platforms with varying resolution, which hinders joint analysis of methylation data from multiple platforms. In this study, we propose a penalized functional regression model to impute missing methylation data. By incorporating functional predictors, our model utilizes information from nonlocal probes to improve imputation quality. Here, we compared the performance of our functional model to linear regression and the best single probe surrogate in real data and via simulations. Specifically, we applied different imputation approaches to an acute myeloid leukemia dataset consisting of 194 samples and our method showed higher imputation accuracy, manifested, for example, by a 94% relative increase in information content and up to 86% more CpG sites passing post-imputation filtering. Our simulated association study further demonstrated that our method substantially improves the statistical power to identify trait-associated methylation loci. These findings indicate that the penalized functional regression model is a convenient and valuable imputation tool for methylation data, and it can boost statistical power in downstream epigenome-wide association study (EWAS). © 2016 WILEY PERIODICALS, INC.
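The flavour of the imputation idea can be conveyed with a much simpler stand-in: predicting a missing target CpG from flanking probes with ridge-penalized least squares. The data below are synthetic, and the exponential-decay coefficient pattern (closer probes more informative) is an assumption for illustration; the actual method smooths probe effects as functions of genomic distance:

```python
# Ridge-regression stand-in for the paper's penalized functional regression.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neighbors = 150, 40
X = rng.normal(size=(n_samples, n_neighbors))        # neighboring probe values
beta_true = np.exp(-np.arange(n_neighbors) / 8.0)    # nearby probes matter more
y = X @ beta_true + rng.normal(scale=0.5, size=n_samples)

train, test = slice(0, 100), slice(100, None)
lam = 5.0                                            # ridge penalty weight
A = X[train].T @ X[train] + lam * np.eye(n_neighbors)
beta_hat = np.linalg.solve(A, X[train].T @ y[train]) # penalized normal equations

r = np.corrcoef(X[test] @ beta_hat, y[test])[0, 1]
print(f"held-out imputation accuracy (Pearson r): {r:.2f}")
```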
Improved Predictions of Drug-Drug Interactions Mediated by Time-Dependent Inhibition of CYP3A.
Yadav, Jaydeep; Korzekwa, Ken; Nagar, Swati
2018-05-07
Time-dependent inactivation (TDI) of cytochrome P450s (CYPs) is a leading cause of clinical drug-drug interactions (DDIs). Current methods tend to overpredict DDIs. In this study, a numerical approach was used to model complex CYP3A TDI in human liver microsomes. The inhibitors evaluated included troleandomycin (TAO), erythromycin (ERY), verapamil (VER), and diltiazem (DTZ) along with the primary metabolites N-demethyl erythromycin (NDE), norverapamil (NV), and N-desmethyl diltiazem (NDD). The complexities incorporated into the models included multiple-binding kinetics, quasi-irreversible inactivation, sequential metabolism, inhibitor depletion, and membrane partitioning. The resulting inactivation parameters were incorporated into static in vitro-in vivo correlation (IVIVC) models to predict clinical DDIs. For 77 clinically observed DDIs, with a hepatic CYP3A synthesis rate constant of 0.000146 min⁻¹, the average fold difference between the observed and predicted DDIs was 3.17 for the standard replot method and 1.45 for the numerical method. Similar results were obtained using a synthesis rate constant of 0.00032 min⁻¹. These results suggest that numerical methods can successfully model complex in vitro TDI kinetics and that the resulting DDI predictions are more accurate than those obtained with the standard replot approach.
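A hedged sketch of the standard static mechanistic model into which TDI parameters of this kind are typically fed. The kdeg value is the synthesis rate constant quoted above; kinact, KI, the unbound inhibitor concentration, and fm,CYP3A are hypothetical example values:

```python
# Static mechanistic model for a TDI-driven DDI (hepatic interaction only).
def auc_ratio(kinact, KI, I_u, fm_cyp3a, kdeg=0.000146):
    """Fold-change in victim-drug AUC; rate constants in 1/min, KI and I_u in
    matching concentration units (I_u = unbound inhibitor concentration)."""
    kobs = kinact * I_u / (KI + I_u)      # inactivation rate at concentration I_u
    frac_active = kdeg / (kdeg + kobs)    # steady-state fraction of active enzyme
    return 1.0 / (fm_cyp3a * frac_active + (1.0 - fm_cyp3a))

# a potent inactivator against a victim drug 90% cleared by CYP3A:
print(f"predicted AUC ratio: {auc_ratio(kinact=0.05, KI=1.0, I_u=0.5, fm_cyp3a=0.9):.1f}")
```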
NASA Astrophysics Data System (ADS)
Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.
2016-12-01
The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where groundwater and surface water mix and interact with each other with distinct biogeochemical and thermal properties. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of local sediments and microbial and chemical processes play important roles in biogeochemical dynamics. Thus, a comprehensive understanding of the biogeochemical processes in the hyporheic zone requires a coupled thermo-hydro-biogeochemical model. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules/parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parametric sensitivity in the assessment of climate change. To achieve this purpose, we 1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, 2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, 3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and 4) apply a variance-based global sensitivity analysis to quantify scenario/module/parameter uncertainty at each level of the hierarchy. The objectives of the research are to 1) identify the key control factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and 2) quantify the carbon consumption under different climate change scenarios in the hyporheic zone.
Wang, Yi-Min; Zhou, Dong-Mei; Yuan, Xu-Yin; Zhang, Xiao-Hui; Li, Yi
2018-05-01
Responses of wheat (Triticum aestivum L.) seedling roots to mixtures of copper (Cu), cadmium (Cd) and humic acids (HA) were investigated using solution culture experiments, focusing on the interaction patterns between multiple metals and their influences on root proton release. A concentration-addition multiplication (CA) model was introduced into the modeling analysis. In comparison with metal ion activities in bulk-phase solutions, incorporating ion activities at the root cell membrane surfaces (CMs) (denoted {Cu²⁺}₀ and {Cd²⁺}₀) into the CA model significantly improved their correlation with RRE (relative root elongation) from 0.819 to 0.927. Modeling analysis indicated that the co-existence of {Cu²⁺}₀ significantly enhanced the rhizotoxicity of {Cd²⁺}₀, whereas {Cd²⁺}₀ had no significant effect on {Cu²⁺}₀ rhizotoxicity. 10 mg/L HA stimulated root elongation even under metal stress. Although high concentrations of metal ions inhibited the root proton release rate (ΔH⁺), both low concentrations of metal ions and the HA treatments increased ΔH⁺ values. In HA-Cu-Cd mixtures, the actions of metal ions on ΔH⁺ values varied intricately among treatments but were well modeled by the CA model. We conclude from the CA models that the electrostatic effect is vitally important for explaining the effect of {Cu²⁺}₀ on the rhizotoxicity of {Cd²⁺}₀, while it plays no unique role in understanding the influence of {Cd²⁺}₀ on the rhizotoxicity of {Cu²⁺}₀. Thus, our study provides a novel way of modeling the behavior of multiple metals in the environment and understanding the mechanisms of ion interactions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Efficacy in Teaching through "Multiple Intelligence" Instructional Strategies
ERIC Educational Resources Information Center
Tamilselvi, B.; Geetha, D.
2015-01-01
Multiple intelligence, the theory that "people are smart in more ways than one," has immense implications for educators. Howard Gardner proposed a new view of intelligence that is rapidly being incorporated in school curricula. In his theory of Multiple Intelligences, Gardner expanded the concept of intelligence with such areas as music,…
Symptomatic therapy in multiple sclerosis
Frohman, Teresa C.; Castro, Wanda; Shah, Anjali; Courtney, Ardith; Ortstadt, Jeffrey; Davis, Scott L.; Logan, Diana; Abraham, Thomas; Abraham, Jaspreet; Remington, Gina; Treadaway, Katherine; Graves, Donna; Hart, John; Stuve, Olaf; Lemack, Gary; Greenberg, Benjamin; Frohman, Elliot M.
2011-01-01
Multiple sclerosis is the most common disabling neurological disease of young adults. The ability to impact the quality of life of patients with multiple sclerosis should not only incorporate therapies that are disease modifying, but should also include a course of action for the global multidisciplinary management focused on quality of life and functional capabilities. PMID:21694806
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2017-07-10
We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.
NCI collaborates with Multiple Myeloma Research Foundation
The National Cancer Institute (NCI) announced a collaboration with the Multiple Myeloma Research Foundation (MMRF) to incorporate MMRF's wealth of genomic and clinical data on the disease into the NCI Genomic Data Commons (GDC), a publicly available database.
Combining multiple sources of data to inform conservation of Lesser Prairie-Chicken populations
Ross, Beth; Haukos, David A.; Hagen, Christian A.; Pitman, James
2018-01-01
Conservation of small populations is often based on limited data from spatially and temporally restricted studies, resulting in management actions based on an incomplete assessment of the population drivers. If fluctuations in abundance are related to changes in weather, proper management is especially important, because extreme weather events could disproportionately affect population abundance. Conservation assessments, especially for vulnerable populations, are aided by a knowledge of how extreme events influence population status and trends. Although important for conservation efforts, data may be limited for small or vulnerable populations. Integrated population models maximize information from various sources of data to yield population estimates that fully incorporate uncertainty from multiple data sources while allowing for the explicit incorporation of environmental covariates of interest. Our goal was to assess the relative influence of population drivers for the Lesser Prairie-Chicken (Tympanuchus pallidicinctus) in the core of its range, western and southern Kansas, USA. We used data from roadside lek count surveys, nest monitoring surveys, and survival data from telemetry monitoring combined with climate (Palmer drought severity index) data in an integrated population model. Our results indicate that variability in population growth rate was most influenced by variability in juvenile survival. The Palmer drought severity index had no measurable direct effects on adult survival or mean number of offspring per female; however, there were declines in population growth rate following severe drought. Because declines in population growth rate occurred at a broad spatial scale, declines in response to drought were likely due to decreases in chick and juvenile survival rather than emigration outside of the study area. Overall, our model highlights the importance of accounting for environmental and demographic sources of variability, and provides a thorough method for simultaneously evaluating population demography in response to long-term climate effects.
Symstad, Amy J.; Fisichelli, Nicholas A.; Miller, Brian W.; Rowland, Erika; Schuurman, Gregor W.
2017-01-01
Scenario planning helps managers incorporate climate change into their natural resource decision making through a structured “what-if” process of identifying key uncertainties and potential impacts and responses. Although qualitative scenarios, in which ecosystem responses to climate change are derived via expert opinion, often suffice for managers to begin addressing climate change in their planning, this approach may face limits in resolving the responses of complex systems to altered climate conditions. In addition, this approach may fall short of the scientific credibility managers often require to take actions that differ from current practice. Quantitative simulation modeling of ecosystem response to climate conditions and management actions can provide this credibility, but its utility is limited unless the modeling addresses the most impactful and management-relevant uncertainties and incorporates realistic management actions. We use a case study to compare and contrast management implications derived from qualitative scenario narratives and from scenarios supported by quantitative simulations. We then describe an analytical framework that refines the case study’s integrated approach in order to improve applicability of results to management decisions. The case study illustrates the value of an integrated approach for identifying counterintuitive system dynamics, refining understanding of complex relationships, clarifying the magnitude and timing of changes, identifying and checking the validity of assumptions about resource responses to climate, and refining management directions. Our proposed analytical framework retains qualitative scenario planning as a core element because its participatory approach builds understanding for both managers and scientists, lays the groundwork to focus quantitative simulations on key system dynamics, and clarifies the challenges that subsequent decision making must address.
Zhang, J L; Li, Y P; Huang, G H; Baetz, B W; Liu, J
2017-06-01
In this study, a Bayesian estimation-based simulation-optimization modeling approach (BESMA) is developed for identifying effluent trading strategies. BESMA incorporates nutrient fate modeling with soil and water assessment tool (SWAT), Bayesian estimation, and probabilistic-possibilistic interval programming with fuzzy random coefficients (PPI-FRC) within a general framework. Based on the water quality protocols provided by SWAT, posterior distributions of parameters can be analyzed through Bayesian estimation; stochastic characteristic of nutrient loading can be investigated which provides the inputs for the decision making. PPI-FRC can address multiple uncertainties in the form of intervals with fuzzy random boundaries and the associated system risk through incorporating the concept of possibility and necessity measures. The possibility and necessity measures are suitable for optimistic and pessimistic decision making, respectively. BESMA is applied to a real case of effluent trading planning in the Xiangxihe watershed, China. A number of decision alternatives can be obtained under different trading ratios and treatment rates. The results can not only facilitate identification of optimal effluent-trading schemes, but also gain insight into the effects of trading ratio and treatment rate on decision making. The results also reveal that decision maker's preference towards risk would affect decision alternatives on trading scheme as well as system benefit. Compared with the conventional optimization methods, it is proved that BESMA is advantageous in (i) dealing with multiple uncertainties associated with randomness and fuzziness in effluent-trading planning within a multi-source, multi-reach and multi-period context; (ii) reflecting uncertainties existing in nutrient transport behaviors to improve the accuracy in water quality prediction; and (iii) supporting pessimistic and optimistic decision making for effluent trading as well as promoting diversity of decision alternatives. Copyright © 2017 Elsevier Ltd. All rights reserved.
Moran, Kelly R.; Fairchild, Geoffrey; Generous, Nicholas; Hickmann, Kyle; Osthus, Dave; Priedhorsky, Reid; Hyman, James; Del Valle, Sara Y.
2016-01-01
Mathematical models, such as those that forecast the spread of epidemics or predict the weather, must overcome the challenges of integrating incomplete and inaccurate data in computer simulations, estimating the probability of multiple possible scenarios, incorporating changes in human behavior and/or the pathogen, and environmental factors. In the past 3 decades, the weather forecasting community has made significant advances in data collection, assimilating heterogeneous data streams into models and communicating the uncertainty of their predictions to the general public. Epidemic modelers are struggling with these same issues in forecasting the spread of emerging diseases, such as Zika virus infection and Ebola virus disease. While weather models rely on physical systems, data from satellites, and weather stations, epidemic models rely on human interactions, multiple data sources such as clinical surveillance and Internet data, and environmental or biological factors that can change the pathogen dynamics. We describe some of the similarities and differences between these 2 fields and how the epidemic modeling community is rising to the challenges posed by forecasting to help anticipate and guide the mitigation of epidemics. We conclude that some of the fundamental differences between these 2 fields, such as human behavior, make disease forecasting more challenging than weather forecasting. PMID:28830111
Arregui, Sergio; Marinova, Dessislava; Sanz, Joaquín
2018-01-01
In the case of tuberculosis (TB), the capabilities of epidemic models to produce quantitatively robust forecasts are limited by multiple hindrances. Among these, understanding the complex relationship between disease epidemiology and populations’ age structure has been highlighted as one of the most relevant. TB dynamics depends on age in multiple ways, some of which are traditionally simplified in the literature. That is the case of the heterogeneities in contact intensity among different age strata that are common to all airborne diseases, but still typically neglected in the TB case. Furthermore, while demographic structures of many countries are rapidly aging, demographic dynamics are pervasively ignored when modeling TB spreading. In this work, we present a TB transmission model that incorporates country-specific demographic prospects and empirical contact data around a data-driven description of TB dynamics. Using our model, we find that the inclusion of demographic dynamics is followed by an increase in the burden levels predicted for the next decades in the areas of the world that are most hit by the disease today. Similarly, we show that considering realistic patterns of contacts among individuals in different age strata reshapes the transmission patterns reproduced by the models, a result with potential implications for the design of age-focused epidemiological interventions. PMID:29563223
Deep Visual Attention Prediction
NASA Astrophysics Data System (ADS)
Wang, Wenguan; Shen, Jianbing
2018-05-01
In this work, we aim to predict human eye fixations in free-viewing scenes with an end-to-end deep learning architecture. Although Convolutional Neural Networks (CNNs) have brought substantial improvements to human attention prediction, CNN-based attention models can still be improved by efficiently leveraging multi-scale features. Our visual attention network is designed to capture hierarchical saliency information, from deep, coarse layers with global saliency information to shallow, fine layers with local saliency response. Our model is based on a skip-layer network structure, which predicts human attention from multiple convolutional layers with various receptive fields. The final saliency prediction is achieved via the cooperation of these global and local predictions. Our model is learned in a deep supervision manner, where supervision is fed directly into multi-level layers, instead of the previous approach of providing supervision only at the output layer and propagating it back to earlier layers. Our model thus incorporates multi-level saliency predictions within a single network, which significantly decreases the redundancy of previous approaches that learn multiple network streams with different input scales. Extensive experimental analysis on various challenging benchmark datasets demonstrates that our method yields state-of-the-art performance with competitive inference time.
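The two architectural ideas here (skip-layer prediction heads and deep supervision) are easy to show in miniature. The following PyTorch sketch is an illustrative toy, not the authors' network; the layer sizes, three-level depth, and uniform fusion are all arbitrary choices:

```python
# Toy skip-layer saliency net with deep supervision: one prediction head per
# convolutional depth, each upsampled to input size and supervised directly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipSaliency(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            for cin, cout in ((3, 16), (16, 32), (32, 64))])
        # 1x1 prediction heads: shallow = local, deep = global saliency
        self.heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])

    def forward(self, x):
        size, feats, side = x.shape[-2:], x, []
        for block, head in zip(self.blocks, self.heads):
            feats = block(feats)
            side.append(F.interpolate(head(feats), size=size,
                                      mode="bilinear", align_corners=False))
        fused = torch.sigmoid(sum(side) / len(side))   # fuse multi-level predictions
        return fused, [torch.sigmoid(m) for m in side]

model = SkipSaliency()
fused, side = model(torch.rand(1, 3, 96, 96))
target = torch.rand(1, 1, 96, 96)                      # dummy fixation density map
# deep supervision: a loss on the fused map AND on every side output
loss = sum(F.binary_cross_entropy(m, target) for m in [fused] + side)
print(fused.shape, f"loss: {loss.item():.3f}")
```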
Physiologically relevant organs on chips
Yum, Kyungsuk; Hong, Soon Gweon; Lee, Luke P.
2015-01-01
Recent advances in integrating microengineering and tissue engineering have generated promising microengineered physiological models for experimental medicine and pharmaceutical research. Here we review the recent development of microengineered physiological systems, or organs on chips, that reconstitute the physiologically critical features of specific human tissues and organs and their interactions. This technology uses microengineering approaches to construct organ-specific microenvironments, reconstituting tissue structures, tissue–tissue interactions and interfaces, and dynamic mechanical and biochemical stimuli found in specific organs, to direct cells to assemble into functional tissues. We first discuss microengineering approaches to reproduce the key elements of physiologically important, dynamic mechanical microenvironments, biochemical microenvironments, and microarchitectures of specific tissues and organs in microfluidic cell culture systems. This is followed by examples of microengineered individual organ models that incorporate the key elements of physiological microenvironments into single microfluidic cell culture systems to reproduce organ-level functions. Finally, microengineered multiple organ systems that simulate multiple organ interactions to better represent human physiology, including human responses to drugs, are covered in this review. This emerging organs-on-chips technology has the potential to become an alternative to 2D and 3D cell culture and animal models for experimental medicine, human disease modeling, drug development, and toxicology. PMID:24357624
Case analysis online: a strategic management case model for the health industry.
Walsh, Anne; Bearden, Eithne
2004-01-01
Despite the plethora of methods and tools available to support strategic management, the challenge for health executives in the next century will relate to their ability to access and interpret data from multiple and intricate communication networks. Integrated digital networks and satellite systems will expand the scope and ease of sharing information between business divisions, and networked systems will facilitate the use of virtual case discussions across universities. While the internet is frequently used to support clinical decisions in the healthcare industry, few executives rely upon the internet for strategic analysis. Although electronic technologies can easily synthesize data from multiple information channels, research as well as technical issues may deter their application in strategic analysis. As digital models transform access to information, online models may become increasingly relevant in designing strategic solutions. While there are various pedagogical models available to support the strategic management process, this framework was designed to enhance strategic analysis through the application of technology and electronic research. A strategic analysis framework, which incorporated internet research and case analysis in a strategic management course, is described along with design and application issues that emerged during the case analysis process.
Space-time modeling of soil moisture
NASA Astrophysics Data System (ADS)
Chen, Zijuan; Mohanty, Binayak P.; Rodriguez-Iturbe, Ignacio
2017-11-01
A physically derived space-time mathematical representation of the soil moisture field is carried out via the soil moisture balance equation driven by stochastic rainfall forcing. The model incorporates spatial diffusion and, in its original version, is shown to be unable to reproduce the relatively fast decay in the spatial correlation functions observed in empirical data. This decay, resulting from variations in local topography as well as in local soil and vegetation conditions, is well reproduced via a jitter process acting multiplicatively over the space-time soil moisture field. The jitter is a multiplicative noise acting on the soil moisture dynamics with the objective of deflating its correlation structure at small spatial scales, which are not embedded in the probabilistic structure of the rainfall process that drives the dynamics. These scales, of order of several meters to several hundred meters, are of great importance in ecohydrologic dynamics. Properties of space-time correlation functions and spectral densities of the model with jitter are explored analytically, and the influence of the jitter parameters, reflecting variabilities of soil moisture at different spatial and temporal scales, is investigated. A case study fitting the derived model to a soil moisture dataset is presented in detail.
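A 1-D numpy illustration of the jitter mechanism may help: multiplying a smooth field by weakly correlated multiplicative noise collapses the correlation at small lags while leaving large-scale structure mostly intact. The construction below (FFT Gaussian smoothing, the 40- and 2-cell correlation scales, the 0.3 log-noise amplitude) is hypothetical, not the paper's formulation:

```python
# Multiplicative "jitter" deflating small-scale spatial correlation (toy, 1-D).
import numpy as np

rng = np.random.default_rng(1)
n = 512

def smooth(field, scale):
    """Gaussian low-pass filter via FFT (periodic boundary), then standardize."""
    k = np.fft.fftfreq(n)
    out = np.real(np.fft.ifft(np.fft.fft(field)
                              * np.exp(-0.5 * (2 * np.pi * k * scale) ** 2)))
    return (out - out.mean()) / out.std()

base = 0.3 + 0.05 * smooth(rng.normal(size=n), 40.0)    # rainfall-driven, long-range
jitter = np.exp(0.3 * smooth(rng.normal(size=n), 2.0))  # local soil/vegetation noise
moist = base * jitter

def corr(f, lag):
    a = (f - f.mean()) / f.std()
    return float(np.mean(a * np.roll(a, lag)))

for lag in (2, 8, 64):
    print(f"lag {lag:3d}: base r={corr(base, lag):+.2f}, "
          f"jittered r={corr(moist, lag):+.2f}")
```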
Towards improved capability and confidence in coupled atmospheric and wildland fire modeling
NASA Astrophysics Data System (ADS)
Sauer, Jeremy A.
This dissertation work is aimed at improving the capability of, and confidence in, a modernized and improved version of Los Alamos National Laboratory's coupled atmospheric and wildland fire dynamics model, Higrad-Firetec. Higrad is the hydrodynamics component of this large eddy simulation model that solves the three-dimensional, fully compressible Navier-Stokes equations, incorporating a dynamic eddy viscosity formulation through a two-scale turbulence closure scheme. Firetec is the vegetation, drag forcing, and combustion physics portion that is integrated with Higrad. The modern version of Higrad-Firetec incorporates multiple numerical methodologies and high performance computing aspects which combine to yield a unique tool capable of augmenting theoretical and observational investigations in order to better understand the multi-scale, multi-phase, and multi-physics phenomena involved in coupled atmospheric and environmental dynamics. More specifically, the current work includes extended functionality and validation efforts targeting component processes in coupled atmospheric and wildland fire scenarios. Since observational data of sufficient quality and resolution to validate the fully coupled atmosphere-wildfire scenario simply do not exist, we instead seek to validate components of this prohibitively convoluted process. This manuscript provides, first, an introduction and background to the application space of Higrad-Firetec. Second, we document the model formulation, solution procedure, and a simple scalar transport verification exercise. Third, we validate model results against observational data for time-averaged flow field metrics in and above four idealized forest canopies. Fourth, we carry out a validation effort for the non-buoyant jet in a crossflow scenario (to which an analogy can be made for atmosphere-wildfire interactions), comparing model results to laboratory data of both steady-in-time and unsteady-in-time metrics. Finally, an extension of the model's multi-phase physics is implemented, allowing for the representation of multiple collocated fuels as separately evolving constituents, leading to differences in the resulting rate of spread and total burned area. In combination, these efforts demonstrate improved capability, increased validation of component functionality, and the unique applicability of the Higrad-Firetec modeling framework. As a result, this work provides a substantially more robust foundation for future, more widely accepted investigations into the complexities of coupled atmospheric and wildland fire behavior.
Walter, Jonathan P; Pandy, Marcus G
2017-10-01
The aim of this study was to perform multi-body, muscle-driven, forward-dynamics simulations of human gait using a 6-degree-of-freedom (6-DOF) model of the knee in tandem with a surrogate model of articular contact and force control. A forward-dynamics simulation incorporating position, velocity and contact force-feedback control (FFC) was used to track full-body motion capture data recorded for multiple trials of level walking and stair descent performed by two individuals with instrumented knee implants. Tibiofemoral contact force errors for FFC were compared against those obtained from a standard computed muscle control algorithm (CMC) with a 6-DOF knee contact model (CMC6); CMC with a 1-DOF translating hinge-knee model (CMC1); and static optimization with a 1-DOF translating hinge-knee model (SO). Tibiofemoral joint loads predicted by FFC and CMC6 were comparable for level walking, however FFC produced more accurate results for stair descent. SO yielded reasonable predictions of joint contact loading for level walking but significant differences between model and experiment were observed for stair descent. CMC1 produced the least accurate predictions of tibiofemoral contact loads for both tasks. Our findings suggest that reliable estimates of knee-joint loading may be obtained by incorporating position, velocity and force-feedback control with a multi-DOF model of joint contact in a forward-dynamics simulation of gait. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
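The force-feedback control (FFC) idea, tracking reference kinematics while also servoing on measured contact force, can be sketched on a one-degree-of-freedom toy plant. The gains, plant constants, and reference trajectory below are hypothetical and bear no relation to the muscle-driven knee model:

```python
# FFC sketch: a unit mass pressed into a stiff contact spring must track a slowly
# varying penetration (and hence contact force) reference.
import numpy as np

dt, m, k_contact = 1e-3, 1.0, 5e3           # s, kg, N/m
kp, kd, kf = 2e4, 4e2, 5.0                  # position, velocity, force gains
x, v = 0.0, 0.0
t = np.arange(0.0, 2.0, dt)
x_ref = 0.02 * (1 - np.cos(2 * np.pi * t))  # desired penetration depth (m)
f_ref = k_contact * x_ref                   # consistent desired contact force (N)

errs = []
for i in range(t.size):
    f_contact = k_contact * max(x, 0.0)     # unilateral contact
    u = (kp * (x_ref[i] - x) - kd * v       # position/velocity feedback
         + kf * (f_ref[i] - f_contact))     # contact-force feedback term
    v += (u - f_contact) / m * dt           # semi-implicit Euler integration
    x += v * dt
    errs.append(abs(f_contact - f_ref[i]))
print(f"mean |contact-force tracking error|: {np.mean(errs):.1f} N")
```

A residual tracking error remains because this proportional scheme has no integral action; the point is only how the force-feedback term augments ordinary position/velocity tracking.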
NASA Astrophysics Data System (ADS)
Ou, G.; Nijssen, B.; Nearing, G. S.; Newman, A. J.; Mizukami, N.; Clark, M. P.
2016-12-01
The Structure for Unifying Multiple Modeling Alternatives (SUMMA) provides a unifying framework for process-based hydrologic modeling by defining a general set of conservation equations for mass and energy, with the capability to incorporate multiple choices for spatial discretizations and flux parameterizations. In this study, we provide a first demonstration of large-scale hydrologic simulations using SUMMA through an application to the Columbia River Basin (CRB) in the northwestern United States and Canada for a multi-decadal simulation period. The CRB is discretized into 11,723 hydrologic response units (HRUs) according to the United States Geological Survey Geospatial Fabric. The soil parameters are derived from the Natural Resources Conservation Service Soil Survey Geographic (SSURGO) Database. The land cover parameters are based on the National Land Cover Database from the year 2001 created by the Multi-Resolution Land Characteristics (MRLC) Consortium. The forcing data, including hourly air pressure, temperature, specific humidity, wind speed, precipitation, and shortwave and longwave radiation, are based on Phase 2 of the North American Land Data Assimilation System (NLDAS-2) and averaged over each HRU. The simulation results are compared to simulations with the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS). We are particularly interested in SUMMA's capability to mimic the behavior of the other two models through the selection of appropriate model parameterizations in SUMMA.
Tracking and Control of a Neutral Particle Beam Using Multiple Model Adaptive Meer Filter.
1987-12-01
...method incorporated by Zicker in 1983 [32]. Once the beam estimation problem had been solved, the problem of beam control was examined. Zicker conducted a ... filter. Then, the methods applied by Meer, and later Zicker, to reduce the computational load of a simple Meer filter, will be presented. 2.5.1 Basic ... number of possible methods to prune the hypothesis tree and chose the "Best Half Method" as the most viable [21]. Zicker [32] applied the work of Weiss ...
NASA Technical Reports Server (NTRS)
Clancy, R. T.; Lee, S. W.
1991-01-01
An analysis of emission-phase-function (EPF) observations from the Viking Orbiter Infrared Thermal Mapper (IRTM) yields a wide variety of results regarding dust and cloud scattering in the Mars atmosphere and atmospheric-corrected albedos for the surface of Mars. A multiple scattering radiative transfer model incorporating a bidirectional phase function for the surface and atmospheric scattering by dust and clouds is used to derive surface albedos and dust and ice optical properties and optical depths for these various conditions on Mars.
2012-10-01
...shifted right by a pitch (P) and subsequently summed to provide a multi-gate superimposed temperature distribution, TMG(x). An example is shown in the figure ... the temperature rise over the coolant, or the difference between the centerline multi-gate junction temperature on the upper surface of the GaN, TMG,GaN(0), and the coolant temperature. Nomenclature: TC, coolant temperature (°C); TCP(x), cold plate temperature distribution (°C); TGaN(x,y), temperature distribution within the GaN (°C); TMG(x), multiple-gate superimposed temperature distribution (°C).
Energy dispersive X-ray analysis on an absolute scale in scanning transmission electron microscopy.
Chen, Z; D'Alfonso, A J; Weyland, M; Taplin, D J; Allen, L J; Findlay, S D
2015-10-01
We demonstrate absolute scale agreement between the number of X-ray counts in energy dispersive X-ray spectroscopy using an atomic-scale coherent electron probe and first-principles simulations. Scan-averaged spectra were collected across a range of thicknesses with precisely determined and controlled microscope parameters. Ionization cross-sections were calculated using the quantum excitation of phonons model, incorporating dynamical (multiple) electron scattering, which is seen to be important even for very thin specimens. Copyright © 2015 Elsevier B.V. All rights reserved.
Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank
2016-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957
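The competing models being compared here can be made concrete. Below is a minimal version of the discrete-capacity mixture idea: recall errors follow a von Mises distribution for remembered items mixed with a uniform distribution for guesses, and the guess rate and concentration are recovered by maximum likelihood over a grid. The simulated 70/30 split and kappa = 8 are arbitrary, and stimulus-specific precision and the variable-precision alternative are omitted:

```python
# Discrete-capacity mixture model for VWM recall errors (toy fit by grid search).
import numpy as np

rng = np.random.default_rng(7)

def vonmises_pdf(x, kappa):
    return np.exp(kappa * np.cos(x)) / (2 * np.pi * np.i0(kappa))

# simulate 2000 recall errors: 70% remembered (kappa = 8), 30% pure guesses
n = 2000
remembered = rng.random(n) < 0.7
errors = np.where(remembered, rng.vonmises(0.0, 8.0, n),
                  rng.uniform(-np.pi, np.pi, n))

# maximum-likelihood fit of (guess rate g, concentration kappa)
def loglik(g, kappa):
    return np.sum(np.log((1 - g) * vonmises_pdf(errors, kappa) + g / (2 * np.pi)))

g_hat, k_hat = max(((g, k) for g in np.linspace(0, 0.9, 91)
                    for k in np.linspace(1, 20, 39)), key=lambda p: loglik(*p))
print(f"estimated guess rate {g_hat:.2f}, concentration {k_hat:.1f}")
```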
Model-based adaptive 3D sonar reconstruction in reverberating environments.
Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le
2015-10-01
In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments like shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter, based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction of arrival trajectories of multiple echoes impinging the array. Echo tracking is perceived as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness of fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction.
Behavior systems and reinforcement: an integrative approach.
Timberlake, W
1993-01-01
Most traditional conceptions of reinforcement are based on a simple causal model in which responding is strengthened by the presentation of a reinforcer. I argue that reinforcement is better viewed as the outcome of constraint of a functioning causal system composed of multiple interrelated causal sequences, complex linkages between causes and effects, and a set of initial conditions. Using a simplified system conception of the reinforcement situation, I review the similarities and drawbacks of traditional reinforcement models and analyze the recent contributions of cognitive, regulatory, and ecological approaches. Finally, I show how the concept of behavior systems can begin to incorporate both traditional and recent conceptions of reinforcement in an integrative approach. PMID:8354963
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate that existing models assuming methods detect organisms independently produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
Multiple Cognitive Control Effects of Error Likelihood and Conflict
Brown, Joshua W.
2010-01-01
Recent work on cognitive control has suggested a variety of performance monitoring functions of the anterior cingulate cortex, such as errors, conflict, error likelihood, and others. Given the variety of monitoring effects, a corresponding variety of control effects on behavior might be expected. This paper explores whether conflict and error likelihood produce distinct cognitive control effects on behavior, as measured by response time. A change signal task (Brown & Braver, 2005) was modified to include conditions of likely errors due to tardy as well as premature responses, with and without conflict. The results discriminate between competing hypotheses of independent vs. interacting conflict and error likelihood control effects. Specifically, the results suggest that the likelihood of premature vs. tardy response errors can lead to multiple distinct control effects, which are independent of cognitive control effects driven by response conflict. As a whole, the results point to the existence of multiple distinct cognitive control mechanisms and challenge existing models of cognitive control that incorporate only a single control signal. PMID:19030873
A visualization tool to support decision making in environmental and biological planning
Romañach, Stephanie S.; McKelvy, James M.; Conzelmann, Craig; Suir, Kevin J.
2014-01-01
Large-scale ecosystem management involves consideration of many factors for informed decision making. The EverVIEW Data Viewer is a cross-platform desktop decision support tool to help decision makers compare simulation model outputs from competing plans for restoring Florida's Greater Everglades. The integration of NetCDF metadata conventions into EverVIEW allows end-users from multiple institutions within and beyond the Everglades restoration community to share information and tools. Our development process incorporates continuous interaction with targeted end-users for increased likelihood of adoption. One of EverVIEW's signature features is side-by-side map panels, which can be used to simultaneously compare species or habitat impacts from alternative restoration plans. Other features include examination of potential restoration plan impacts across multiple geographic or tabular displays, and animation through time. As a result of an iterative, standards-driven approach, EverVIEW is relevant to large-scale planning beyond Florida, and is used in multiple biological planning efforts in the United States.
The Effects of Study Tasks in a Computer-Based Chemistry Learning Environment
NASA Astrophysics Data System (ADS)
Urhahne, Detlef; Nick, Sabine; Poepping, Anna Christin; Schulz, Sarah Jayne
2013-12-01
The present study examines the effects of different study tasks on the acquisition of knowledge about acids and bases in a computer-based learning environment. Three different task formats were selected to create three treatment conditions: learning with gap-fill and matching tasks, learning with multiple-choice tasks, and learning only from text and figures without any additional tasks. Participants were 196 ninth-grade students who learned with a self-developed multimedia program in a pretest-posttest control-group design. Research results reveal that gap-fill and matching tasks were most effective in promoting knowledge acquisition, followed by multiple-choice tasks and then by learning without tasks. The findings are in line with previous research on this topic. The effects can possibly be explained by the generation-recognition model, which predicts that gap-fill and matching tasks trigger more encompassing learning processes than multiple-choice tasks. It is concluded that instructional designers should incorporate more challenging study tasks to enhance the effectiveness of computer-based learning environments.
A new paper-based platform technology for point-of-care diagnostics.
Gerbers, Roman; Foellscher, Wilke; Chen, Hong; Anagnostopoulos, Constantine; Faghri, Mohammad
2014-10-21
Currently, lateral flow immunoassays (LFIAs) are not able to perform complex multi-step immunodetection tests because of their inability to introduce multiple reagents to the detection area autonomously and in a controlled manner. In this research, a point-of-care (POC) paper-based lateral flow immunosensor was developed incorporating a novel microfluidic valve technology. Layers of paper and tape were used to create a three-dimensional structure forming the fluidic network. Unlike existing LFIAs, multiple directional valves are embedded in the test strip layers to control the order and timing of mixing for the sample and multiple reagents. In this paper, we report a four-valve device which autonomously directs three different fluids to flow sequentially over the detection area. As proof of concept, a three-step alkaline phosphatase based enzyme-linked immunosorbent assay (ELISA) protocol with rabbit IgG as the model analyte was conducted to prove the suitability of the device for immunoassays. A detection limit of about 4.8 fM was obtained.
Guthrie, M.; Myers, C.E.; Gluck, M.A.
2015-01-01
The striatal dopamine signal has multiple facets; tonic level, phasic rise and fall, and variation of the phasic rise/fall depending on the expectation of reward/punishment. We have developed a network model of the striatal direct pathway using an ionic current level model of the medium spiny neuron that incorporates currents sensitive to changes in the tonic level of dopamine. The model neurons in the network learn action selection based on a novel set of mathematical rules that incorporate the phasic change in the dopamine signal. This network model is capable of learning to perform a sequence learning task that in humans is thought to be dependent on the basal ganglia. When both tonic and phasic levels of dopamine are decreased, as would be expected in unmedicated Parkinson’s disease (PD), the model reproduces the deficits seen in a human PD group off medication. When the tonic level is increased to normal, but with reduced phasic increases and decreases in response to reward and punishment respectively, as would be expected in PD medicated with L-Dopa, the model again reproduces the human data. These findings support the view that the cognitive dysfunctions seen in Parkinson’s disease are not solely due to either the decreased tonic level of dopamine or to the decreased responsiveness of the phasic dopamine signal to reward and punishment, but to a combination of the two factors that varies dependent on disease stage and medication status. PMID:19162084
Posa, Mihalj; Pilipović, Ana; Lalić, Mladena; Popović, Jovan
2011-02-15
A linear dependence between temperature (t) and the retention coefficient (k, reversed-phase HPLC) of bile acids is obtained. The parameters (a, intercept and b, slope) of the linear function k=f(t) correlate highly with bile acid structure. The investigated bile acids form linear congeneric groups on a principal component score plot (calculated from k=f(t)) that are in accordance with the conformations of the hydroxyl and oxo groups on the bile acid steroid skeleton. The partition coefficient (K(p)) of nitrazepam in bile acid micelles is investigated. Nitrazepam molecules incorporated in micelles show modified bioavailability (depot effect, higher permeability, etc.). Using the multiple linear regression (MLR) method, QSAR models of the nitrazepam partition coefficient K(p) are derived at temperatures of 25°C and 37°C. At both temperatures, the experimentally obtained lipophilicity parameter (PC1 from the k=f(t) data) and in silico descriptors of molecular shape are included as predictors; at the higher temperature, molecular polarisation is also introduced. This indicates that the incorporation mechanism of nitrazepam into bile acid micelles changes at higher temperatures. QSAR models are also derived using the partial least squares (PLS) method. The experimental k=f(t) parameters are shown to be significant predictive variables. Both QSAR models are validated using cross-validation and internal validation methods. The PLS models have slightly higher predictive capability than the MLR models. Copyright © 2010 Elsevier B.V. All rights reserved.
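As a rough sketch of the modelling pipeline described here (per-acid linear fits of k against t, PCA on the fitted parameters, then regression of K(p) on PC1 plus a structural descriptor), the following uses simulated data; every value and the two-predictor choice are placeholders, not the study's dataset.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    temps = np.array([20.0, 25.0, 30.0, 35.0, 40.0])         # column temperatures (degrees C)
    k = rng.random((8, 5))                                   # retention coefficients, 8 bile acids
    ab = np.array([np.polyfit(temps, row, 1) for row in k])  # per-acid slope b and intercept a
    pc1 = PCA(n_components=1).fit_transform(ab)              # experimental lipophilicity parameter
    shape = rng.random((8, 1))                               # stand-in in silico shape descriptor
    Kp = rng.random(8)                                       # measured partition coefficients
    X = np.hstack([pc1, shape])
    mlr = LinearRegression().fit(X, Kp)                      # MLR analogue of the QSAR step
    print("R^2 =", mlr.score(X, Kp))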
A regional-scale ecological risk framework for environmental flow evaluations
NASA Astrophysics Data System (ADS)
O'Brien, Gordon C.; Dickens, Chris; Hines, Eleanor; Wepener, Victor; Stassen, Retha; Quayle, Leo; Fouchy, Kelly; MacKenzie, James; Graham, P. Mark; Landis, Wayne G.
2018-02-01
Environmental flow (E-flow) frameworks advocate holistic, regional-scale, probabilistic E-flow assessments that consider flow and non-flow drivers of change in a socio-ecological context as best practice. Regional-scale ecological risk assessments of multiple stressors to social and ecological endpoints, which address ecosystem dynamism, have been undertaken internationally at different spatial scales using the relative-risk model since the mid-1990s. With the recent incorporation of Bayesian belief networks into the relative-risk model, a robust regional-scale ecological risk assessment approach is available that can contribute to achieving the best-practice recommendations of E-flow frameworks. PROBFLO is a holistic E-flow assessment method that incorporates the relative-risk model and Bayesian belief networks (BN-RRM) into a transparent probabilistic modelling tool that addresses uncertainty explicitly. PROBFLO has been developed to evaluate the socio-ecological consequences of historical, current and future water resource use scenarios and generate E-flow requirements at regional spatial scales. The approach has been implemented in two regional-scale case studies in Africa where its flexibility and functionality have been demonstrated. In both case studies the evidence-based outcomes facilitated informed environmental management decision making, with trade-off considerations in the context of social and ecological aspirations. This paper presents the PROBFLO approach as applied to the Senqu River catchment in Lesotho and further developments and application in the Mara River catchment in Kenya and Tanzania. The 10 BN-RRM procedural steps incorporated in PROBFLO are demonstrated with examples from both case studies. PROBFLO can contribute to the adaptive management of water resources, inform the allocation of resources for sustainable use, and address protection requirements.
Development of a Detailed Surface Chemistry Framework in DSMC
NASA Technical Reports Server (NTRS)
Swaminathan-Gopalan, K.; Borner, A.; Stephani, K. A.
2017-01-01
Many current direct simulation Monte Carlo (DSMC) codes still employ only simple surface catalysis models. These include only basic mechanisms such as dissociation, recombination, and exchange reactions, without any provision for adsorption and finite-rate kinetics. Incorporating finite-rate chemistry at the surface is increasingly becoming a necessity for various applications such as high-speed re-entry flows over thermal protection systems (TPS), micro-electro-mechanical systems (MEMS), surface catalysis, etc. In recent years, relatively few works have examined finite-rate surface reaction modeling using the DSMC method. In this work, a generalized finite-rate surface chemistry framework incorporating a comprehensive list of reaction mechanisms is developed and implemented into the DSMC solver SPARTA. The various mechanisms include adsorption, desorption, Langmuir-Hinshelwood (LH), Eley-Rideal (ER), collision-induced (CI), condensation, sublimation, etc. The approach is to stochastically model the various competing reactions occurring on a set of active sites. Both gas-surface (e.g., ER, CI) and pure-surface (e.g., LH, desorption) reaction mechanisms are incorporated. The reaction mechanisms can also be catalytic or surface-altering based on the participation of the bulk-phase species (e.g., bulk carbon atoms). Marschall and MacLean developed a general formulation in which multiple phases and surface sites are used, and we adopt a similar convention in the current work. Microscopic parameters of reaction probabilities (for gas-surface reactions) and frequencies (for pure-surface reactions) that are required for DSMC are computed from the surface properties and macroscopic parameters such as rate constants, sticking coefficients, etc. The energy and angular distributions of the products are determined by the reaction type and input parameters. Thus, the user has the capability to model various surface reactions via user-specified reaction rate constants, surface properties and parameters.
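A toy sketch of the core stochastic step: selecting among competing surface mechanisms when a gas particle strikes an active site. The probabilities below are arbitrary placeholders, not SPARTA inputs or values from the framework.

    import random

    def pick_mechanism(p_adsorb, p_er, p_ci):
        # Whatever probability mass remains corresponds to non-reactive scattering.
        r = random.random()
        if r < p_adsorb:
            return "adsorption"
        if r < p_adsorb + p_er:
            return "eley-rideal"
        if r < p_adsorb + p_er + p_ci:
            return "collision-induced"
        return "scatter"

    counts = {}
    for _ in range(100000):
        m = pick_mechanism(0.30, 0.10, 0.05)
        counts[m] = counts.get(m, 0) + 1
    print(counts)   # event frequencies converge to the specified probabilities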
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther
2017-01-01
The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. In the damage model, a semi-coupled approach is employed where the overall damage in a particular coordinate direction is assumed to be a multiplicative combination of the damage in that direction resulting from the applied loads in various coordinate directions. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. In the current paper, the complete development of the failure model is described and the generation of a tabulated failure surface for a representative composite material is discussed.
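For context, the classical Tsai-Wu criterion that the yield function generalizes can be written in Voigt notation as (a standard textbook form, not transcribed from the paper):

    \[ f(\sigma) = F_i \sigma_i + F_{ij} \sigma_i \sigma_j = 1, \qquad i, j = 1, \dots, 6, \]

where the strength tensors \(F_i\) and \(F_{ij}\) are fitted from uniaxial and interaction tests. In the model described above, this quadratic form is extended into a yield function whose evolution is driven by tabulated stress-strain data and paired with a non-associative flow rule, so that plastic strain increments need not be normal to the yield surface.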
Novel microbial fuel cell design to operate with different wastewaters simultaneously.
Mathuriya, Abhilasha Singh
2016-04-01
A novel single cathode chamber and multiple anode chamber microbial fuel cell design (MAC-MFC) was developed by incorporating multiple anode chambers into a single unit, and its performance was evaluated. During 60 days of operation, the performance of the MAC-MFC was assessed and compared with a standard single anode/cathode chamber microbial fuel cell (SC-MFC). The tests showed that the MAC-MFC generated stable and higher power outputs than the SC-MFC, with each anode chamber contributing efficiently. Further, different wastewaters were fed to the different anode chambers of the MAC-MFC, and the effect on MFC performance was observed. The MAC-MFC efficiently treated multiple wastewaters simultaneously at low cost and in a small footprint, supporting its candidacy for future scale-up applications. Copyright © 2015. Published by Elsevier B.V.
Anarjan, Navideh; Jafarizadeh-Malmiri, Hoda; Nehdi, Imededdine Arbi; Sbihi, Hassen Mohamed; Al-Resayes, Saud Ibrahim; Tan, Chin Ping
2015-01-01
Nanodispersion systems allow incorporation of lipophilic bioactives, such as astaxanthin (a fat soluble carotenoid) into aqueous systems, which can improve their solubility, bioavailability, and stability, and widen their uses in water-based pharmaceutical and food products. In this study, response surface methodology was used to investigate the influences of homogenization time (0.5–20 minutes) and speed (1,000–9,000 rpm) on the formation of astaxanthin nanodispersions via the solvent-diffusion process. The product was characterized for particle size and astaxanthin concentration using laser diffraction particle size analysis and high performance liquid chromatography, respectively. Relatively high determination coefficients (ranging from 0.896 to 0.969) were obtained for all suggested polynomial regression models. The overall optimal homogenization conditions were determined by multiple response optimization analysis to be 6,000 rpm for 7 minutes. In vitro cellular uptake of astaxanthin from the suggested individual and multiple optimized astaxanthin nanodispersions was also evaluated. The cellular uptake of astaxanthin was found to be considerably increased (by more than five times) as it became incorporated into optimum nanodispersion systems. The lack of a significant difference between predicted and experimental values confirms the suitability of the regression equations connecting the response variables studied to the independent parameters. PMID:25709435
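A compact sketch of the response-surface step: fit a quadratic polynomial in homogenization time and speed and inspect the fit. The simulated response is constructed so its optimum sits near the reported 7 minutes and 6,000 rpm; all numbers are illustrative.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    time_min = rng.uniform(0.5, 20, 30)       # homogenization time (minutes)
    speed = rng.uniform(1000, 9000, 30)       # homogenization speed (rpm)
    size = 100 + (time_min - 7.0) ** 2 + ((speed - 6000.0) / 500.0) ** 2 \
           + rng.normal(0, 5, 30)             # particle size, minimum near (7, 6000)

    X = np.column_stack([time_min, speed])
    rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    rsm.fit(X, size)
    print("R^2 =", rsm.score(X, size))        # cf. the 0.896-0.969 range reported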
Defense Mechanisms: Discussions and Bibliographies; General or Multiple, and Specific.
ERIC Educational Resources Information Center
Pedrini, D. T.; Pedrini, Bonnie C.
This publication considers some Freudian ego mechanisms. The first discussion and bibliography concerns defense mechanisms, in general or in multiple; after which, the discussions and bibliographies concern specific defense mechanisms: denial; displacement, substitution, sublimation; fixation; identification, introjection, incorporation,…
Cost-effective solutions to maintaining smart grid reliability
NASA Astrophysics Data System (ADS)
Qin, Qiu
As aging power systems increasingly operate closer to their capacity and thermal limits, maintaining sufficient reliability has been of great concern to government agencies, utility companies, and users. This dissertation focuses on improving the reliability of transmission and distribution systems. Based on wide-area measurements, multiple model algorithms are developed to diagnose transmission line three-phase short-to-ground faults in the presence of protection misoperations. The multiple model algorithms utilize the electric network dynamics to provide prompt and reliable diagnosis outcomes. Computational complexity of the diagnosis algorithm is reduced by using a two-step heuristic. The multiple model algorithm is incorporated into a hybrid simulation framework, which consists of both continuous-state simulation and discrete-event simulation, to study the operation of transmission systems. With hybrid simulation, a line switching strategy for enhancing the tolerance to protection misoperations is studied based on the concept of a security index, which involves the faulted mode probability and stability coverage. Local measurements are used to track the generator state, and faulty mode probabilities are calculated in the multiple model algorithms. FACTS devices are considered as controllers for the transmission system. The placement of FACTS devices into power systems is investigated with a criterion of maintaining a prescribed level of control reconfigurability. Control reconfigurability measures the small-signal combined controllability and observability of a power system with an additional requirement on fault tolerance. For the distribution systems, a hierarchical framework, including a high-level recloser allocation scheme and a low-level recloser placement scheme, is presented. The impacts of recloser placement on the reliability indices are analyzed. Evaluation of reliability indices in the placement process is carried out via discrete event simulation. The reliability requirements are described with probabilities and evaluated from the empirical distributions of reliability indices.
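The essence of a multiple model diagnosis step can be sketched as a Bayesian update of mode probabilities from measurement likelihoods; the model set and likelihood values below are invented for illustration.

    import numpy as np

    def update_mode_probs(priors, likelihoods):
        # priors: probability of each model; likelihoods: p(measurement | model)
        posterior = priors * likelihoods
        return posterior / posterior.sum()

    probs = np.array([0.98, 0.01, 0.01])      # [normal, fault A, fault B]
    for lik in ([0.9, 0.4, 0.3], [0.2, 0.9, 0.3], [0.1, 0.95, 0.2]):
        probs = update_mode_probs(probs, np.array(lik))
    print(probs)   # probability mass migrates toward the model matching the data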
Microbial dormancy improves development and experimental validation of ecosystem model
Wang, Gangsheng; Jagadamma, Sindhu; Mayes, Melanie; ...
2014-07-11
Climate feedbacks from soils can result from environmental change followed by the response of plant and microbial communities, and/or associated changes in nutrient cycling. Explicit consideration of microbial life-history traits and functions may be necessary to predict climate feedbacks due to changes in the physiology and community composition of microbes and their associated effect on carbon cycling. Here, we enhanced the Microbial-Enzyme-mediated Decomposition (MEND) model by incorporating microbial dormancy and the ability to track multiple isotopes of carbon. We tested two versions of MEND, i.e., MEND with dormancy and MEND without dormancy, against long-term (270 d) lab incubations of four soils with isotopically-labeled substrates. MEND without dormancy adequately fitted multiple observations (total and 14C respiration, and dissolved organic carbon), but at the cost of significantly underestimating the total microbial biomass. MEND with dormancy improved estimates of microbial biomass by 20-71% over MEND without dormancy. We observed large differences in two fitted model parameters, the specific maintenance and growth rates for active microbes, depending on whether dormancy was considered. Together, our model extrapolations of the incubation study show that long-term soil incubations with observations in multiple carbon pools are necessary to estimate both decomposition and microbial parameters. These efforts should provide essential support to future field- and global-scale simulations and enable more confident predictions of feedbacks between environmental change and carbon cycling.
Grossberg, Stephen; Markowitz, Jeffrey; Cao, Yongqiang
2011-12-01
Visual object recognition is an essential accomplishment of advanced brains. Object recognition needs to be tolerant, or invariant, with respect to changes in object position, size, and view. In monkeys and humans, a key area for recognition is the anterior inferotemporal cortex (ITa). Recent neurophysiological data show that ITa cells with high object selectivity often have low position tolerance. We propose a neural model whose cells learn to simulate this tradeoff, as well as ITa responses to image morphs, while explaining how invariant recognition properties may arise in stages due to processes across multiple cortical areas. These processes include the cortical magnification factor, multiple receptive field sizes, and top-down attentive matching and learning properties that may be tuned by task requirements to attend to either concrete or abstract visual features with different levels of vigilance. The model predicts that data from the tradeoff and image morph tasks emerge from different levels of vigilance in the animals performing them. This result illustrates how different vigilance requirements of a task may change the course of category learning, notably the critical features that are attended and incorporated into learned category prototypes. The model outlines a path for developing an animal model of how defective vigilance control can lead to symptoms of various mental disorders, such as autism and amnesia. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
So, Byung-Jin; Kim, Jin-Young; Kwon, Hyun-Han; Lima, Carlos H. R.
2017-10-01
A conditional copula function based downscaling model in a fully Bayesian framework is developed in this study to evaluate future changes in intensity-duration-frequency (IDF) curves in South Korea. The model incorporates a quantile mapping approach for bias correction, while integrated Bayesian inference allows accounting for parameter uncertainties. The proposed approach is used to temporally downscale expected changes in daily rainfall, inferred from multiple CORDEX-RCMs based on Representative Concentration Pathway (RCP) 4.5 and 8.5 scenarios, into sub-daily temporal scales. Among the CORDEX-RCMs, a noticeable increase in rainfall intensity is observed in HadGem3-RA (9%), RegCM (28%), and SNU_WRF (13%) on average, whereas no noticeable changes are observed in GRIMs (-2%) for the period 2020-2050. More specifically, a 5-30% increase in rainfall intensity is expected in all of the CORDEX-RCMs for 50-year return values under the RCP 8.5 scenario. Uncertainty in simulated rainfall intensity gradually decreases toward longer durations, which is largely associated with the enhanced strength of the relationship with the 24-h annual maximum rainfalls (AMRs). A primary advantage of the proposed model is that projected changes in future rainfall intensities are well preserved.
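A minimal sketch of empirical quantile mapping, one simple (non-Bayesian) stand-in for the bias-correction step mentioned above; all distributions are simulated.

    import numpy as np

    def quantile_map(model_hist, obs_hist, model_future):
        # Map each future value to its quantile in the historical model
        # distribution, then to the same quantile of the observations.
        q = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
        return np.quantile(obs_hist, np.clip(q, 0.0, 1.0))

    rng = np.random.default_rng(1)
    model_hist = rng.gamma(2.0, 10.0, 1000)     # biased RCM daily rainfall (mm)
    obs_hist = rng.gamma(2.0, 12.0, 1000)       # observed daily rainfall (mm)
    model_future = rng.gamma(2.2, 10.0, 1000)   # scenario-driven projection
    print(quantile_map(model_hist, obs_hist, model_future)[:5])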
Locally adaptive, spatially explicit projection of US population for 2030 and 2050.
McKee, Jacob J; Rose, Amy N; Bright, Edward A; Huynh, Timmy; Bhaduri, Budhendra L
2015-02-03
Localized adverse events, including natural hazards, epidemiological events, and human conflict, underscore the criticality of quantifying and mapping current population. Building on the spatial interpolation technique previously developed for high-resolution population distribution data (LandScan Global and LandScan USA), we have constructed an empirically informed spatial distribution of projected population of the contiguous United States for 2030 and 2050, depicting one of many possible population futures. Whereas most current large-scale, spatially explicit population projections typically rely on a population gravity model to determine areas of future growth, our projection model departs from these by accounting for multiple components that affect population distribution. Modeled variables, which included land cover, slope, distances to larger cities, and a moving average of current population, were locally adaptive and geographically varying. The resulting weighted surface was used to determine which areas had the greatest likelihood for future population change. Population projections of county level numbers were developed using a modified version of the US Census's projection methodology, with the US Census's official projection as the benchmark. Applications of our model include incorporating multiple various scenario-driven events to produce a range of spatially explicit population futures for suitability modeling, service area planning for governmental agencies, consequence assessment, mitigation planning and implementation, and assessment of spatially vulnerable populations.
Network hydraulics inclusion in water quality event detection using multiple sensor stations data.
Oliker, Nurit; Ostfeld, Avi
2015-09-01
Event detection is one of the most challenging current topics in water distribution systems analysis: how can regular on-line hydraulic (e.g., pressure, flow) and water quality (e.g., pH, residual chlorine, turbidity) measurements at different network locations be efficiently utilized to detect water quality contamination events? This study describes an integrated event detection model which combines data from multiple sensor stations with network hydraulics. To date, event detection modelling has largely been limited to a single sensor station location and dataset. Single-station models are detached from network hydraulic insights and, as a result, may be significantly exposed to false-positive alarms. This work aims to reduce this limitation by integrating local and spatial hydraulic data understanding into an event detection model. The spatial analysis complements the local event detection effort by discovering events with lower signatures through exploring the sensors' mutual hydraulic influences. The unique contribution of this study is in incorporating hydraulic simulation information into the overall event detection process of spatially distributed sensors. The methodology is demonstrated on two example applications using base runs and sensitivity analyses. Results show a clear advantage of the suggested model over single-sensor event detection schemes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Restoration of acidic mine spoils with sewage sludge: II measurement of solids applied
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stucky, D.J.; Zoeller, A.L.
1980-01-01
Sewage sludge was incorporated into acidic strip-mine spoils at rates equivalent to 0, 224, 336, and 448 dry metric tons (dmt)/ha and placed in pots in a greenhouse. Spoil parameters were determined 48 hours after sludge incorporation, Time Planting (P), and five months after orchardgrass (Dactylis glomerata L.) was planted, Time Harvest (H), in the pots. Parameters measured were: pH, organic matter content (OM), cation exchange capacity (CEC), electrical conductivity (EC) and yield. Values for each parameter were significantly different at the two sampling times. Correlation coefficient values were calculated for all parameters versus rates of applied sewage sludge and all parameters versus each other. Multiple regressions were performed, stepwise, for all parameters versus rates of applied sewage sludge. Equations to predict amounts of sewage sludge incorporated in spoils were derived for individual and multiple parameters. Generally, measurements made at Time P achieved the highest correlation coefficient and multiple correlation coefficient values; therefore, the authors concluded data from Time P had the greatest predictive value. The most important measured variable for predicting the rate of applied sewage sludge was pH, and some additional accuracy was obtained by including CEC in the equation. This experiment indicated that soil properties can be used to estimate amounts of sewage sludge solids required to reclaim acidic mine spoils and to estimate quantities incorporated.
Multielevation calibration of frequency-domain electromagnetic data
Minsley, Burke J.; Kass, M. Andy; Hodges, Greg; Smith, Bruce D.
2014-01-01
Systematic calibration errors must be taken into account because they can substantially impact the accuracy of inverted subsurface resistivity models derived from frequency-domain electromagnetic data, resulting in potentially misleading interpretations. We have developed an approach that uses data acquired at multiple elevations over the same location to assess calibration errors. A significant advantage is that this method does not require prior knowledge of subsurface properties from borehole or ground geophysical data (though these can be readily incorporated if available), and is, therefore, well suited to remote areas. The multielevation data were used to solve for calibration parameters and a single subsurface resistivity model that are self consistent over all elevations. The deterministic and Bayesian formulations of the multielevation approach illustrate parameter sensitivity and uncertainty using synthetic- and field-data examples. Multiplicative calibration errors (gain and phase) were found to be better resolved at high frequencies and when data were acquired over a relatively conductive area, whereas additive errors (bias) were reasonably resolved over conductive and resistive areas at all frequencies. The Bayesian approach outperformed the deterministic approach when estimating calibration parameters using multielevation data at a single location; however, joint analysis of multielevation data at multiple locations using the deterministic algorithm yielded the most accurate estimates of calibration parameters. Inversion results using calibration-corrected data revealed marked improvement in misfit, lending added confidence to the interpretation of these models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baca, Renee Nicole; Congdon, Michael L.; Brake, Matthew Robert
In 2012, a Matlab GUI for the prediction of the coefficient of restitution was developed in order to enable the formulation of more accurate Finite Element Analysis (FEA) models of components. This report details the development of a new Rebound Dynamics GUI and how it differs from the previously developed program. The new GUI includes several new features, such as source and citation documentation for the material database, as well as a multiple-materials impact modeler for use with LMS Virtual.Lab Motion (LMS VLM), a rigid-body dynamics modeling package. The Rebound Dynamics GUI has been designed to work with LMS VLM to enable straightforward incorporation of velocity-dependent coefficients of restitution in rigid-body dynamics simulations.
NASA Astrophysics Data System (ADS)
Das, Debottam; Ghosh, Kirtiman; Mitra, Manimala; Mondal, Subhadeep
2018-01-01
We consider an extension of the standard model (SM) augmented by two neutral singlet fermions per generation and a leptoquark. In order to generate the light neutrino masses and mixing, we incorporate the inverse seesaw mechanism. Right-handed (RH) neutrino production in this model is significantly larger than in the conventional inverse seesaw scenario. We analyze the different collider signatures of this model and find that final states with three or more leptons, multiple jets, and at least one b-tagged and/or τ-tagged jet can probe a larger RH neutrino mass scale. We also propose a same-sign dilepton signal region associated with multiple jets and missing energy that can be used to distinguish the present scenario from the usual inverse-seesaw-extended SM.
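For reference, in the inverse seesaw the light neutrino mass matrix takes the standard schematic form (quoted from the general literature, not from this paper):

    \[ m_\nu \simeq m_D \left(M^{T}\right)^{-1} \mu \, M^{-1} m_D^{T}, \]

where \(m_D\) is the Dirac mass matrix, \(M\) the lepton-number-conserving mass connecting the two singlet fermions, and \(\mu\) a small lepton-number-violating mass. Because the light masses are suppressed by \(\mu\) rather than by \(1/M\), \(M\) can sit near the TeV scale with appreciable light-heavy mixing, which is what enhances RH neutrino production at colliders.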
Calibrated Multiple Event Relocations of the Central and Eastern United States
NASA Astrophysics Data System (ADS)
Yeck, W. L.; Benz, H.; McNamara, D. E.; Bergman, E.; Herrmann, R. B.; Myers, S. C.
2015-12-01
Earthquake locations are first-order observables that form the basis of a wide range of seismic analyses. Currently, the ANSS catalog primarily contains published single-event earthquake locations that rely on assumed 1D velocity models. Increasing the accuracy of cataloged earthquake hypocenter locations and origin times and constraining their associated errors can improve our understanding of Earth structure and have a fundamental impact on subsequent seismic studies. Multiple-event relocation algorithms often increase the precision of relative earthquake hypocenters but are hindered by their limited ability to provide realistic location uncertainties for individual earthquakes. Recently, a Bayesian approach to the multiple-event relocation problem has proven to have many benefits, including the ability to: (1) handle large data sets; (2) easily incorporate a priori hypocenter information; (3) model phase assignment errors; and (4) correct for errors in the assumed travel time model. In this study we employ bayesloc [Myers et al., 2007, 2009] to relocate earthquakes in the Central and Eastern United States from 1964 to present. We relocate ~11,000 earthquakes with a dataset of ~439,000 arrival-time observations. Our dataset includes arrival-time observations from the ANSS catalog supplemented with arrival-time data from the Reviewed ISC Bulletin (prior to 1981), targeted local studies, and arrival-time data from the TA Array. One significant benefit of the bayesloc algorithm is its ability to incorporate a priori constraints on the probability distributions of specific earthquake location parameters. To constrain the inversion, we use high-quality calibrated earthquake locations from local studies, including studies from: Raton Basin, Colorado; Mineral, Virginia; Guy, Arkansas; Cheneville, Quebec; Oklahoma; and Mt. Carmel, Illinois. We also add depth constraints to 232 earthquakes from regional moment tensors. Finally, we add constraints from four historic (1964-1973) ground-truth events from a verification database. We (1) evaluate our ability to improve our location estimates, (2) use improved locations to evaluate Earth structure in seismically active regions, and (3) examine improvements to the estimated locations of historic large-magnitude earthquakes.
Surrogate modeling of deformable joint contact using artificial neural networks.
Eskinazi, Ilan; Fregly, Benjamin J
2015-09-01
Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
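A minimal sketch of the surrogate idea, assuming invented geometry and data: train a feed-forward network on sampled input-output pairs from a slow contact model, then query the network instead. The pose parameterization, force law, and architecture are placeholders, not the authors' setup.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(2)
    pose = rng.uniform(-1, 1, (5000, 6))          # sampled 6-DOF relative pose
    # Stand-in for the slow contact model: penetration-driven normal force.
    force = np.maximum(0.0, -pose[:, 2]) ** 1.5 * 1e3 + rng.normal(0, 1, 5000)

    ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
    ann.fit(pose[:4000], force[:4000])            # one ANN per force/torque output
    print("held-out R^2:", ann.score(pose[4000:], force[4000:]))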
Fortes, Nara Lúcia Perondi; Navas-Cortés, Juan A; Silva, Carlos Alberto; Bettiol, Wagner
2016-01-01
The objectives of this study were to evaluate the combined effects of soil biotic and abiotic factors on the incidence of Fusarium corn stalk rot, during four annual incorporations of two types of sewage sludge into soil in a 5-year field assay under tropical conditions, and to predict the effects of these variables on the disease. For each type of sewage sludge, the following treatments were included: a control with the mineral fertilization recommended for corn; a control without fertilization; sewage sludge based on the nitrogen concentration that provided the same amount of nitrogen as the mineral fertilizer treatment; and sewage sludge that provided two, four and eight times the nitrogen concentration recommended for corn. Increasing dosages of both types of sewage sludge incorporated into soil resulted in increased corn stalk rot incidence, which was negatively correlated with corn yield. A global analysis highlighted the effect of the year of the experiment, followed by the sewage sludge dosages. The type of sewage sludge did not affect disease incidence. A multiple logistic model was fitted using a stepwise procedure, which selected three explanatory parameters for disease incidence: electrical conductivity, magnesium, and Fusarium population. In the selected model, the probability of higher disease incidence increased with an increase in these three explanatory parameters. When the explanatory parameters were compared, electrical conductivity had a dominant effect and was the main variable for predicting the probability distribution curves of Fusarium corn stalk rot after sewage sludge application to the soil. PMID:27176597
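A brief sketch of the selected three-predictor logistic model; the coefficients and data below are simulated for illustration and are not the fitted values from the study.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    ec = rng.uniform(0.2, 3.0, 200)           # electrical conductivity
    mg = rng.uniform(1.0, 10.0, 200)          # magnesium
    fus = rng.uniform(1e2, 1e5, 200)          # Fusarium population
    logit = -6.0 + 2.5 * ec + 0.2 * mg + 2e-5 * fus   # EC given the dominant weight
    y = rng.random(200) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([ec, mg, np.log10(fus)])
    clf = LogisticRegression().fit(X, y)
    print(clf.coef_)   # EC is expected to dominate, mirroring the reported model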
Development of a three dimensional numerical water quality model for continental shelf applications
NASA Technical Reports Server (NTRS)
Spaulding, M.; Hunter, D.
1975-01-01
A model to predict the distribution of water quality parameters in three dimensions was developed. The mass transport equation was solved using a non-dimensional vertical axis and an alternating-direction-implicit finite difference technique. The reaction kinetics of the constituents were incorporated into a matrix method which permits computation of the interactions of multiple constituents. Methods for the computation of dispersion coefficients and coliform bacteria decay rates were determined. Numerical investigations of dispersive and dissipative effects showed that the three-dimensional model performs as predicted by analysis of simpler cases. The model was then applied to a two dimensional vertically averaged tidal dynamics model for the Providence River. It was also extended to a steady state application by replacing the time step with an iteration sequence. This modification was verified by comparison to analytical solutions and applied to a river confluence situation.
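For orientation, the three-dimensional mass transport equation solved by such a model can be written generically as (the standard advection-dispersion-reaction form, not a transcription from the report):

    \[ \frac{\partial c_i}{\partial t} + u \frac{\partial c_i}{\partial x} + v \frac{\partial c_i}{\partial y} + w \frac{\partial c_i}{\partial z} = \frac{\partial}{\partial x}\left(E_x \frac{\partial c_i}{\partial x}\right) + \frac{\partial}{\partial y}\left(E_y \frac{\partial c_i}{\partial y}\right) + \frac{\partial}{\partial z}\left(E_z \frac{\partial c_i}{\partial z}\right) + \sum_j K_{ij} c_j, \]

where \(c_i\) is the concentration of constituent \(i\), \((u, v, w)\) the velocity field, \(E_x, E_y, E_z\) the dispersion coefficients, and \(K_{ij}\) the kinetic matrix coupling the constituents; the matrix term is what lets the interactions of multiple constituents be computed together.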
A UML approach to process modelling of clinical practice guidelines for enactment.
Knape, T; Hederman, L; Wade, V P; Gargan, M; Harris, C; Rahman, Y
2003-01-01
Although clinical practice guidelines (CPGs) have been suggested as a means of encapsulating best practice in evidence-based medical treatment, their usage in clinical environments has been disappointing. Criticisms of guideline representations have been that they are predominantly narrative and are difficult to incorporate into clinical information systems. This paper analyses the use of UML process modelling techniques for guideline representation and proposes the automated generation of executable guidelines using XMI. This hybrid UML-XMI approach provides flexible authoring of guideline decision and control structures whilst integrating appropriate data flow. It also uses an open XMI standard interface to allow the use of authoring tools and process control systems from multiple vendors. The paper first surveys CPG modelling formalisms, followed by a brief introduction to process modelling in UML. Furthermore, the modelling of CPGs in UML is presented, leading to a case study of encoding a diabetes mellitus CPG using UML.
An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2005-01-01
An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.
Hidden dynamics in models of discontinuity and switching
NASA Astrophysics Data System (ADS)
Jeffrey, Mike R.
2014-04-01
Sharp switches in behaviour, like impacts, stick-slip motion, or electrical relays, can be modelled by differential equations with discontinuities. A discontinuity approximates fine details of a switching process that lie beyond a bulk empirical model. The theory of piecewise-smooth dynamics describes what happens assuming we can solve the system of equations across its discontinuity. What this typically neglects is that effects which are vanishingly small outside the discontinuity can have an arbitrarily large effect at the discontinuity itself. Here we show that such behaviour can be incorporated within the standard theory through nonlinear terms, and these introduce multiple sliding modes. We show that the nonlinear terms persist in more precise models, for example when the discontinuity is smoothed out. The nonlinear sliding can be eliminated, however, if the model contains an irremovable level of unknown error, which provides a criterion for systems to obey the standard Filippov laws for sliding dynamics at a discontinuity.
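Schematically, the construction can be written as (a generic rendering of the hidden-dynamics idea, with \(g\) standing for the nonlinear terms; not copied verbatim from the paper):

    \[ \dot{x} = \frac{1+\lambda}{2} f^{+}(x) + \frac{1-\lambda}{2} f^{-}(x) + \left(1-\lambda^{2}\right) g(x, \lambda), \qquad \lambda \in [-1, 1], \]

where \(f^{\pm}\) are the vector fields on either side of the discontinuity and \(\lambda\) interpolates between them. The term in \(g\) vanishes for \(|\lambda| = 1\), so it is invisible outside the switching surface, yet on the surface it can create multiple sliding modes that the standard Filippov convention (\(g = 0\)) does not admit.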
Hay, L.; Knapp, L.
1996-01-01
Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS and a SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.
NASA Astrophysics Data System (ADS)
Wu, Y.; Luo, Z.; Zhou, H.; Xu, C.
2017-12-01
Regional gravity field recovery is of great importance for understanding ocean circulation and currents in oceanography and for investigating the structure of the lithosphere in geophysics. Under the framework of the remove-compute-restore (RCR) methodology, a regional approach using spherical radial basis functions (SRBFs) is set up for gravity field determination using the GOCE (Gravity Field and Steady-State Ocean Circulation Explorer) gravity gradient tensor and heterogeneous gravimetry and altimetry measurements. The added value that GOCE data contribute to the regional model is validated and quantified. Numerical experiments in a western European region show that the effects introduced by GOCE data appear as long-wavelength patterns on the centimeter scale in terms of quasi-geoid heights, which may allow the remaining long-wavelength errors and biases in ground-based data to be highlighted and reduced, improving the regional model. The accuracy of the gravimetric quasi-geoid computed with a combination of three diagonal components is improved by 0.6 cm (0.5 cm) in the Netherlands (Belgium), compared to that derived from gravimetry and altimetry data alone, when GOCO05s is used as the reference model. The performance of different diagonal components and their combinations is not identical; the solution with vertical gradients shows the highest quality when a single component is used. Incorporation of multiple components further improves the model, and the combination of three components shows the best fit to GPS/leveling data. Moreover, the contributions introduced by different components are heterogeneous in terms of spatial coverage and magnitude, although similar structures occur in the spatial domain. Contributions introduced by the vertical components have the most significant effects when a single component is applied. Combining multiple components further magnifies these effects and improves the solutions, with the incorporation of all three components having the most prominent effects. This work is supported by the State Scholarship Fund from the Chinese Scholarship Council (201306270014), the China Postdoctoral Science Foundation (No. 2016M602301), and the National Natural Science Foundation of China (No. 41374023).
Yang, Yi-Feng
2013-12-01
The present paper evaluates the relation between transformational leadership and market orientation, along with the mediating and moderating effects of change commitment, for employees in customer centers in Taiwan. 327 questionnaires were returned by personnel at several customer centers in four different insurance companies. Inter-rater agreement among the multiple raters (i.e., the consumer-related employees from the division groups) of one individual (i.e., a manager) was acceptable, indicating that the aggregated measures could be used. The multi-source sample comprised data taken from the four division centers: phone services, customer representatives, financial specialists, and front-line salespeople. The relations were assessed using a multiple mediation procedure incorporating bootstrap techniques and PRODCLIN2 with structural equation modeling analysis. The results reflect a mediating role for change commitment.
Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery
NASA Astrophysics Data System (ADS)
Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.
2017-05-01
In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
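A small sketch of the Poisson-Gaussian degradation step used to corrupt the simulated imagery; the gain and read-noise values are arbitrary placeholders.

    import numpy as np

    def add_poisson_gaussian(img, gain=0.5, read_sigma=2.0, rng=None):
        rng = rng or np.random.default_rng()
        shot = rng.poisson(img / gain) * gain             # signal-dependent shot noise
        read = rng.normal(0.0, read_sigma, img.shape)     # signal-independent read noise
        return shot + read

    clean = np.full((64, 64), 100.0)     # flat synthetic scene (mean photo-electrons)
    noisy = add_poisson_gaussian(clean)
    print(noisy.mean(), noisy.std())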
The motivation, skills, and decision-making model of "drug abuse" prevention.
Sussman, Steve; Earleywine, Mitchell; Wills, Thomas; Cody, Christine; Biglan, Tony; Dent, Clyde W; Newcomb, Michael D
2004-01-01
This article summarizes the theoretical basis for targeted prevention programs as they apply to different high-risk groups. We explain the advantages and disadvantages of different definitions of risk and discuss strategies for preventing drug use related problems in high-risk youth. Productive prevention programs for many at-risk groups share similar components, including those that address motivation, skills, and decision making. We present key aspects of these three components and link them to theories in clinical psychology, social psychology, sociology, and chemical dependence treatment. Among a total of 29 promising targeted prevention programs, we describe examples of empirically evaluated, intensive interventions that have made a positive impact on the attitudes and behavior of multiple problem youth. Incorporating the perspectives of multiple disciplines appears essential for progress in drug abuse and other problem behavior prevention.
Simulating Cancer Growth with Multiscale Agent-Based Modeling
Wang, Zhihui; Butner, Joseph D.; Kerketta, Romica; Cristini, Vittorio; Deisboeck, Thomas S.
2014-01-01
There have been many techniques developed in recent years to in silico model a variety of cancer behaviors. Agent-based modeling is a specific discrete-based hybrid modeling approach that allows simulating the role of diversity in cell populations as well as within each individual cell; it has therefore become a powerful modeling method widely used by computational cancer researchers. Many aspects of tumor morphology including phenotype-changing mutations, the adaptation to microenvironment, the process of angiogenesis, the influence of extracellular matrix, reactions to chemotherapy or surgical intervention, the effects of oxygen and nutrient availability, and metastasis and invasion of healthy tissues have been incorporated and investigated in agent-based models. In this review, we introduce some of the most recent agent-based models that have provided insight into the understanding of cancer growth and invasion, spanning multiple biological scales in time and space, and we further describe several experimentally testable hypotheses generated by those models. We also discuss some of the current challenges of multiscale agent-based cancer models. PMID:24793698
Lin, Jr-Jiun; Weng, Tzu-Hua; Tseng, Wen-Pin; Chen, Shang-Yu; Fu, Chia-Ming; Lin, Hui-Wen; Liao, Chun-Hsing; Lee, Tai-Fen; Hsueh, Po-Ren; Chen, Shey-Ying
2018-02-21
Vascular infections (VI) are potentially catastrophic complications of nontyphoid Salmonella (NTS) infection. We aimed to develop a scoring model incorporating information from blood culture time to positivity (TTP-NTSVI) and to compare its capability to predict VI among adults with NTS bacteremia against a previously published score (Chen-NTSVI). This retrospective cohort study enrolled 217 adults aged ≥ 50 years with NTS bacteremia. We developed the TTP-NTSVI score by multiple logistic regression modeling to identify independent predictors of imaging-confirmed VI, assigning each model predictor a point value weighted by the natural logarithm of its odds ratio. The Chen-NTSVI score includes hypertension, male sex, serogroup C1, and coronary arterial disease (CAD) as positive predictors, and malignancy and immunosuppressive therapy as negative predictors. The prediction capability of the two scores was compared by the area under the receiver operating characteristic curve (AUC). The mean age was 68.3 ± 11.2 years. Serogroup D was the predominant isolate (155/217, 71.4%). Seventeen (7.8%) patients had VI. Four independent predictors of VI were identified: male sex (24.9 [2.59-239.60]; 6) (odds ratio [95% confidence interval]; assigned score points), peripheral arterial occlusive disease (9.41 [2.21-40.02]; 4), CAD (4.0 [1.16-13.86]; 3), and TTP < 10 h (4.67 [1.42-15.39]; 3). Youden's index indicated a best cutoff value of ≥ 7, with 70.6% sensitivity and 82.5% specificity. The TTP-NTSVI score had a higher AUC than Chen-NTSVI (0.851 vs 0.741, P = 0.039). While the previously reported scoring model performed well, the TTP-incorporated scoring model showed improved capability in predicting NTSVI. Copyright © 2018. Published by Elsevier B.V.
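The point assignment described, weighting by the natural logarithm of each odds ratio, can be reproduced by scaling and rounding the log-odds. In the sketch below, a scale factor of 2 (our assumption, inferred from the reported values rather than stated in the abstract) maps the four odds ratios onto the published points:

```python
import math

# (predictor, odds ratio, published point value) from the abstract
predictors = [
    ("male sex",                            24.9, 6),
    ("peripheral arterial occlusive dis.",   9.41, 4),
    ("coronary arterial disease",            4.0,  3),
    ("time to positivity < 10 h",            4.67, 3),
]

SCALE = 2.0  # assumed scaling so that round(SCALE * ln(OR)) matches the paper

for name, oddsratio, published in predictors:
    points = round(SCALE * math.log(oddsratio))
    print(f"{name:35s} ln(OR)={math.log(oddsratio):4.2f} -> {points} (published {published})")

# A patient's TTP-NTSVI score is the sum of points for the predictors present;
# the abstract reports a best cutoff of >= 7 (70.6% sensitivity, 82.5% specificity).
```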
Wuellner, M R; Bramblett, R G; Guy, C S; Zale, A V; Roberts, D R; Johnson, J
2013-05-01
The objectives of this study were (1) to determine whether the presence or absence of prairie fishes can be modelled using habitat and biotic characteristics measured at the reach and catchment scales and (2) to identify which scale (i.e. reach, catchment or a combination of variables measured at both scales) best explains the presence or absence of fishes. Reach and catchment information from 120 sites sampled from 1999 to 2004 were incorporated into tree classifiers for 20 prairie fish species, and multiple criteria were used to evaluate the models. Fewer than six models were considered significant when modelling individual fish occurrences at the reach, catchment or combined scale, and only one species was successfully modelled at all three scales. The scarcity of significant models is probably related to the rigorous criteria by which they were evaluated, as well as to the prevalence of tolerant, generalist fishes in these stochastic and intermittent streams. No significant differences in the amount of reduced deviance, mean misclassification error rate (MER) or mean improvement in MER were detected among the three scales. Results from this study underscore the importance of continued habitat assessment at smaller scales to further understand prairie-fish occurrences, as well as further evaluation of modelling methods to examine habitat relationships for tolerant, ubiquitous species. Incorporation of such suggestions in the future may help provide more accurate models that will allow for better management and conservation of prairie-fish species. © 2013 The Authors. Journal of Fish Biology © 2013 The Fisheries Society of the British Isles.
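Tree classification of species presence/absence from reach- and catchment-scale predictors, evaluated by misclassification error, can be sketched with scikit-learn; the feature matrix and response below are random placeholders, not the study's data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n_sites = 120  # matches the number of sampled sites in the study

# Placeholder habitat variables: in a real application the columns would be
# reach-scale (e.g., depth, substrate) and catchment-scale (e.g., area,
# land cover) predictors.
X = rng.normal(size=(n_sites, 6))
presence = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=1.0, size=n_sites)) > 0

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10, random_state=0)
cv_accuracy = cross_val_score(tree, X, presence, cv=5)
print("mean misclassification error:", 1 - cv_accuracy.mean())
```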
Scott, Gregory G; Margulies, Susan S; Coats, Brittany
2016-10-01
Traumatic brain injury (TBI) is a leading cause of death and disability in the USA. To help understand and better predict TBI, researchers have developed complex finite element (FE) models of the head which incorporate many biological structures such as scalp, skull, meninges, brain (with gray/white matter differentiation), and vasculature. However, most models drastically simplify the membranes and substructures between the pia and arachnoid membranes. We hypothesize that substructures in the pia-arachnoid complex (PAC) contribute substantially to brain deformation following head rotation, and that including them in FE models improves the accuracy of extra-axial hemorrhage prediction. To test these hypotheses, microscale FE models of the PAC were developed to span the variability of PAC substructure anatomy and regional density. The constitutive response of these models was then integrated into an existing macroscale FE model of the immature piglet brain to identify changes in cortical stress distribution and predictions of extra-axial hemorrhage (EAH). Incorporating regional variability of PAC substructures substantially altered the distribution of principal stress on the cortical surface of the brain compared to a uniform representation of the PAC. Simulations of 24 non-impact rapid head rotations in an immature piglet animal model resulted in improved accuracy of EAH prediction (to 94% sensitivity, 100% specificity), as well as high accuracy in regional hemorrhage prediction (to 82-100% sensitivity, 100% specificity). We conclude that including biofidelic PAC substructure variability in FE models of the head is essential for improved predictions of hemorrhage at the brain/skull interface.
NASA Astrophysics Data System (ADS)
Wildhaber, M. L.; Wikle, C. K.; Anderson, C. J.; Franz, K. J.; Moran, E. H.; Dey, R.
2012-12-01
Recent decades have brought substantive changes in land use and climate across the earth, prompting a need to think of population and community ecology not as a static entity but as a dynamic process. There is increasing evidence of ecological change due to climate change. Although much of this evidence comes from ground-truth observations of biogeographic data, there is growing reliance on models that relate climate variables to biological systems. Such models can be used to explore potential changes to population- and community-level ecological systems in response to climate scenarios obtained from global climate models (GCMs). A key issue in modeling ecosystem response to climate is downscaling GCM output to the regional and local ecological/biological response models used in vulnerability and risk assessments of the potential effects of climate change. What is needed is an explicit means of scaling results up or down multiple hierarchical levels, together with an effective assessment of the uncertainty surrounding current knowledge, data, and data collection methods; these goals were identified as in need of acceleration in the U.S. Climate Change Science Program FY2009 Implementation Priorities. In the end, such work should provide the information needed to develop adaptation and mitigation methodologies that minimize the effects of directional and nonlinear climate change on the Nation's land, water, ecosystems, and biological populations. We are working to develop an approach that includes multi-scale, hierarchical Bayesian modeling of Missouri River sturgeon population dynamics. Statistical linkages are defined to quantify the implications of climate for fish populations of the Missouri River ecosystem. This approach is a hybrid between physical (deterministic) downscaling and statistical downscaling, recognizing that there is uncertainty in both. The model must include linkages between climate and habitat, and between habitat and population. A key advantage of the hierarchical approach used in this study is that it incorporates various sources of observations and includes established scientific knowledge and the associated uncertainties. The goal is to evaluate the potential distributional changes in an ecological system, given distributional changes implied by a series of linked climate and system models under various emissions/use scenarios. The predictive modeling system being developed will be a powerful tool for evaluating management options for coping with the consequences of global change and for assessing the uncertainty of those evaluations. Specifically for the endangered pallid sturgeon (Scaphirhynchus albus), we are already able to assess the potential effects of any climate scenario on growth and population size distribution. Future models will incorporate survival and reproduction. Ultimately, these models provide guidance for successful recovery and conservation of the pallid sturgeon. Here we present a basic outline of the approach we are developing and a simple pallid sturgeon example demonstrating how multiple scales and parameter uncertainty are incorporated.
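The climate-habitat-population hierarchy, with uncertainty carried through each level, can be illustrated by simple Monte Carlo propagation; the study itself uses hierarchical Bayesian models, and every distribution and coefficient below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# Level 1: downscaled climate forcing (e.g., a discharge anomaly), with
# spread representing GCM/downscaling uncertainty.
climate = rng.normal(loc=0.0, scale=1.0, size=n_draws)

# Level 2: habitat response to climate, with an uncertain link parameter
# plus residual noise.
beta_hab = rng.normal(loc=-0.4, scale=0.1, size=n_draws)
habitat = 1.0 + beta_hab * climate + rng.normal(scale=0.2, size=n_draws)

# Level 3: population growth rate as a function of habitat.
beta_pop = rng.normal(loc=0.3, scale=0.05, size=n_draws)
growth = beta_pop * habitat + rng.normal(scale=0.05, size=n_draws)

print("P(growth < 0):", np.mean(growth < 0))           # a simple risk summary
print("90% interval:", np.percentile(growth, [5, 95]))
```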
Study on formation of step bunching on 6H-SiC (0001) surface by kinetic Monte Carlo method
NASA Astrophysics Data System (ADS)
Li, Yuan; Chen, Xuejiang; Su, Juan
2016-05-01
The formation and evolution of step bunching during step-flow growth of 6H-SiC (0001) surfaces were studied by a three-dimensional kinetic Monte Carlo (KMC) method and compared with an analytic model based on Burton-Cabrera-Frank (BCF) theory. In the KMC model the crystal lattice was represented by a structured mesh that fixed the positions of atoms and the interatomic bonding. The events considered in the model were adatom adsorption and diffusion on the terrace, and adatom attachment, detachment and interlayer transport at the step edges. In addition, the effects of Ehrlich-Schwoebel (ES) barriers at downward step edges and incorporation barriers at upward step edges were considered. To obtain more detailed information about the behavior of atoms on the crystal surface, silicon and carbon atoms were treated as the minimal diffusing species. KMC simulation results showed that multiple-height steps were formed on vicinal surfaces oriented toward the [1-100] or [11-20] directions, and the formation mechanism of the step bunching was analyzed. Finally, to further analyze the formation processes of step bunching, a one-dimensional BCF analytic model with ES and incorporation barriers was solved numerically. In the BCF model, periodic boundary conditions (PBC) were applied, and the parameters corresponded to those used in the KMC model. The evolution character of the step bunching was consistent with the results obtained by the KMC simulation.
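A full 3D lattice KMC is beyond a short sketch, but the role of the asymmetric step-edge barriers can be explored in a toy 1D solid-on-solid model: hops that descend a step are suppressed by an ES factor, hops that ascend by an incorporation factor, and occasional deposition drives growth. All rates are illustrative assumptions, and this toy is an analogue of, not a reproduction of, the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D solid-on-solid vicinal surface: a uniform train of 10 steps.
L, n_steps = 200, 10
heights = np.arange(L) * n_steps // L

P_ES = 0.3   # acceptance for hops that descend a step (Ehrlich-Schwoebel barrier)
P_INC = 0.7  # acceptance for hops that ascend a step (incorporation barrier)

for _ in range(200_000):
    if rng.random() < 0.01:               # occasional deposition drives growth
        heights[rng.integers(L)] += 1
        continue
    i = int(rng.integers(L))
    j = (i + rng.choice((-1, 1))) % L     # periodic boundaries, as in the BCF model
    dh = heights[j] - heights[i]
    p = P_ES if dh < 0 else (P_INC if dh > 0 else 1.0)
    if rng.random() < p:                  # an accepted hop moves one unit of height
        heights[i] -= 1
        heights[j] += 1

# Bunching diagnostic: widths of terraces between step edges.
edges = np.flatnonzero(np.diff(heights) != 0)
print("terrace widths:", np.diff(edges))
```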
Praveen, Paurush; Fröhlich, Holger
2013-01-01
Inferring regulatory networks from experimental data via probabilistic graphical models is a popular framework for gaining insight into biological systems. However, the inherent noise in experimental data, coupled with limited sample sizes, reduces the performance of network reverse engineering. Prior knowledge from existing sources of biological information can address this low signal-to-noise problem by biasing the network inference towards biologically plausible network structures. Although integrating various sources of information is desirable, their heterogeneous nature makes this task challenging. We propose two computational methods for incorporating various information sources into a probabilistic consensus structure prior for use in graphical model inference. Our first model, called the Latent Factor Model (LFM), assumes a high degree of correlation among external information sources and reconstructs a hidden variable as a common source in a Bayesian manner. The second model, a Noisy-OR, picks up the strongest support for an interaction among information sources in a probabilistic fashion. Our extensive computational studies on KEGG signaling pathways as well as on gene expression data from breast cancer and yeast heat shock response reveal that both approaches can significantly enhance the reconstruction accuracy of Bayesian Networks compared to other competing methods as well as to the situation without any prior. Our framework allows for the use of diverse information sources, such as pathway databases, GO terms and protein domain data, and is flexible enough to integrate new sources as they become available. PMID:23826291
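The Noisy-OR combination has a compact closed form: an interaction's consensus prior is one minus the product of per-source failure terms, so a single strong source dominates. A sketch (the source weights and support values are illustrative):

```python
import numpy as np

def noisy_or_prior(support, reliability):
    """Consensus prior probability of an edge under a Noisy-OR model.

    support: per-source confidence in the edge (each in 0..1).
    reliability: how much each source is trusted (each in 0..1).
    The edge is 'on' unless every source independently fails to support it.
    """
    support = np.asarray(support, dtype=float)
    reliability = np.asarray(reliability, dtype=float)
    return 1.0 - np.prod(1.0 - reliability * support)

# Three hypothetical knowledge sources (e.g., pathway DB, GO terms, domain data)
print(noisy_or_prior([0.9, 0.2, 0.0], [0.8, 0.5, 0.7]))  # one strong source dominates
print(noisy_or_prior([0.1, 0.1, 0.1], [0.8, 0.5, 0.7]))  # weak everywhere -> small prior
```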
Lime-amended growing medium causes seedling growth distortions
R. Kasten Dumroese; Gale Thompson; David L. Wenny
1990-01-01
Although a commercial growing medium with incorporated agricultural lime had been successfully used for years, it caused growth distortion of coniferous and deciduous seedlings during 1988. Seedlings grown in the amended medium were stunted and chlorotic, often with disfigured needles and multiple tops. Seedlings grown in the same medium without incorporated lime grew...
ERIC Educational Resources Information Center
Hall, M. Kennedy; Mirjalili, S. Ali; Moore, Christopher L.; Rizzolo, Lawrence J.
2015-01-01
Anatomy students are often confused by multiple names ascribed to the same structure by different clinical disciplines. Increasingly, sonography is being incorporated into clinical anatomical education, but ultrasound textbooks often use names unfamiliar to the anatomist. Confusion is worsened when ultrasound names ascribed to the same structure…
NASA Astrophysics Data System (ADS)
Goff, P.; Hulse, A.; Harder, H. R.; Pierce, L. A.; Rizzo, D.; Hanley, J.; Orantes, L.; Stevens, L.; Justi, S.; Monroy, C.
2015-12-01
A computational simulation has been designed as an investigative case study through which high school students are introduced to system dynamics modeling in the high school curriculum. This case-study approach leads users through the forensics necessary to diagnose an unknown disease in a Central American village. The disease, Chagas, is endemic to 21 Latin American countries. The CDC estimates that of the 110 million people living in areas with the disease, 8 million are infected, with as many as 300,000 US cases. Chagas is caused by the protozoan parasite Trypanosoma cruzi and is spread via blood-feeding insects (vectors) that feed on vertebrates and live in crevices in the walls and roofs of adobe homes. One-third of infected people will develop chronic Chagas disease; they are asymptomatic for years before their heart or GI tract becomes enlarged, resulting in death. The case study has three parts. Students play the role of WHO field investigators and work collaboratively to: 1) use genetics to identify the host(s) and vector of the disease, 2) use a STELLA™ SIR (Susceptible, Infected, Recovered) system dynamics model to study Chagas at the village scale, and 3) develop management strategies. The simulations identify mitigation strategies known as Ecohealth Interventions (e.g., home improvements using local materials) to help stakeholders test and compare multiple optima. High school students collaborated with researchers from the University of Vermont, Loyola University and Universidad de San Carlos, Guatemala, working in labs, interviewing researchers, and incorporating multiple field data sets as part of a NSF-funded multiyear grant. The model displays stable equilibria of hosts, vectors, and disease states. Sensitivity analyses show that measures of household condition and the presence of vertebrates were significant leverage points, supporting other findings by the university research team. The village-scale model explores multiple solutions to disease mitigation, with the aim of producing students who can think long-term, better understand feedbacks, and anticipate the unexpected consequences associated with nonlinear systems. This case study enables high school teachers to incorporate ongoing research, systems modeling, and engineering design, three core goals of the Next Generation Science Standards and STEM initiatives.
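The STELLA-style village-scale SIR model reduces to three stocks and two flows; a forward-Euler loop mirrors what the visual system dynamics software integrates. The parameters below are illustrative and not calibrated to Chagas, which in a fuller model would also require vector and host stocks:

```python
# Forward-Euler integration of an SIR stock-and-flow model, mirroring the
# structure of a STELLA diagram. Parameters are illustrative only.
S, I, R = 990.0, 10.0, 0.0      # stocks: susceptible, infected, recovered
N = S + I + R
beta, gamma = 0.30, 0.10        # transmission and recovery rates (per day)
dt, days = 0.1, 365

for step in range(int(days / dt)):
    infection = beta * S * I / N    # flow: S -> I
    recovery = gamma * I            # flow: I -> R
    S -= infection * dt
    I += (infection - recovery) * dt
    R += recovery * dt

print(f"final S={S:.0f}, I={I:.0f}, R={R:.0f}")
```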
NASA Technical Reports Server (NTRS)
Bauer, S.; Hussmann, H.; Oberst, J.; Dirkx, D.; Mao, D.; Neumann, G. A.; Mazarico, E.; Torrence, M. H.; McGarry, J. F.; Smith, D. E.;
2016-01-01
We used one-way laser ranging data from International Laser Ranging Service (ILRS) ground stations to NASA's Lunar Reconnaissance Orbiter (LRO) for a demonstration of orbit determination. In the one-way setup, the state of LRO and the clock parameters of the spacecraft and all involved ground stations must be estimated simultaneously. This setup introduces many correlated parameters that are resolved by using a priori constraints. Moreover, the observation data coverage and the errors accumulating from the dynamical and clock modeling limit the maximum arc length. The objective of this paper is to investigate the effect of the arc length, the dynamical and modeling accuracy, and the observation data coverage on the accuracy of the results. We analyzed multiple arcs using lengths of 2 and 7 days during a one-week period in Science Mission phase 02 (SM02, November 2010) and compared the trajectories, the post-fit measurement residuals and the estimated clock parameters. We further incorporated simultaneous passes from multiple stations within the observation data to investigate the expected improvement in positioning. The estimated trajectories were compared to the nominal LRO trajectory, and the clock parameters (offset, rate and aging) to the results found in the literature. Arcs estimated with one-way ranging data differed by 5-30 m from the nominal LRO trajectory. While the estimated LRO clock rates agreed closely with the a priori constraints, the aging parameters absorbed clock modeling errors as the clock arc length increased. Because of high correlations between the different ground station clocks, and due to limited clock modeling accuracy, their differences agreed with the literature only in order of magnitude. We found that the incorporation of simultaneous passes requires improved modeling, in particular, to enable the expected improvement in positioning. We also found that gaps in the observation data coverage of more than 12 h (approximately 6 successive LRO orbits) prevented the successful estimation of arcs, whether shorter or longer than 2 or 7 days, with our given modeling.
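The clock parameters estimated here (offset, rate, aging) correspond to a quadratic model of clock error over an arc, so their estimation is, at its core, a least-squares polynomial fit. A sketch against synthetic residuals (all magnitudes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 7 * 86400, 500)           # a 7-day arc, seconds past epoch

# Synthetic clock error: offset + rate*t + 0.5*aging*t^2, plus measurement noise.
offset, rate, aging = 5e-6, 2e-11, 1e-18     # s, s/s, s/s^2 (illustrative)
clock_err = offset + rate * t + 0.5 * aging * t**2
residuals = clock_err + rng.normal(0, 1e-9, t.size)    # ns-level ranging noise

# Least-squares estimate of the three clock parameters.
A = np.column_stack([np.ones_like(t), t, 0.5 * t**2])
est, *_ = np.linalg.lstsq(A, residuals, rcond=None)
print("estimated offset, rate, aging:", est)
```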
Matching Teaching and Learning Styles.
ERIC Educational Resources Information Center
Caudill, Gil
1998-01-01
Outlines three basic learning modalities--auditory, visual, and tactile--and notes that technology can help incorporate multiple modalities within each lesson, to meet the needs of most students. Discusses the importance in multiple modality teaching of effectively assessing students. Presents visual, auditory and tactile activity suggestions.…
Multiple-input multiple-output causal strategies for gene selection.
Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John
2011-11-25
Traditional strategies for selecting variables in high-dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. While these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes. The latter is essentially related to the fact that high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score which incorporates a causal term. In addition we show, in a meta-analysis of six publicly available breast cancer microarray datasets, that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.
Speech recognition: Acoustic-phonetic knowledge acquisition and representation
NASA Astrophysics Data System (ADS)
Zue, Victor W.
1988-09-01
The long-term research goal is to develop and implement speaker-independent continuous speech recognition systems. It is believed that the proper utilization of speech-specific knowledge is essential for such advanced systems. This research is thus directed toward the acquisition, quantification, and representation of acoustic-phonetic and lexical knowledge, and the application of this knowledge to speech recognition algorithms. In addition, we are exploring new speech recognition alternatives based on artificial intelligence and connectionist techniques. We developed a statistical model for predicting the acoustic realization of stop consonants in various positions in the syllable template. A unification-based grammatical formalism was developed for incorporating this model into the lexical access algorithm. We provided an information-theoretic justification for the hierarchical structure of the syllable template. We analyzed segment durations for vowels and fricatives in continuous speech. Based on contextual information, we developed durational models for vowels and fricatives that account for over 70 percent of the variance, using data from multiple, unknown speakers. We rigorously evaluated the ability of human spectrogram readers to identify stop consonants spoken by many talkers and in a variety of phonetic contexts. Incorporating the declarative knowledge used by the readers, we developed a knowledge-based system for stop identification. The system achieved performance comparable to that of the readers.
Indocyanine green-incorporating nanoparticles for cancer theranostics
Wang, Haolu; Li, Xinxing; Tse, Brian Wan-Chi; Yang, Haotian; Thorling, Camilla A.; Liu, Yuxin; Touraud, Margaux; Chouane, Jean Batiste; Liu, Xin; Roberts, Michael S.; Liang, Xiaowen
2018-01-01
Indocyanine green (ICG) is a near-infrared dye that has been used in the clinic for retinal angiography, and for defining cardiovascular and liver function, for over 50 years. Recently, there has been increasing interest in the incorporation of ICG into nanoparticles (NPs) for cancer theranostic applications. Various types of ICG-incorporated NPs have been developed and strategically functionalised to embrace multiple imaging and therapeutic techniques for cancer diagnosis and treatment. This review systematically summarizes the biodistribution of the various types of ICG-incorporated NPs for the first time, and discusses the principles, opportunities, limitations, and applications of ICG-incorporated NPs for cancer theranostics. We believe that ICG-incorporated NPs would be a promising multifunctional theranostic platform in oncology and facilitate significant advancements in this research-active area. PMID:29507616
A Perspective on Multiple Waves of Influenza Pandemics
Mummert, Anna; Weiss, Howard; Long, Li-Ping; Amigó, José M.; Wan, Xiu-Feng
2013-01-01
Background: A striking characteristic of the past four influenza pandemic outbreaks in the United States has been the multiple waves of infections. However, the mechanisms responsible for the multiple waves of influenza or other acute infectious diseases are uncertain. Understanding these mechanisms could provide knowledge for health authorities to develop and implement prevention and control strategies. Materials and Methods: We exhibit five distinct mechanisms, each of which can generate two waves of infections for an acute infectious disease. The first two mechanisms capture changes in virus transmissibility and behavioral changes. The third mechanism involves population heterogeneity (e.g., demography, geography), where each wave spreads through one sub-population. The fourth mechanism is virus mutation which causes delayed susceptibility of individuals. The fifth mechanism is waning immunity. Each mechanism is incorporated into separate mathematical models, and outbreaks are then simulated. We use the models to examine the effects of the initial number of infected individuals (e.g., border control at the beginning of the outbreak) and the timing of and amount of available vaccinations. Results: Four models, individually or in any combination, reproduce the two waves of the 2009 H1N1 pandemic in the United States, both qualitatively and quantitatively. One model reproduces the two waves only qualitatively. All models indicate that significantly reducing or delaying the initial numbers of infected individuals would have little impact on the attack rate. Instead, this reduction or delay results in a single wave as opposed to two waves. Furthermore, four of these models also indicate that a vaccination program started earlier than October 2009 (when the H1N1 vaccine was initially distributed) could have eliminated the second wave of infection, while more vaccine available starting in October would not have eliminated the second wave. PMID:23637746
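Of the five mechanisms, waning immunity is the simplest to reproduce in code: an SIRS model in which recovered individuals return to the susceptible pool yields damped epidemic oscillations, i.e., a second wave. A sketch with illustrative, uncalibrated parameters:

```python
# SIRS with waning immunity: R flows back to S, enabling a second wave.
S, I, R = 0.999, 0.001, 0.0
beta, gamma, omega = 0.5, 0.25, 0.01   # transmission, recovery, waning (per day)
dt = 0.1
peaks = []
prev_I, rising = I, True
for step in range(int(720 / dt)):      # two simulated years
    new_inf = beta * S * I
    new_rec = gamma * I
    waned = omega * R
    S += (waned - new_inf) * dt
    I += (new_inf - new_rec) * dt
    R += (new_rec - waned) * dt
    if rising and I < prev_I:
        peaks.append((step * dt, prev_I))   # record each epidemic peak
    rising = I >= prev_I
    prev_I = I

print("epidemic peaks (day, prevalence):", peaks[:2])
```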
NASA Astrophysics Data System (ADS)
Crabtree, B.; Brooks, E.; Ostrowski, K.; Elliot, W. J.; Boll, J.
2006-12-01
We incorporated saturation-excess overland flow processes into the Water Erosion Prediction Project (WEPP) model for the evaluation of human disturbances in watersheds. In this presentation, we present results of applying the modified WEPP model to two watersheds: an agricultural watershed with mixed land use, and a forested watershed. The agricultural watershed is Paradise Creek, an intensively monitored watershed with continuous climate, flow and sediment data collection at multiple locations. Restoration efforts in the Paradise Creek watershed include changing to minimal-tillage or no-tillage systems, and implementation of structural practices. The forested watershed is the 28 km² Mica Creek Experimental Watershed (MCEW), where disturbances include clear and partial cutting, and road building. The MCEW has a nested study design, which allows for the analysis of cumulative effects as well as the traditional comparison of treatment versus control. Mica Creek is a high-elevation watershed where streamflow is generated mostly by snowmelt. Treatments included road building in 1997, and clearcut and partial-cut logging in 2001. Our results include the simulation of streamflow and sediment delivery at multiple locations within each watershed, and an evaluation of the human disturbances.
Zota, Ami R.; Fabian, M. Patricia; Chahine, Teresa; Julien, Rhona; Spengler, John D.; Levy, Jonathan I.
2011-01-01
Objectives. The indoor environment has not been fully incorporated into the environmental justice dialogue. To inform strategies to reduce disparities, we developed a framework to identify the individual and place-based drivers of indoor environment quality. Methods. We reviewed empirical evidence of socioeconomic disparities in indoor exposures and key determinants of these exposures for air pollutants, lead, allergens, and semivolatile organic compounds. We also used an indoor air quality model applied to multifamily housing to illustrate how nitrogen dioxide (NO2) and fine particulate matter (PM2.5) vary as a function of factors known to be influenced by socioeconomic status. Results. Indoor concentrations of multiple pollutants are elevated in low-socioeconomic status households. Differences in these exposures are driven by the combined influences of indoor sources, outdoor sources, physical structures, and residential activity patterns. Simulation models confirmed indoor sources’ importance in determining indoor NO2 and PM2.5 exposures and showed the influence of household-specific determinants. Conclusions. Both theoretical models and empirical evidence emphasized that disparities in indoor environmental exposure can be significant. Understanding key determinants of multiple indoor exposures can aid in developing policies to reduce these disparities. PMID:21836112
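The indoor-concentration simulation referenced above is commonly built on a single-zone mass balance: outdoor pollutant penetrates at the air-exchange rate, indoor sources add mass, and ventilation plus deposition remove it. A steady-state sketch (the housing parameters are illustrative, not the study's inputs):

```python
def indoor_steady_state(c_out, penetration, ach, source_rate, volume, loss_rate):
    """Single-zone steady-state indoor concentration (ug/m3).

    c_out: outdoor concentration (ug/m3)
    penetration: fraction of outdoor pollutant that penetrates (0..1)
    ach: air exchange rate (1/h)
    source_rate: indoor emission rate (ug/h), e.g., a gas stove for NO2
    volume: zone volume (m3)
    loss_rate: first-order indoor loss such as deposition (1/h)
    """
    return (penetration * ach * c_out + source_rate / volume) / (ach + loss_rate)

# Same outdoor air, different housing conditions (illustrative numbers):
print(indoor_steady_state(c_out=15, penetration=0.8, ach=0.5,
                          source_rate=200, volume=250, loss_rate=0.2))
print(indoor_steady_state(c_out=15, penetration=0.8, ach=1.5,
                          source_rate=200, volume=250, loss_rate=0.2))
```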
Comparing biomarkers as principal surrogate endpoints.
Huang, Ying; Gilbert, Peter B
2011-12-01
Recently a new definition of surrogate endpoint, the "principal surrogate," was proposed based on causal associations between treatment effects on the biomarker and on the clinical endpoint. Despite its appealing interpretation, limited research has been conducted to evaluate principal surrogates, and existing methods focus on risk models that consider a single biomarker. How to compare principal surrogate value of biomarkers or general risk models that consider multiple biomarkers remains an open research question. We propose to characterize a marker or risk model's principal surrogate value based on the distribution of risk difference between interventions. In addition, we propose a novel summary measure (the standardized total gain) that can be used to compare markers and to assess the incremental value of a new marker. We develop a semiparametric estimated-likelihood method to estimate the joint surrogate value of multiple biomarkers. This method accommodates two-phase sampling of biomarkers and is more widely applicable than existing nonparametric methods by incorporating continuous baseline covariates to predict the biomarker(s), and is more robust than existing parametric methods by leaving the error distribution of markers unspecified. The methodology is illustrated using a simulated example set and a real data set in the context of HIV vaccine trials. © 2011, The International Biometric Society.
A probabilistic fatigue analysis of multiple site damage
NASA Technical Reports Server (NTRS)
Rohrbaugh, S. M.; Ruff, D.; Hillberry, B. M.; Mccabe, G.; Grandt, A. F., Jr.
1994-01-01
The variability in initial crack size and fatigue crack growth is incorporated in a probabilistic model that is used to predict the fatigue lives for unstiffened aluminum alloy panels containing multiple site damage (MSD). The uncertainty of the damage in the MSD panel is represented by a distribution of fatigue crack lengths that are analytically derived from equivalent initial flaw sizes. The variability in fatigue crack growth rate is characterized by stochastic descriptions of crack growth parameters for a modified Paris crack growth law. A Monte-Carlo simulation explicitly describes the MSD panel by randomly selecting values from the stochastic variables and then grows the MSD cracks with a deterministic fatigue model until the panel fails. Different simulations investigate the influences of the fatigue variability on the distributions of remaining fatigue lives. Six cases that consider fixed and variable conditions of initial crack size and fatigue crack growth rate are examined. The crack size distribution exhibited a dominant effect on the remaining fatigue life distribution, and the variable crack growth rate exhibited a lesser effect on the distribution. In addition, the probabilistic model predicted that only a small percentage of the life remains after a lead crack develops in the MSD panel.
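The simulation structure described, random initial crack sizes and Paris-law constants followed by deterministic growth, can be conveyed with a single-crack sketch; the distributions, stress range, and critical crack length below are illustrative stand-ins for the study's values:

```python
import numpy as np

rng = np.random.default_rng(7)

def cycles_to_failure(a0, C, m, dsigma=100.0, a_crit=0.02, n_grid=2000):
    """Integrate the Paris law da/dN = C * (dK)^m from a0 to a_crit.

    a0: initial crack length (m); C, m: Paris constants (dK in MPa*sqrt(m));
    dsigma: stress range (MPa); geometry factor taken as 1 for simplicity.
    """
    a_grid = np.linspace(a0, a_crit, n_grid)
    dK = dsigma * np.sqrt(np.pi * a_grid)      # stress intensity factor range
    dN_da = 1.0 / (C * dK**m)                  # cycles per unit crack growth
    return np.trapz(dN_da, a_grid)             # total cycles to reach a_crit

n = 5000
a0 = rng.lognormal(mean=np.log(0.5e-3), sigma=0.4, size=n)   # initial flaw sizes
C = rng.lognormal(mean=np.log(1e-11), sigma=0.2, size=n)     # variable growth rate
lives = [cycles_to_failure(a, c, m=3.0) for a, c in zip(a0, C)]
print("median life:", np.median(lives), "10th pct:", np.percentile(lives, 10))
```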
Comparing multiple imputation methods for systematically missing subject-level data.
Kline, David; Andridge, Rebecca; Kaizar, Eloise
2017-06-01
When conducting research synthesis, the studies to be combined often do not measure the same set of variables, which creates missing data. When the studies to combine are longitudinal, missing data can occur at the observation level (time-varying) or the subject level (non-time-varying). Traditionally, the focus of missing data methods for longitudinal data has been on missing observation-level variables. In this paper, we focus on missing subject-level variables and compare two multiple imputation approaches: a joint modeling approach and a sequential conditional modeling approach. We find the joint modeling approach to be preferable to the sequential conditional approach, except when the covariance structure of the repeated outcome for each individual has homogeneous variance and exchangeable correlation. Specifically, the regression coefficient estimates from an analysis incorporating imputed values based on the sequential conditional method are attenuated and less efficient than those from the joint method. Remarkably, the estimates from the sequential conditional method are often less efficient than a complete case analysis, which, in the context of research synthesis, implies that we lose efficiency by combining studies. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of parameters that are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. To help reduce the space of feasible models and to supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state space, and to efficiently search a large, complex parameter space for behavioural parameter sets that produce predictions falling within this behavioural region. Here we present a methodology for incorporating different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The calibration task is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
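The interval-based screening at the heart of the method (before the search is handed to the Borg MOEA) can be illustrated with plain rejection sampling against limits of acceptability; the toy model and intervals below are invented, whereas the study applies this to the distributed PIHM model:

```python
import numpy as np

rng = np.random.default_rng(4)

def toy_model(params):
    """Stand-in for a catchment model: returns (runoff_ratio, baseflow_index)."""
    a, b = params
    return a / (a + b), b / (a + b + 0.5)

# Behavioural intervals from data or perception (signatures with uncertainty).
limits = {"runoff_ratio": (0.3, 0.5), "baseflow_index": (0.2, 0.4)}

behavioural = []
for _ in range(20_000):
    params = rng.uniform(0.01, 2.0, size=2)
    rr, bfi = toy_model(params)
    if (limits["runoff_ratio"][0] <= rr <= limits["runoff_ratio"][1]
            and limits["baseflow_index"][0] <= bfi <= limits["baseflow_index"][1]):
        behavioural.append(params)

print(f"{len(behavioural)} behavioural parameter sets retained")
```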
Virtual reality neurosurgery: a simulator blueprint.
Spicer, Mark A; van Velsen, Martin; Caffrey, John P; Apuzzo, Michael L J
2004-04-01
This article details preliminary studies undertaken to integrate the most relevant advancements across multiple disciplines in an effort to construct a highly realistic neurosurgical simulator based on a distributed computer architecture. Techniques based on modified computational modeling paradigms incorporating finite element analysis are presented, as are current and projected efforts directed toward the implementation of a novel bidirectional haptic device. Patient-specific data derived from noninvasive magnetic resonance imaging sequences are used to construct a computational model of the surgical region of interest. Magnetic resonance images of the brain may be coregistered with those obtained from magnetic resonance angiography, magnetic resonance venography, and diffusion tensor imaging to formulate models of varying anatomic complexity. The majority of the computational burden is encountered in the presimulation reduction of the computational model, which allows the threshold rates required for accurate, realistic real-time visual animation to be reached. Intracranial neurosurgical procedures offer an ideal testing site for the development of a totally immersive virtual-reality surgical simulator when compared with the simulations required in other surgical subspecialties. The material properties of the brain, as well as the typically small volumes of tissue exposed in the surgical field, coupled with techniques and strategies to minimize computational demands, provide unique opportunities for the development of such a simulator. The incorporation of real-time haptic and visual feedback is approached here and likely will be accomplished soon.
NoSQL data model for semi-automatic integration of ethnomedicinal plant data from multiple sources.
Ningthoujam, Sanjoy Singh; Choudhury, Manabendra Dutta; Potsangbam, Kumar Singh; Chetia, Pankaj; Nahar, Lutfun; Sarker, Satyajit D; Basar, Norazah; Das Talukdar, Anupam
2014-01-01
Sharing traditional knowledge with the scientific community could refine scientific approaches to phytochemical investigation and the conservation of ethnomedicinal plants. As such, integration of traditional knowledge with scientific data using a single platform for sharing is greatly needed. However, ethnomedicinal data are available in heterogeneous formats, which depend on cultural aspects, survey methodology and the focus of the study. Phytochemical and bioassay data are also available from many open sources in various standard and customised formats. The aim was to design a flexible data model that could integrate both primary and curated ethnomedicinal plant data from multiple sources. The current model is based on MongoDB, a 'Not only SQL' (NoSQL) database. Although it does not enforce a schema, modifications were made so that the model could incorporate both standard and customised ethnomedicinal plant data formats from different sources. The model presented can integrate both primary and secondary data related to ethnomedicinal plants. Accommodation of disparate data was accomplished by a feature of this database that supports a different set of fields for each document. It also allows storage of similar data having different properties. The model presented is scalable to a highly complex level with continuing maturation of the database, and is applicable for storing, retrieving and sharing ethnomedicinal plant data. It can also serve as a flexible alternative to a relational and normalised database. Copyright © 2014 John Wiley & Sons, Ltd.
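The schema-flexible design maps naturally onto MongoDB documents with differing field sets per record. A pymongo sketch, assuming a local MongoDB instance; the database, collection, and field names are invented for illustration:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
plants = client["ethnomed"]["plants"]   # hypothetical database/collection names

# Heterogeneous records: a field-survey entry and a curated phytochemical
# entry can live in the same collection with different field sets.
plants.insert_many([
    {
        "species": "Justicia adhatoda",
        "local_name": "Vasaka",
        "use": "cough",
        "survey": {"district": "Cachar", "year": 2012, "informants": 14},
    },
    {
        "species": "Justicia adhatoda",
        "compounds": [{"name": "vasicine", "class": "alkaloid"}],
        "bioassay": {"target": "antitussive", "result": "active"},
    },
])

# A single query spans both record shapes.
for doc in plants.find({"species": "Justicia adhatoda"}):
    print(sorted(doc.keys()))
```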
Liu, Xiang; Saat, Mohd Rapik; Barkan, Christopher P L
2014-07-15
Railroads play a key role in the transportation of hazardous materials in North America. Rail transport differs from highway transport in several respects, an important one being that rail transport involves trains in which many railcars carrying hazardous materials travel together. In contrast to truck accidents, a train accident may involve multiple hazardous materials cars derailing and releasing their contents, with consequently greater potential impact on human health, property and the environment. In this paper, a probabilistic model is developed to estimate the probability distribution of the number of tank cars releasing contents in a train derailment. The principal operational characteristics considered include train length, derailment speed, accident cause, position of the first car derailed, number and placement of tank cars in a train, and tank car safety design. The effects of train speed, tank car safety design and tank car position in a train were evaluated with respect to the number of cars that release their contents in a derailment. This research provides insights into the circumstances affecting multiple-tank-car release incidents and potential strategies to reduce their occurrence. The model can be incorporated into a larger risk management framework to enable better local, regional and national safety management of hazardous materials transportation by rail. Copyright © 2014 Elsevier B.V. All rights reserved.
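The model described chains conditional distributions: the position of the first derailed car, the number of cars derailing (speed-dependent), and a per-car conditional release probability. A Monte Carlo sketch with placeholder logic and probabilities (none of the numbers are from the paper):

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_releases(train_len=100, tank_positions=range(40, 60),
                      p_release_given_derail=0.1, speed_mph=40, n_sims=20_000):
    """Distribution of the number of tank cars releasing in a derailment.

    Placeholder logic: the first derailed car is uniform over the train; the
    number of cars derailing grows with speed; each derailed tank car
    releases independently with a fixed conditional probability.
    """
    tank_set = set(tank_positions)
    counts = np.zeros(n_sims, dtype=int)
    for s in range(n_sims):
        first = int(rng.integers(1, train_len + 1))
        n_derailed = min(rng.poisson(speed_mph / 4), train_len - first + 1)
        derailed = range(first, first + n_derailed)
        tanks_derailed = sum(1 for car in derailed if car in tank_set)
        counts[s] = rng.binomial(tanks_derailed, p_release_given_derail)
    return np.bincount(counts) / n_sims

dist = simulate_releases()
print("P(k cars release), k = 0..4:", dist[:5])
```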
Noise Modeling From Conductive Shields Using Kirchhoff Equations.
Sandin, Henrik J; Volegov, Petr L; Espy, Michelle A; Matlashov, Andrei N; Savukov, Igor M; Schultz, Larry J
2010-10-09
Progress in the development of high-sensitivity magnetic-field measurements has stimulated interest in understanding the magnetic noise of conductive materials, especially of magnetic shields based on high-permeability and/or high-conductivity materials. For example, SQUIDs and atomic magnetometers have been used in many experiments with mu-metal shields, and SQUID systems frequently have radio-frequency shielding based on thin conductive materials. Typical existing approaches to modeling noise only work with simple shield and sensor geometries, while common experimental setups today consist of multiple-sensor systems with complex shield geometries. With the complex sensor arrays used in, for example, MEG and ultra-low-field MRI studies, knowledge of the noise correlation between sensors is as important as knowledge of the noise itself; this is crucial for incorporating efficient noise cancellation schemes into the system. We developed an approach that allows us to calculate the Johnson noise for arbitrarily shaped shields and multiple-sensor systems. The approach is efficient enough to run on a single PC and return results on a minute scale. For a multiple-sensor system, our approach calculates not only the noise for each sensor but also the noise correlation matrix between sensors. Here we show how the algorithm can be implemented.
Tang, Min; Zhao, Rui; van de Velde, Helgi; Tross, Jennifer G; Mitsiades, Constantine; Viselli, Suzanne; Neuwirth, Rachel; Esseltine, Dixie-Lee; Anderson, Kenneth; Ghobrial, Irene M; San Miguel, Jesús F; Richardson, Paul G; Tomasson, Michael H; Michor, Franziska
2016-08-15
Since the pioneering work of Salmon and Durie, quantitative measures of tumor burden in multiple myeloma have been used to make clinical predictions and to model tumor growth. However, such quantitative analyses have not yet been performed on large datasets from trials using modern chemotherapy regimens. We analyzed a large set of tumor response data from three randomized controlled trials of bortezomib-based chemotherapy regimens (total sample size n = 1,469 patients) to establish and validate a novel mathematical model of multiple myeloma cell dynamics. Treatment dynamics in newly diagnosed patients were most consistent with a model postulating two tumor cell subpopulations, "progenitor cells" and "differentiated cells." Differential treatment responses were observed, with significant tumoricidal effects on differentiated cells and less clear effects on progenitor cells. We validated this model using a second trial of newly diagnosed patients and a third trial of refractory patients. When applying our model to data from relapsed patients, we found that a hybrid model incorporating both a differentiation hierarchy and clonal evolution best explains the response patterns. The clinical data, together with mathematical modeling, suggest that bortezomib-based therapy exerts a selection pressure on myeloma cells that can shape the disease phenotype, thereby generating further inter-patient variability. This model may be a useful tool for improving our understanding of disease biology and the response to chemotherapy regimens. Clin Cancer Res; 22(16); 4206-14. ©2016 American Association for Cancer Research.
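A two-subpopulation model with differential kill rates implies a biexponential decline of the tumor-burden marker, which can be fitted directly. A sketch on synthetic data (the rates and noise model are illustrative assumptions, not the trial values):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_population(t, p0, d0, kp, kd):
    """Tumor burden = progenitors + differentiated cells, each dying exponentially.

    kp, kd: kill rates for the progenitor and differentiated compartments (1/month).
    """
    return p0 * np.exp(-kp * t) + d0 * np.exp(-kd * t)

rng = np.random.default_rng(5)
t = np.linspace(0, 12, 13)                               # monthly marker measurements
true = two_population(t, p0=5.0, d0=45.0, kp=0.05, kd=0.8)
y = true * rng.lognormal(0, 0.05, t.size)                # multiplicative assay noise

params, _ = curve_fit(two_population, t, y,
                      p0=[10, 40, 0.1, 1.0], bounds=(0, np.inf))
print("p0, d0, kp, kd =", params)
```

The fast component (differentiated cells) dominates the early response, while the slow component (progenitors) governs the long-term residual burden, which is why the two rates are separable from a year of monthly data.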
Ionically Cross-Linked Polymer Networks for the Multiple-Month Release of Small Molecules
2016-01-01
Long-term (multiple-week or -month) release of small, water-soluble molecules from hydrogels remains a significant pharmaceutical challenge, which is typically overcome at the expense of more-complicated drug carrier designs. Such approaches are payload-specific and include covalent conjugation of drugs to base materials or incorporation of micro- and nanoparticles. As a simpler alternative, here we report a mild and simple method for achieving multiple-month release of small molecules from gel-like polymer networks. Densely cross-linked matrices were prepared through ionotropic gelation of poly(allylamine hydrochloride) (PAH) with either pyrophosphate (PPi) or tripolyphosphate (TPP), all of which are commonly available commercial molecules. The loading of model small molecules (Fast Green FCF and Rhodamine B dyes) within these polymer networks increases with the payload/network binding strength and with the PAH and payload concentrations used during encapsulation. Once loaded into the PAH/PPi and PAH/TPP ionic networks, only a few percent of the payload is released over multiple months. This extended release is achieved regardless of the payload/network binding strength and likely reflects the small hydrodynamic mesh size within the gel-like matrices. Furthermore, the PAH/TPP networks show promising in vitro cytocompatibility with model cells (human dermal fibroblasts), though slight cytotoxic effects were exhibited by the PAH/PPi networks. Taken together, the above findings suggest that PAH/PPi and (especially) PAH/TPP networks might be attractive materials for the multiple-month delivery of drugs and other active molecules (e.g., fragrances or disinfectants). PMID:26811936
Evaluating acetate metabolism for imaging and targeting in multiple myeloma
Fontana, Francesca; Ge, Xia; Su, Xinming; Hathi, Deep; Xiang, Jingyu; Cenci, Simone; Civitelli, Roberto; Shoghi, Kooresh I.; Akers, Walter J.; D’avignon, Andre
2016-01-01
Purpose: We hypothesized that in multiple myeloma cells (MMC), high membrane biosynthesis will induce acetate uptake in vitro and in vivo. Here, we studied acetate metabolism and targeting in MMC in vitro and tested the efficacy of 11C-acetate PET (positron emission tomography) to detect and quantitatively image myeloma treatment response in vivo. Experimental design: Acetate fate tracking using 13C-edited 1H NMR (nuclear magnetic resonance) was performed to study in vitro acetate uptake and metabolism in MMC. Effects of pharmacological modulation of acetate transport or acetate incorporation into lipids on MMC cell survival and viability were assessed. Preclinical mouse MM models of subcutaneous and bone tumors were evaluated using 11C-acetate PET/CT imaging and tissue biodistribution. Results: In vitro, NMR showed significant uptake of acetate by MMC, and acetate incorporation into intracellular metabolites and membrane lipids. Inhibition of lipid synthesis and acetate transport was toxic to MMC, while sparing resident bone cells or normal B cells. In vivo, 11C-acetate uptake by PET imaging was significantly enhanced in subcutaneous and bone MMC tumors compared to unaffected bone or muscle tissue. Likewise, 11C-acetate uptake was significantly reduced in MM tumors after treatment. Conclusions: Uptake of acetate from the extracellular environment was enhanced in MMC and was critical to cellular viability. 11C-acetate PET detected the presence of myeloma cells in vivo, including uptake in intramedullary bone disease, and also detected response to therapy in vivo. Our data suggested that acetate metabolism and incorporation into lipids was crucial to MM cell biology and that 11C-acetate PET is a promising imaging modality for MM. PMID:27486177
Xiao, Jian; Cao, Hongyuan; Chen, Jun
2017-09-15
Next generation sequencing technologies have enabled the study of the human microbiome through direct sequencing of microbial DNA, resulting in an enormous amount of microbiome sequencing data. One unique characteristic of microbiome data is the phylogenetic tree that relates all the bacterial species. Closely related bacterial species have a tendency to exhibit a similar relationship with the environment or disease. Thus, incorporating the phylogenetic tree information can potentially improve the detection power for microbiome-wide association studies, where hundreds or thousands of tests are conducted simultaneously to identify bacterial species associated with a phenotype of interest. Despite much progress in multiple testing procedures such as false discovery rate (FDR) control, methods that take into account the phylogenetic tree are largely limited. We propose a new FDR control procedure that incorporates the prior structure information and apply it to microbiome data. The proposed procedure is based on a hierarchical model, where a structure-based prior distribution is designed to utilize the phylogenetic tree. By borrowing information from neighboring bacterial species, we are able to improve the statistical power of detecting associated bacterial species while controlling the FDR at desired levels. When the phylogenetic tree is mis-specified or non-informative, our procedure achieves a similar power as traditional procedures that do not take into account the tree structure. We demonstrate the performance of our method through extensive simulations and real microbiome datasets. We identified far more alcohol-drinking associated bacterial species than traditional methods. R package StructFDR is available from CRAN. chen.jun2@mayo.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
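For contrast, the baseline such tree-aware procedures compete against is the Benjamini-Hochberg step-up rule; a sketch follows (the tree-smoothing prior itself lives in the StructFDR package and is not reproduced here):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses under BH FDR control."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# 900 null taxa + 100 associated taxa (illustrative simulation)
rng = np.random.default_rng(9)
p = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 10, size=100)])
print("rejections:", benjamini_hochberg(p).sum())
```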
Organic light emitting device having multiple separate emissive layers
Forrest, Stephen R [Ann Arbor, MI
2012-03-27
An organic light emitting device having multiple separate emissive layers is provided. Each emissive layer may define an exciton formation region, allowing exciton formation to occur across the entire emissive region. By aligning the energy levels of each emissive layer with the adjacent emissive layers, exciton formation in each layer may be improved. Devices incorporating multiple emissive layers with multiple exciton formation regions may exhibit improved performance, including internal quantum efficiencies of up to 100%.
Effects of Swept Tips on V-22 Whirl Flutter and Loads
NASA Technical Reports Server (NTRS)
Acree, C. W., Jr.
2005-01-01
A CAMRAD II model of the V-22 Osprey tiltrotor was constructed for the purpose of analyzing the effects of blade design changes on whirl flutter. The model incorporated a dual load-path grip/yoke assembly, a swashplate coupled to the transmission case, and a drive train. A multiple-trailer free wake was used for loads calculations. The effects of rotor design changes on whirl-mode stability were calculated for swept blades and offset tip masses. A rotor with swept tips and inboard tuning masses was examined in detail to reveal the mechanisms by which these design changes affect stability and loads. Certain combinations of design features greatly increased whirl-mode stability, with (at worst) moderate increases to loads.
Comulang: towards a collaborative e-learning system that supports student group modeling.
Troussas, Christos; Virvou, Maria; Alepis, Efthimios
2013-01-01
This paper describes an e-learning system that is expected to further enhance the educational process in computer-based tutoring systems by incorporating collaboration between students and work in groups. The resulting system is called "Comulang"; as a test bed for its effectiveness, a multiple-language learning system is used. Collaboration is supported by a user modeling module that is responsible for the initial creation of student clusters, from which, as a next step, working groups of students are created. A machine learning clustering algorithm works towards group formation, so that cooperation between students from different clusters is attained. One of the resulting system's basic aims is to provide efficient student groups whose limitations and capabilities are well balanced.
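Cluster-then-mix group formation, drawing one student from each cluster per working group, can be sketched with scikit-learn; the learner-model features below are invented placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Placeholder learner-model features, e.g., vocabulary, grammar, error rate.
features = rng.normal(size=(30, 3))

k = 3  # number of clusters, which also sets the group size
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

# Form mixed groups by drawing one student from each cluster in turn.
by_cluster = [list(np.flatnonzero(labels == c)) for c in range(k)]
for members in by_cluster:
    rng.shuffle(members)
# zip truncates to the smallest cluster; leftover students would need
# assignment by some additional rule in practice.
groups = list(zip(*by_cluster))
print(groups[:4])
```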
75 FR 20265 - Airworthiness Directives; Liberty Aerospace Incorporated Model XL-2 Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-19
... Airworthiness Directives; Liberty Aerospace Incorporated Model XL-2 Airplanes AGENCY: Federal Aviation...-08- 05, which applies to certain Liberty Aerospace Incorporated Model XL-2 airplanes. AD 2009-08-05...), the Director of the Federal Register approved the incorporation by reference of Liberty Aerospace, Inc...
NASA Astrophysics Data System (ADS)
Adam, J.; Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Almaraz, J. R. M.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; Andrei, C.; Andronic, A.; Anguelov, V.; Anielski, J.; Antičić, T.; Antinori, F.; Antonioli, P.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Belmont, R.; Belmont-Moreno, E.; Belyaev, V.; Bencedi, G.; Beole, S.; Berceanu, I.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biswas, R.; Biswas, S.; Bjelogrlic, S.; Blair, J. T.; Blau, D.; Blume, C.; Bock, F.; Bogdanov, A.; Bøggild, H.; Boldizsár, L.; Bombara, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Bossú, F.; Botta, E.; Böttger, S.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Breitner, T.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Caffarri, D.; Cai, X.; Caines, H.; Calero Diaz, L.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cepila, J.; Cerello, P.; Cerkala, J.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chelnokov, V.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Crochet, P.; Cruz Albino, R.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; De, S.; De Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; De Falco, A.; De Gruttola, D.; De Marco, N.; De Pasquale, S.; Deisting, A.; Deloff, A.; Dénes, E.; Deplano, C.; Dhankher, P.; Di Bari, D.; Di Mauro, A.; Di Nezza, P.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Engel, H.; Epple, E.; Erazmus, B.; Erdemir, I.; Erhardt, F.; Espagnon, B.; Estienne, M.; Esumi, S.; Eum, J.; Evans, D.; Evdokimov, S.; Eyyubova, G.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Fleck, M. G.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Frankenfeld, U.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gallio, M.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Germain, M.; Gheata, A.; Gheata, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Grachov, O. A.; Graczykowski, L. K.; Graham, K. L.; Grelli, A.; Grigoras, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grinyov, B.; Grion, N.; Gronefeld, J. M.; Grosse-Oetringhaus, J. F.; Grossiord, J.-Y.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Haake, R.; Haaland, Ø.; Hadjidakis, C.; Haiduc, M.; Hamagaki, H.; Hamar, G.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Heide, M.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Hosokawa, R.; Hristov, P.; Huang, M.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Ippolitov, M.; Irfan, M.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Jang, H. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jimenez Bustamante, R. T.; Jones, P. G.; Jung, H.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kamin, J.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Mohisin Khan, M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, H.; Kim, J. S.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobayashi, T.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Koyithatta Meethaleveedu, G.; Králik, I.; Kravčáková, A.; Kretz, M.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Ladron de Guevara, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lea, R.; Leardini, L.; Lee, G. R.; Lee, S.; Lehas, F.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; León Vargas, H.; Leoncino, M.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. 
A.; Ljunggren, H. M.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martin Blanco, J.; Martinengo, P.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Massacrier, L.; Mastroserio, A.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzoni, M. A.; Mcdonald, D.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Miake, Y.; Mieskolainen, M. M.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Minervini, L. M.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Molnar, L.; Montaño Zetina, L.; Montes, E.; Moreira De Godoy, D. A.; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Munzer, R. H.; Murray, S.; Musa, L.; Musinsky, J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Nayak, K.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Nellen, L.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Oh, S. K.; Ohlson, A.; Okatan, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira Da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pagano, P.; Paić, G.; Pal, S. K.; Pan, J.; Pandey, A. K.; Papcun, P.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, W. J.; Parmar, S.; Passfeld, A.; Paticchio, V.; Patra, R. N.; Paul, B.; Peitzmann, T.; Pereira Da Costa, H.; Pereira De Oliveira Filho, E.; Peresunko, D.; Pérez Lara, C. E.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Piano, S.; Pikna, M.; Pillot, P.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Read, K. F.; Redlich, K.; Reed, R. J.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Revol, J.-P.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rocco, E.; Rodríguez Cahuantzi, M.; Rodriguez Manso, A.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Romita, R.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Sadovsky, S.; Šafařík, K.; Sahlmuller, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. 
K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Šándor, L.; Sandoval, A.; Sano, M.; Sarkar, D.; Scapparone, E.; Scarlassara, F.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schuchmann, S.; Schukraft, J.; Schulc, M.; Schuster, T.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Shigaki, K.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singha, S.; Singhal, V.; Sinha, B. C.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Søgaard, C.; Song, J.; Song, M.; Song, Z.; Soramel, F.; Sorensen, S.; Sozzi, F.; Spacek, M.; Spiriti, E.; Sputowska, I.; Spyropoulou-Stassinaki, M.; Stachel, J.; Stan, I.; Stefanek, G.; Stenlund, E.; Steyn, G.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Szabo, A.; Szanto de Toledo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tangaro, M. A.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thäder, J.; Thomas, D.; Tieulent, R.; Timmins, A. R.; Toia, A.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vajzer, M.; Vala, M.; Valencia Palomo, L.; Vallero, S.; Van Der Maarel, J.; Van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vechernin, V.; Veen, A. M.; Veldhoen, M.; Velure, A.; Venaruzzo, M.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Verweij, M.; Vickovic, L.; Viesti, G.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Vinogradov, Y.; Virgili, T.; Vislavicius, V.; Viyogi, Y. P.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Vranic, D.; Vrláková, J.; Vulpescu, B.; Vyushin, A.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilde, M.; Wilk, G.; Wilkinson, J.; Williams, M. C. S.; Windelband, B.; Winn, M.; Yaldo, C. G.; Yang, H.; Yang, P.; Yano, S.; Yasar, C.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Yushmanov, I.; Zaborowska, A.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zgura, I. S.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zinovjev, G.; Zyzak, M.
2016-02-01
We report on two-particle charge-dependent correlations in pp, p-Pb, and Pb-Pb collisions as a function of the pseudorapidity and azimuthal angle differences, $\Delta\eta$ and $\Delta\varphi$ respectively. These correlations are studied using the balance function, which probes the charge creation time and the development of collectivity in the produced system. The dependence of the balance function on the event multiplicity, as well as on the trigger and associated particle transverse momentum ($p_{\mathrm{T}}$), in pp, p-Pb, and Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 7$, 5.02, and 2.76 TeV, respectively, is presented. In the low transverse momentum region, $0.2 < p_{\mathrm{T}} < 2.0$ GeV/$c$, the balance function becomes narrower in both the $\Delta\eta$ and $\Delta\varphi$ directions in all three systems for events with higher multiplicity. The experimental findings favor models that either incorporate some collective behavior (e.g. AMPT) or different mechanisms that lead to effects resembling collective behavior (e.g. PYTHIA8 with color reconnection). For higher values of transverse momenta the balance function becomes even narrower but exhibits no multiplicity dependence, indicating that the observed narrowing with increasing multiplicity at low $p_{\mathrm{T}}$ is a feature of bulk particle production.
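To make the observable concrete, the following is a minimal single-event sketch in Python/NumPy of one common balance-function convention, $B(\Delta\eta) = \frac{1}{2}[(N_{+-} - N_{++})/N_+ + (N_{-+} - N_{--})/N_-]$. The binning, the use of $|\Delta\eta|$ (for which $N_{+-} = N_{-+}$), and the absence of acceptance corrections and event mixing are simplifying assumptions of this sketch, not the procedure of the analysis above.

    import numpy as np

    def balance_function(eta, charge, bins=np.linspace(0.0, 1.6, 17)):
        # Single-event balance function in |delta eta| under one common
        # convention: B = 0.5*[(N+- - N++)/N+ + (N-+ - N--)/N-].
        # When binned in |delta eta|, N+- and N-+ are the same pair counts.
        pos, neg = eta[charge > 0], eta[charge < 0]

        def pair_hist(a, b, same_sign):
            d = np.abs(a[:, None] - b[None, :])
            if same_sign:
                # count each unordered same-sign pair once, excluding self-pairs
                d = d[np.triu_indices(len(a), k=1)]
            else:
                d = d.ravel()
            return np.histogram(d, bins=bins)[0].astype(float)

        n_pm = pair_hist(pos, neg, same_sign=False)  # opposite-sign pairs
        n_pp = pair_hist(pos, pos, same_sign=True)   # ++ pairs
        n_mm = pair_hist(neg, neg, same_sign=True)   # -- pairs
        b = 0.5 * ((n_pm - n_pp) / max(len(pos), 1)
                   + (n_pm - n_mm) / max(len(neg), 1))
        return 0.5 * (bins[:-1] + bins[1:]), b       # bin centres, B

The narrowing reported above would appear here as the weighted width of the returned distribution shrinking as event multiplicity grows.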
Physiologically relevant organs on chips.
Yum, Kyungsuk; Hong, Soon Gweon; Healy, Kevin E; Lee, Luke P
2014-01-01
Recent advances in integrating microengineering and tissue engineering have generated promising microengineered physiological models for experimental medicine and pharmaceutical research. Here we review the recent development of microengineered physiological systems, also known as "organs-on-chips", that reconstitute the physiologically critical features of specific human tissues and organs and their interactions. This technology uses microengineering approaches to construct organ-specific microenvironments, reconstituting tissue structures, tissue-tissue interactions and interfaces, and the dynamic mechanical and biochemical stimuli found in specific organs, to direct cells to assemble into functional tissues. We first discuss microengineering approaches to reproduce the key elements of physiologically important, dynamic mechanical microenvironments, biochemical microenvironments, and microarchitectures of specific tissues and organs in microfluidic cell culture systems. This is followed by examples of microengineered individual organ models that incorporate the key elements of physiological microenvironments into single microfluidic cell culture systems to reproduce organ-level functions. Finally, we cover microengineered multiple-organ systems that simulate multiple organ interactions to better represent human physiology, including human responses to drugs. This emerging organs-on-chips technology has the potential to become an alternative to 2D and 3D cell culture and animal models for experimental medicine, human disease modeling, drug development, and toxicology. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Peck, Steven L
2014-10-01
It is becoming clear that handling the inherent complexity of ecological systems is essential to finding ways to control insect pests of tropical livestock such as tsetse flies and Old and New World screwworms. In particular, challenging multivalent management programs, such as Area-Wide Integrated Pest Management (AW-IPM), face daunting problems of complexity at multiple spatial scales, ranging from landscape-level processes to smaller-scale ones such as the parasite loads of individual animals. Daunting temporal challenges also await resolution, such as matching management time frames to those found on ecological and even evolutionary temporal scales. How does one represent processes with models that span multiple spatial and temporal scales? Agent-based models (ABM), combined with geographic information systems (GIS), may allow for understanding, predicting, and managing pest control efforts in livestock pests. This paper argues that by incorporating digital ecologies into our management efforts, clearer and more informed decisions can be made. I also point out the power of these models to make better predictions and thereby anticipate the range of outcomes that are possible or likely. Copyright © 2014 International Atomic Energy Agency. Published by Elsevier B.V. All rights reserved.
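To illustrate the kind of digital ecology the paper advocates, here is a deliberately minimal agent-based sketch in Python: fly agents random-walk over a habitat-suitability raster standing in for a GIS layer, with habitat-dependent survival and simple reproduction. The grid size, survival rate, and birth rate are hypothetical placeholders, not parameters from any model discussed above.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical 50x50 habitat-suitability raster (a stand-in for a GIS
    # layer); values in [0, 1] scale daily survival probability.
    suitability = rng.uniform(0.2, 1.0, size=(50, 50))

    # Each fly (agent) is a row [x, y]; start 200 flies near the centre.
    flies = np.full((200, 2), 25.0) + rng.normal(0, 2, size=(200, 2))

    BASE_SURVIVAL = 0.95   # assumed daily survival in ideal habitat
    BIRTH_RATE = 0.08      # assumed per-capita daily reproduction

    for day in range(100):
        # Dispersal: random-walk step, kept inside the grid.
        flies += rng.normal(0.0, 1.0, size=flies.shape)
        flies = np.clip(flies, 0.0, 49.0)

        # Habitat-dependent mortality read from the raster under each agent.
        cells = suitability[flies[:, 0].astype(int), flies[:, 1].astype(int)]
        flies = flies[rng.random(len(flies)) < BASE_SURVIVAL * cells]

        # Reproduction: offspring appear at a randomly chosen parent's location.
        n_births = rng.poisson(BIRTH_RATE * len(flies))
        if n_births and len(flies):
            parents = rng.integers(0, len(flies), size=n_births)
            flies = np.vstack([flies, flies[parents]])

    print("population after 100 days:", len(flies))

Even a toy like this exhibits the landscape-scale heterogeneity that makes AW-IPM outcomes hard to anticipate, which is the gap ABM/GIS combinations are meant to fill.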
NASA Astrophysics Data System (ADS)
Leroux, Romain; Chatellier, Ludovic; David, Laurent
2018-01-01
This article is devoted to the estimation of time-resolved particle image velocimetry (TR-PIV) flow fields using time-resolved point measurements of a voltage signal obtained by hot-film anemometry. A multiple linear regression model is first defined to map the TR-PIV flow fields onto the voltage signal. Owing to the high temporal resolution of the signal acquired by the hot-film sensor, the estimates of the TR-PIV flow fields are obtained with a multiple linear regression method called orthonormalized partial least squares regression (OPLSR). Subsequently, this model is incorporated as the observation equation in an ensemble Kalman filter (EnKF) applied to a proper orthogonal decomposition reduced-order model, stabilizing the model while reducing the effects of hot-film sensor noise. The method is assessed for the reconstruction of the flow around a NACA0012 airfoil at a Reynolds number of 1000 and an angle of attack of 20°. Comparisons with multi-time-delay modified linear stochastic estimation show that both OPLSR and the EnKF combined with OPLSR are more accurate, producing a much lower relative estimation error and a faithful reconstruction of the time evolution of the velocity fields.
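A self-contained sketch of this estimation chain, under stated substitutions: the TR-PIV snapshots and hot-film voltage are synthetic, scikit-learn's PLSRegression stands in for OPLSR, and the forecast ensemble is a trivial placeholder rather than the paper's POD reduced-order model. It shows the three steps the abstract names: POD of the snapshot matrix, regression of the POD coefficients on a time-delay-embedded sensor signal, and one EnKF analysis step using the regression estimate as the observation.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-ins for TR-PIV snapshots and a hot-film voltage signal.
    n_t, n_x = 400, 1000                 # time steps, spatial points
    t = np.linspace(0, 40, n_t)
    snapshots = (np.outer(np.sin(t), rng.standard_normal(n_x))
                 + 0.5 * np.outer(np.cos(2 * t), rng.standard_normal(n_x))
                 + 0.05 * rng.standard_normal((n_t, n_x)))
    voltage = np.sin(t) + 0.3 * np.cos(2 * t) + 0.05 * rng.standard_normal(n_t)

    # POD via SVD of the mean-subtracted snapshot matrix.
    mean = snapshots.mean(axis=0)
    U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    r = 4                                 # retained POD modes
    coeffs = U[:, :r] * S[:r]             # temporal POD coefficients a(t)

    # Regression of POD coefficients on a time-delay-embedded voltage;
    # PLSRegression is used here as a stand-in for the paper's OPLSR.
    lags = 8
    X = np.column_stack([np.roll(voltage, k) for k in range(lags)])[lags:]
    Y = coeffs[lags:]                     # dropping the first rows avoids wrap-around
    pls = PLSRegression(n_components=4).fit(X, Y)

    # One EnKF analysis step: the regression estimate is the observation and
    # the "forecast" ensemble is a placeholder for the reduced-order model.
    n_ens, obs_var = 50, 0.1
    a_obs = pls.predict(X[-1:]).ravel()              # observed coefficients
    ens = a_obs + rng.standard_normal((n_ens, r))    # stand-in forecast ensemble
    A = ens - ens.mean(axis=0)
    P = A.T @ A / (n_ens - 1)                        # forecast covariance
    K = P @ np.linalg.inv(P + obs_var * np.eye(r))   # Kalman gain (H = I)
    perturbed = a_obs + np.sqrt(obs_var) * rng.standard_normal((n_ens, r))
    ens_a = ens + (perturbed - ens) @ K.T            # analysis ensemble
    field = mean + ens_a.mean(axis=0) @ Vt[:r]       # reconstructed flow field

The design point the abstract makes survives even in this toy: the Kalman update blends the noisy sensor-driven estimate with the model ensemble, so sensor noise is damped rather than injected directly into the reconstructed field.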