Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
DOT National Transportation Integrated Search
2015-12-01
We develop an econometric framework for incorporating spatial dependence in integrated model systems of latent variables and multidimensional mixed data outcomes. The framework combines Bhat's Generalized Heterogeneous Data Model (GHDM) with a spat...
A mixed model framework for teratology studies.
Braeken, Johan; Tuerlinckx, Francis
2009-10-01
A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.
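A minimal sketch of the copula device described above, assuming a Gaussian copula for two binary anomalies; the marginal probabilities and copula correlation are invented for illustration, and the authors' full model additionally includes covariate effects and a finite-mixture random effect:

```python
# Sketch: joint probability of two binary anomalies via a Gaussian copula.
# Marginal probabilities p1, p2 and copula correlation theta are illustrative.
from scipy.stats import norm, multivariate_normal

def joint_anomaly_prob(p1, p2, theta):
    """P(Y1=1, Y2=1) under a Gaussian copula with correlation theta."""
    z1, z2 = norm.ppf(p1), norm.ppf(p2)
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, theta], [theta, 1.0]])
    return mvn.cdf([z1, z2])

p_joint = joint_anomaly_prob(0.10, 0.15, 0.4)
p_indep = 0.10 * 0.15
print(f"joint: {p_joint:.4f} vs independent: {p_indep:.4f}")
```

The copula inflates the joint probability relative to independence, which is how the model captures interrelations between anomalies beyond what shared covariates explain.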
Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M
2015-03-01
It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies with independent data from samples that are "different but related" to the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their corresponding case-mix differences. We subsequently assess the models' performance in the validation sample and interpret the performance in view of the case-mix differences. Finally, we may adjust the model to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, two other samples rather assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
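A hedged sketch of the first step, quantifying case-mix relatedness: one common device is a "membership" model that tries to distinguish development from validation records, with discrimination near 0.5 suggesting the reproducibility end of the scale and clearly higher values suggesting a transportability setting. The variable names, data, and interpretation thresholds below are illustrative assumptions, not the authors' exact procedure:

```python
# Sketch: quantify case-mix difference between development and validation
# samples by how well a "membership" model separates them (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_dev = rng.normal(0.0, 1.0, size=(500, 3))   # development case mix
X_val = rng.normal(0.5, 1.2, size=(300, 3))   # validation case mix (shifted)

X = np.vstack([X_dev, X_val])
membership = np.r_[np.zeros(len(X_dev)), np.ones(len(X_val))]

prob = LogisticRegression().fit(X, membership).predict_proba(X)[:, 1]
auc = roc_auc_score(membership, prob)
# AUC ~ 0.5: samples nearly exchangeable (reproducibility end of the scale);
# AUC >> 0.5: clearly different case mix (transportability end of the scale).
print(f"membership-model AUC: {auc:.2f}")
```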
Jayachandrababu, Krishna C; Verploegh, Ross J; Leisen, Johannes; Nieuwendaal, Ryan C; Sholl, David S; Nair, Sankar
2016-06-15
Mixed-linker zeolitic imidazolate frameworks (ZIFs) are nanoporous materials that exhibit continuous and controllable tunability of properties like effective pore size, hydrophobicity, and organophilicity. The structure of mixed-linker ZIFs has been studied on macroscopic scales using gravimetric and spectroscopic techniques. However, it has so far not been possible to obtain information on unit-cell-level linker distribution, an understanding of which is key to predicting and controlling their adsorption and diffusion properties. We demonstrate the use of ¹H combined rotation and multiple pulse spectroscopy (CRAMPS) NMR spin exchange measurements in combination with computational modeling to elucidate potential structures of mixed-linker ZIFs, particularly the ZIF-8-90 series. All of the compositions studied have structures that have linkers mixed at a unit-cell-level as opposed to separated or highly clustered phases within the same crystal. Direct experimental observations of linker mixing were accomplished by measuring the proton spin exchange behavior between functional groups on the linkers. The data were then fitted to a kinetic spin exchange model using proton positions from candidate mixed-linker ZIF structures that were generated computationally using the short-range order (SRO) parameter as a measure of the ordering, clustering, or randomization of the linkers. The present method offers the advantages of sensitivity without requiring isotope enrichment, a straightforward NMR pulse sequence, and an analysis framework that allows one to relate spin diffusion behavior to proposed atomic positions. We find that structures close to equimolar composition of the two linkers show a greater tendency for linker clustering than what would be predicted based on random models. Using computational modeling we have also shown how the window-type distribution in experimentally synthesized mixed-linker ZIF-8-90 materials varies as a function of their composition. The structural information thus obtained can be further used for predicting, screening, or understanding the tunable adsorption and diffusion behavior of mixed-linker ZIFs, for which the knowledge of linker distributions in the framework is expected to be important.
Heterogeneity, Mixing, and the Spatial Scales of Mosquito-Borne Pathogen Transmission
Perkins, T. Alex; Scott, Thomas W.; Le Menach, Arnaud; Smith, David L.
2013-01-01
The Ross-Macdonald model has dominated theory for mosquito-borne pathogen transmission dynamics and control for over a century. The model, like many other basic population models, makes the mathematically convenient assumption that populations are well mixed; i.e., that each mosquito is equally likely to bite any vertebrate host. This assumption raises questions about the validity and utility of current theory because it is in conflict with preponderant empirical evidence that transmission is heterogeneous. Here, we propose a new dynamic framework that is realistic enough to describe biological causes of heterogeneous transmission of mosquito-borne pathogens of humans, yet tractable enough to provide a basis for developing and improving general theory. The framework is based on the ecological context of mosquito blood meals and the fine-scale movements of individual mosquitoes and human hosts that give rise to heterogeneous transmission. Using this framework, we describe pathogen dispersion in terms of individual-level analogues of two classical quantities: vectorial capacity and the basic reproductive number, R0. Importantly, this framework explicitly accounts for three key components of overall heterogeneity in transmission: heterogeneous exposure, poor mixing, and finite host numbers. Using these tools, we propose two ways of characterizing the spatial scales of transmission—pathogen dispersion kernels and the evenness of mixing across scales of aggregation—and demonstrate the consequences of a model's choice of spatial scale for epidemic dynamics and for estimation of R0, both by a priori model formulas and by inference of the force of infection from time-series data. PMID:24348223
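For context, the classical population-level quantities that this framework recasts at the individual level have standard Ross-Macdonald forms; a sketch using the textbook formulas (all parameter values below are illustrative):

```python
# Classical Ross-Macdonald quantities (well-mixed assumptions), for comparison
# with the individual-level analogues the paper develops. Values illustrative.
import math

def vectorial_capacity(m, a, p, n):
    """VC = m * a^2 * p^n / (-ln p): daily rate of potentially infectious
    bites arising from one infectious person, given mosquito:human ratio m,
    biting rate a, daily mosquito survival p, and an n-day extrinsic
    incubation period."""
    return m * a**2 * p**n / (-math.log(p))

def basic_reproductive_number(vc, b, c, r):
    """R0 = VC * b * c / r, with transmission efficiencies b, c and human
    recovery rate r."""
    return vc * b * c / r

vc = vectorial_capacity(m=2.0, a=0.3, p=0.9, n=10)
print(f"VC = {vc:.2f}, R0 = {basic_reproductive_number(vc, 0.5, 0.5, 0.01):.1f}")
```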
Bayesian stable isotope mixing models
In this paper we review recent advances in Stable Isotope Mixing Models (SIMMs) and place them into an over-arching Bayesian statistical framework which allows for several useful extensions. SIMMs are used to quantify the proportional contributions of various sources to a mixtur...
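A minimal sketch of the Bayesian SIMM idea for the simplest case (two sources, one isotope): the mixture signature is a proportion-weighted average of source signatures, and a posterior over the proportion follows from a prior plus a Gaussian likelihood. The source means and error scale below are invented for illustration:

```python
# Minimal Bayesian mixing model: two sources, one tracer, grid posterior
# over the proportional contribution p of source 1. All numbers illustrative.
import numpy as np

mu_src = np.array([-25.0, -12.0])   # source isotope means (e.g., d13C)
sigma = 1.0                          # combined measurement/process error
y_mix = -18.0                        # observed mixture signature

p = np.linspace(0.0, 1.0, 1001)      # proportion of source 1 (uniform prior)
pred = p * mu_src[0] + (1 - p) * mu_src[1]
log_post = -0.5 * ((y_mix - pred) / sigma) ** 2   # flat prior drops out
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, p)

print(f"posterior mean of p: {np.trapz(p * post, p):.3f}")
```

Full SIMMs generalize this to many sources and isotopes, Dirichlet priors, and source and fractionation uncertainty, which is what the Bayesian framework reviewed here provides.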
Uriu, Koichiro; Bhavna, Rajasekaran; Oates, Andrew C; Morelli, Luis G
2017-08-15
In development and disease, cells move as they exchange signals. One example is found in vertebrate development, during which the timing of segment formation is set by a 'segmentation clock', in which oscillating gene expression is synchronized across a population of cells by Delta-Notch signaling. Delta-Notch signaling requires local cell-cell contact, but in the zebrafish embryonic tailbud, oscillating cells move rapidly, exchanging neighbors. Previous theoretical studies proposed that this relative movement or cell mixing might alter signaling and thereby enhance synchronization. However, it remains unclear whether the mixing timescale in the tissue is in the right range for this effect, because a framework to reliably measure the mixing timescale and compare it with signaling timescale is lacking. Here, we develop such a framework using a quantitative description of cell mixing without the need for an external reference frame and constructing a physical model of cell movement based on the data. Numerical simulations show that mixing with experimentally observed statistics enhances synchronization of coupled phase oscillators, suggesting that mixing in the tailbud is fast enough to affect the coherence of rhythmic gene expression. Our approach will find general application in analyzing the relative movements of communicating cells during development and disease. © 2017. Published by The Company of Biologists Ltd.
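The synchronization claim can be illustrated with coupled phase oscillators in which mixing is implemented as random position exchange; this is a cartoon of the paper's simulations, with all rates, sizes, and times chosen arbitrarily:

```python
# Cartoon of the paper's point: random position exchange (cell mixing) aids
# synchronization of locally coupled, identical phase oscillators on a ring.
import numpy as np

def simulate(n=100, coupling=0.5, swaps_per_t=0.0, t_end=30.0, dt=0.05, seed=1):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(int(t_end / dt)):
        left, right = np.roll(theta, 1), np.roll(theta, -1)
        theta = theta + dt * coupling * (np.sin(left - theta)
                                         + np.sin(right - theta))
        for _ in range(rng.poisson(swaps_per_t * dt)):   # mixing: swap pairs
            i, j = rng.integers(n, size=2)
            theta[[i, j]] = theta[[j, i]]
    return np.abs(np.mean(np.exp(1j * theta)))   # Kuramoto order parameter

print(f"no mixing:   r = {simulate():.3f}")
print(f"with mixing: r = {simulate(swaps_per_t=50.0):.3f}")
```

Without mixing, nearest-neighbor coupling on a ring can lock into partially ordered (twisted) states; random exchanges break these up, raising the order parameter, which is the qualitative effect the paper quantifies against measured mixing statistics.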
Dynamic Infinite Mixed-Membership Stochastic Blockmodel.
Fan, Xuhui; Cao, Longbing; Xu, Richard Yi Da
2015-09-01
Directional and pairwise measurements are often used to model interactions in a social network setting. The mixed-membership stochastic blockmodel (MMSB) was a seminal work in this area, and its capabilities have since been extended. However, models such as MMSB face particular challenges in modeling dynamic networks, for example, when the number of communities is unknown. Accordingly, this paper proposes a dynamic infinite mixed-membership stochastic blockmodel, a generalized framework that extends the existing work to potentially infinite communities inside a network in dynamic settings (i.e., networks are observed over time). Additional model parameters are introduced to reflect the degree of persistence among one's memberships at consecutive time stamps. Under this framework, two specific models, namely mixture time variant and mixture time invariant models, are proposed to depict two different time correlation structures. Two effective posterior sampling strategies and their results are presented, respectively, using synthetic and real-world data.
Implementing Restricted Maximum Likelihood Estimation in Structural Equation Models
ERIC Educational Resources Information Center
Cheung, Mike W.-L.
2013-01-01
Structural equation modeling (SEM) is now a generic modeling framework for many multivariate techniques applied in the social and behavioral sciences. Many statistical models can be considered either as special cases of SEM or as part of the latent variable modeling framework. One popular extension is the use of SEM to conduct linear mixed-effects…
The estimation of branching curves in the presence of subject-specific random effects.
Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng
2014-12-20
Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.
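One way to picture the constraint described here: represent the post-branch deviation in a basis that vanishes, with zero slope, at the change point, so the two branches join smoothly. The truncated-power basis below is a simplification of the authors' constrained B-splines, and all data are simulated:

```python
# Sketch of a branching curve: a shared smooth trend plus a post-branch
# deviation built from truncated-power terms ((t - tb)_+^2, (t - tb)_+^3),
# which are zero with zero slope at the branch point tb, so the fitted curve
# changes trajectory smoothly there. Simplified stand-in for the authors'
# constrained B-spline formulation; data simulated.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200)
tb = 4.0                                   # branch (treatment) point
truth = 1.0 + 0.3 * t + 0.15 * np.clip(t - tb, 0, None) ** 2
y = truth + rng.normal(0, 0.2, t.size)

plus = np.clip(t - tb, 0.0, None)
X = np.column_stack([np.ones_like(t), t, t**2, plus**2, plus**3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta
print(f"RMSE of branching fit vs truth: {np.sqrt(np.mean((fit - truth)**2)):.3f}")
```

In the longitudinal version, tb becomes a subject-specific random effect, which is what turns the model into a semiparametric nonlinear mixed effects model.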
Comparison and Contrast of Two General Functional Regression Modeling Frameworks
Morris, Jeffrey S.
2017-01-01
In this article, Greven and Scheipl describe an impressively general framework for performing functional regression that builds upon the generalized additive modeling framework. Over the past number of years, my collaborators and I have also been developing a general framework for functional regression, functional mixed models, which shares many similarities with this framework, but has many differences as well. In this discussion, I compare and contrast these two frameworks, to hopefully illuminate characteristics of each, highlighting their respective strengths and weaknesses, and providing recommendations regarding the settings in which each approach might be preferable. PMID:28736502
A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.
2012-01-01
A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…
A mixed integer program to model spatial wildfire behavior and suppression placement decisions
Erin J. Belval; Yu Wei; Michael Bevers
2015-01-01
Wildfire suppression combines multiple objectives and dynamic fire behavior to form a complex problem for decision makers. This paper presents a mixed integer program designed to explore integrating spatial fire behavior and suppression placement decisions into a mathematical programming framework. Fire behavior and suppression placement decisions are modeled using...
Towards a Theory-Based Design Framework for an Effective E-Learning Computer Programming Course
ERIC Educational Resources Information Center
McGowan, Ian S.
2016-01-01
Built on Dabbagh (2005), this paper presents a four component theory-based design framework for an e-learning session in introductory computer programming. The framework, driven by a body of exemplars component, emphasizes the transformative interaction between the knowledge building community (KBC) pedagogical model, a mixed instructional…
Functional Additive Mixed Models
Scheipl, Fabian; Staicu, Ana-Maria; Greven, Sonja
2014-01-01
We propose an extensive framework for additive regression models for correlated functional responses, allowing for multiple partially nested or crossed functional random effects with flexible correlation structures for, e.g., spatial, temporal, or longitudinal functional data. Additionally, our framework includes linear and nonlinear effects of functional and scalar covariates that may vary smoothly over the index of the functional response. It accommodates densely or sparsely observed functional responses and predictors which may be observed with additional error and includes both spline-based and functional principal component-based terms. Estimation and inference in this framework is based on standard additive mixed models, allowing us to take advantage of established methods and robust, flexible algorithms. We provide easy-to-use open source software in the pffr() function for the R-package refund. Simulations show that the proposed method recovers relevant effects reliably, handles small sample sizes well and also scales to larger data sets. Applications with spatially and longitudinally observed functional data demonstrate the flexibility in modeling and interpretability of results of our approach. PMID:26347592
Mixed-order phase transition in a one-dimensional model.
Bar, Amir; Mukamel, David
2014-01-10
We introduce and analyze an exactly soluble one-dimensional Ising model with long range interactions that exhibits a mixed-order transition, namely a phase transition in which the order parameter is discontinuous as in first order transitions while the correlation length diverges as in second order transitions. Such transitions are known to appear in diverse classes of seemingly unrelated models. The model we present serves as a link between two classes of models that exhibit a mixed-order transition in one dimension, namely, spin models with a coupling constant that decays as the inverse distance squared and models of depinning transitions, thus making a step towards a unifying framework.
van Rijn, Peter W; Ali, Usama S
2017-05-01
We compare three modelling frameworks for accuracy and speed of item responses in the context of adaptive testing. The first framework is based on modelling scores that result from a scoring rule that incorporates both accuracy and speed. The second framework is the hierarchical modelling approach developed by van der Linden (2007, Psychometrika, 72, 287) in which a regular item response model is specified for accuracy and a log-normal model for speed. The third framework is the diffusion framework in which the response is assumed to be the result of a Wiener process. Although the three frameworks differ in the relation between accuracy and speed, one commonality is that the marginal model for accuracy can be simplified to the two-parameter logistic model. We discuss both conditional and marginal estimation of model parameters. Models from all three frameworks were fitted to data from a mathematics and spelling test. Furthermore, we applied a linear and adaptive testing mode to the data off-line in order to determine differences between modelling frameworks. It was found that a model from the scoring rule framework outperformed a hierarchical model in terms of model-based reliability, but the results were mixed with respect to correlations with external measures. © 2017 The British Psychological Society.
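A sketch of the second (hierarchical) framework as described: a two-parameter logistic model for accuracy and a log-normal model for response time, linked through correlated person parameters. The generating values and the ability-speed correlation below are invented for illustration:

```python
# Simulate from a hierarchical accuracy/speed model in the spirit of
# van der Linden (2007): 2PL for accuracy, log-normal for response time.
# All parameter values and the ability-speed correlation are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items = 1000, 20

# correlated person ability (theta) and speed (tau)
theta, tau = rng.multivariate_normal([0, 0], [[1, 0.4], [0.4, 1]], n_persons).T
a = rng.uniform(0.8, 2.0, n_items)        # item discrimination
b = rng.normal(0, 1, n_items)             # item difficulty
alpha = rng.uniform(1.5, 2.5, n_items)    # time discrimination
beta = rng.normal(0.5, 0.3, n_items)      # time intensity

p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))        # 2PL accuracy
correct = rng.random((n_persons, n_items)) < p
log_t = beta - tau[:, None] + rng.normal(0, 1, (n_persons, n_items)) / alpha
print(f"mean accuracy {correct.mean():.2f}, mean RT {np.exp(log_t).mean():.2f}s")
```

The commonality noted in the abstract is visible here: marginally over the speed side, the accuracy part is still a two-parameter logistic model.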
Tripathy, Shreepada; Miller, Karen H; Berkenbosch, John W; McKinley, Tara F; Boland, Kimberly A; Brown, Seth A; Calhoun, Aaron W
2016-06-01
Controversy exists in the simulation community as to the emotional and educational ramifications of mannequin death due to learner action or inaction. No theoretical framework to guide future investigations of learner actions currently exists. The purpose of our study was to generate a model of the learner experience of mannequin death using a mixed methods approach. The study consisted of an initial focus group phase composed of 11 learners who had previously experienced mannequin death due to action or inaction on the part of learners as defined by Leighton (Clin Simul Nurs. 2009;5(2):e59-e62). Transcripts were analyzed using grounded theory to generate a list of relevant themes that were further organized into a theoretical framework. With the use of this framework, a survey was generated and distributed to additional learners who had experienced mannequin death due to action or inaction. Results were analyzed using a mixed methods approach. Forty-one clinicians completed the survey. A correlation was found between the emotional experience of mannequin death and degree of presession anxiety (P < 0.001). Debriefing was found to significantly reduce negative emotion and enhance satisfaction. Sixty-nine percent of respondents indicated that mannequin death enhanced learning. These results were used to modify our framework. Using the previous approach, we created a model of the effect of mannequin death on the educational and psychological state of learners. We offer the final model as a guide to future research regarding the learner experience of mannequin death.
A Lagrangian mixing frequency model for transported PDF modeling
NASA Astrophysics Data System (ADS)
Turkeri, Hasret; Zhao, Xinyu
2017-11-01
In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver, which is an LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
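The IEM model that the new frequency model plugs into has a simple particle form: each particle's composition relaxes toward the local mean at a rate set by the mixing frequency. A sketch, with a constant frequency standing in for the paper's dissipation-based dynamic one:

```python
# IEM (interaction by exchange with the mean) particle update:
#   dphi/dt = -0.5 * C_phi * omega * (phi - <phi>).
# Here omega is held fixed; the paper's contribution is to compute omega
# dynamically from Lagrangian dissipation statistics instead.
import numpy as np

rng = np.random.default_rng(4)
phi = rng.uniform(0, 1, 10000)   # particle mixture fractions in one cell
c_phi, omega, dt = 2.0, 50.0, 1e-4

for _ in range(1000):
    phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt

# variance decays roughly as exp(-c_phi * omega * t); the mean is conserved
print(f"mean {phi.mean():.3f}, variance {phi.var():.2e}")
```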
Uncertainty in mixing models: a blessing in disguise?
NASA Astrophysics Data System (ADS)
Delsman, J. R.; Oude Essink, G. H. P.
2012-04-01
Despite the abundance of tracer-based studies in catchment hydrology over the past decades, relatively few studies have addressed the uncertainty associated with these studies in much detail. This uncertainty stems from analytical error, spatial and temporal variance in end-member composition, and from not incorporating all relevant processes in the necessarily simplistic mixing models. Instead of applying standard EMMA methodology, we used end-member mixing model analysis within a Monte Carlo framework to quantify the uncertainty surrounding our analysis. Borrowing from the well-known GLUE methodology, we discarded mixing models that could not satisfactorily explain sample concentrations and analyzed the posterior parameter set. This use of environmental tracers aided in disentangling hydrological pathways in a Dutch polder catchment. This 10 km² agricultural catchment is situated in the coastal region of the Netherlands. Brackish groundwater seepage, originating from Holocene marine transgressions, adversely affects water quality in this catchment. Current water management practice is aimed at improving water quality by flushing the catchment with fresh water from the river Rhine. Climate change is projected to decrease future fresh water availability, signifying the need for a more sustainable water management practice and a better understanding of the functioning of the catchment. The end-member mixing analysis increased our understanding of the hydrology of the studied catchment. The use of a GLUE-like framework for applying the end-member mixing analysis not only quantified the uncertainty associated with the analysis, the analysis of the posterior parameter set also identified the existence of catchment processes otherwise overlooked.
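A schematic of the GLUE-like procedure described here: repeatedly perturb the end-member compositions, solve each candidate's mixing fractions, and keep only parameter sets that reproduce the sample within tolerance. The end-member values, tracers, and tolerance below are invented:

```python
# GLUE-like end-member mixing analysis: Monte Carlo over uncertain end-member
# compositions, rejecting mixing models that cannot reproduce the sample.
# Tracer values are illustrative (three hypothetical water sources).
import numpy as np

rng = np.random.default_rng(5)
em_mean = np.array([[10.0, 0.3, 5.0],     # rainwater   [tracer1..3]
                    [900.0, 3.5, 60.0],   # brackish seepage
                    [120.0, 0.9, 20.0]])  # river flushing water
em_sd = 0.15 * em_mean                    # end-member uncertainty
sample = np.array([260.0, 1.25, 22.5])    # observed mixture
tol = 0.10                                # 10% relative misfit tolerance

kept = []
for _ in range(20000):
    em = rng.normal(em_mean, em_sd)
    A = np.vstack([em.T, np.ones(3)])     # tracer balances + sum-to-one row
    b = np.append(sample, 1.0)
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    if (f >= 0).all():
        misfit = np.abs(em.T @ f - sample) / sample
        if (misfit < tol).all():          # "behavioural" model, GLUE-style
            kept.append(f)

kept = np.array(kept)
print(f"behavioural models: {len(kept)}; "
      f"posterior mean fractions: {kept.mean(axis=0).round(2)}")
```

The spread of the kept fractions, not just their mean, is the point: it is the uncertainty estimate the abstract argues standard EMMA omits.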
Traveltime-based descriptions of transport and mixing in heterogeneous domains
NASA Astrophysics Data System (ADS)
Luo, Jian; Cirpka, Olaf A.
2008-09-01
Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters including mixing-related quantities such as dispersivities and kinetic mass transfer coefficients. In most applications, breakthrough curves (BTCs) of conservative and reactive compounds are measured at only a few locations and spatially explicit models are calibrated by matching these BTCs. A common difficulty in such applications is that the individual BTCs differ too strongly to justify the assumption of spatial homogeneity, whereas the number of observation points is too small to identify the spatial distribution of the decisive parameters. The key objective of the current study is to characterize physical transport by the analysis of conservative tracer BTCs and predict the macroscopic BTCs of compounds that react upon mixing from the interpretation of conservative tracer BTCs and reactive parameters determined in the laboratory. We do this in the framework of traveltime-based transport models which do not require spatially explicit, costly aquifer characterization. By considering BTCs of a conservative tracer measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the traveltime-based framework, the BTC of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct traveltime value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of traveltimes, which also determines the weights associated with each stream tube. Key issues in using the traveltime-based framework include the description of mixing mechanisms and the estimation of the traveltime distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach of determining the traveltime distribution, given a BTC integrated over an observation plane and estimated mixing parameters. The latter approach is superior to fitting parametric models in cases wherein the true traveltime distribution exhibits multiple peaks or long tails. It is demonstrated that there is freedom for the combinations of mixing parameters and traveltime distributions to fit conservative BTCs and describe the tailing. A reactive transport case of a dual Michaelis-Menten problem demonstrates that the reactive mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated by local BTCs.
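The central construction in this framework, a flux BTC as a traveltime-weighted superposition of streamtube responses, can be sketched directly. Here each streamtube's unit response is the inverse-Gaussian first-passage solution of the advection-dispersion equation, and the bimodal traveltime density is an arbitrary example:

```python
# BTC at an observation plane as a superposition of advective-dispersive
# streamtubes: c(t) = integral p(tau) * g(t | tau) dtau, with g the
# inverse-Gaussian first-passage density (streamtube Peclet number Pe).
# The bimodal traveltime density p(tau) is an arbitrary illustration.
import numpy as np

def ig_btc(t, tau, Pe):
    """Unit-pulse BTC of one streamtube: inverse Gaussian with mean tau
    and shape parameter lam = Pe * tau / 2."""
    lam = 0.5 * Pe * tau
    return np.sqrt(lam / (2 * np.pi * t**3)) * np.exp(
        -lam * (t - tau)**2 / (2 * tau**2 * t))

t = np.linspace(0.01, 20, 2000)
tau = np.linspace(0.1, 15, 300)
p_tau = 0.6 * np.exp(-0.5 * ((tau - 3) / 0.8)**2) \
      + 0.4 * np.exp(-0.5 * ((tau - 8) / 2.0)**2)      # bimodal, unnormalized
p_tau /= np.trapz(p_tau, tau)

btc = np.trapz(p_tau[None, :] * ig_btc(t[:, None], tau[None, :], Pe=50.0),
               tau, axis=1)
print(f"BTC peak at t = {t[np.argmax(btc)]:.2f}, mass = {np.trapz(btc, t):.3f}")
```

The nonparametric step the paper describes is the inverse of this forward map: recover p(tau) from a measured BTC given estimated mixing parameters.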
Validating for Use and Interpretation: A Mixed Methods Contribution Illustrated
ERIC Educational Resources Information Center
Morell, Linda; Tan, Rachael Jin Bee
2009-01-01
Researchers in the areas of psychology and education strive to understand the intersections among validity, educational measurement, and cognitive theory. Guided by a mixed model conceptual framework, this study investigates how respondents' opinions inform the validation argument. Validity evidence for a science assessment was collected through…
ERIC Educational Resources Information Center
Huang, Yifen
2010-01-01
Mixed-initiative clustering is a task where a user and a machine work collaboratively to analyze a large set of documents. We hypothesize that a user and a machine can both learn better clustering models through enriched communication and interactive learning from each other. The first contribution or this thesis is providing a framework of…
Wang, Xulong; Philip, Vivek M.; Ananda, Guruprasad; White, Charles C.; Malhotra, Ankit; Michalski, Paul J.; Karuturi, Krishna R. Murthy; Chintalapudi, Sumana R.; Acklin, Casey; Sasner, Michael; Bennett, David A.; De Jager, Philip L.; Howell, Gareth R.; Carter, Gregory W.
2018-01-01
Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost, whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized LMM (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary, and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo sampling and maximal likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer’s Disease Sequencing Project. This study contains 570 individuals from 111 families, each with Alzheimer’s disease diagnosed at one of four confidence levels. Using Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer’s disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The coded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with Alzheimer’s disease-related neuropathology. In summary, this work provides an implementation of a flexible, generalized mixed-model approach in a Bayesian framework for association studies. PMID:29507048
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Nakshatrala, K. B.
2016-12-01
Fundamental to enhancement and control of the macroscopic spreading, mixing, and dilution of solute plumes in porous media structures is the topology of flow field and underlying heterogeneity and anisotropy contrast of porous media. Traditionally, in the literature, the main focus was limited to the shearing effects of flow field (i.e., flow has zero helical density, meaning that flow is always perpendicular to vorticity vector) on scalar mixing [2]. However, the combined effect of anisotropy of the porous media and the helical structure (or chaotic nature) of the flow field on the species reactive-transport and mixing has rarely been studied. Recently, it has been experimentally shown that there is irrefutable evidence that chaotic advection and helical flows are inherent in porous media flows [1,2]. In this poster presentation, we present a non-intrusive physics-based model-order reduction framework to quantify the effects of species mixing in terms of reduced-order models (ROMs) and scaling laws. The ROM framework is constructed based on the recent advancements in non-negative formulations for reactive-transport in heterogeneous anisotropic porous media [3] and non-intrusive ROM methods [4]. The objective is to generate computationally efficient and accurate ROMs for species mixing for different values of input data and reactive-transport model parameters. This is achieved by using multiple ROMs, which is a way to determine the robustness of the proposed framework. Sensitivity analysis is performed to identify the important parameters. Representative numerical examples from reactive-transport are presented to illustrate the importance of the proposed ROMs to accurately describe mixing process in porous media. [1] Lester, Metcalfe, and Trefry, "Is chaotic advection inherent to porous media flow?," PRL, 2013. [2] Ye, Chiogna, Cirpka, Grathwohl, and Rolle, "Experimental evidence of helical flow in porous media," PRL, 2015. [3] Mudunuru, and Nakshatrala, "On enforcing maximum principles and achieving element-wise species balance for advection-diffusion-reaction equations under the finite element method," JCP, 2016. [4] Quarteroni, Manzoni, and Negri. "Reduced Basis Methods for Partial Differential Equations: An Introduction," Springer, 2016.
ERIC Educational Resources Information Center
Thurgood, Larry L.
2010-01-01
A mixed methods study examined how a newly developed campus-wide framework for learning and teaching, called the Learning Model, was accepted and embraced by faculty members at Brigham Young University-Idaho from September 2007 to January 2009. Data from two administrations of the Approaches to Teaching Inventory showed that (a) faculty members…
Mixing-controlled reactive transport on travel times in heterogeneous media
NASA Astrophysics Data System (ADS)
Luo, J.; Cirpka, O.
2008-05-01
Modeling mixing-controlled reactive transport using traditional spatial discretization of the domain requires identifying the spatial distributions of hydraulic and reactive parameters including mixing-related quantities such as dispersivities and kinetic mass-transfer coefficients. In most applications, breakthrough curves of conservative and reactive compounds are measured at only a few locations and models are calibrated by matching these breakthrough curves, which is an ill posed inverse problem. By contrast, travel-time based transport models avoid costly aquifer characterization. By considering breakthrough curves measured on different scales, one can distinguish between mixing, which is a prerequisite for reactions, and spreading, which per se does not foster reactions. In the travel-time based framework, the breakthrough curve of a solute crossing an observation plane, or ending in a well, is interpreted as the weighted average of concentrations in an ensemble of non-interacting streamtubes, each of which is characterized by a distinct travel-time value. Mixing is described by longitudinal dispersion and/or kinetic mass transfer along individual streamtubes, whereas spreading is characterized by the distribution of travel times which also determines the weights associated to each stream tube. Key issues in using the travel-time based framework include the description of mixing mechanisms and the estimation of the travel-time distribution. In this work, we account for both apparent longitudinal dispersion and kinetic mass transfer as mixing mechanisms, thus generalizing the stochastic-convective model with or without inter-phase mass transfer and the advective-dispersive streamtube model. We present a nonparametric approach of determining the travel-time distribution, given a breakthrough curve integrated over an observation plane and estimated mixing parameters. The latter approach is superior to fitting parametric models in cases where the true travel-time distribution exhibits multiple peaks or long tails. It is demonstrated that there is freedom for the combinations of mixing parameters and travel-time distributions to fit conservative breakthrough curves and describe the tailing. Reactive transport cases with a bimolecular instantaneous irreversible reaction and a dual Michaelis-Menten problem demonstrate that the mixing introduced by local dispersion and mass transfer may be described by apparent mean mass transfer with coefficients evaluated by local breakthrough curves.
Li, Haocheng; Zhang, Yukun; Carroll, Raymond J; Keadle, Sarah Kozey; Sampson, Joshua N; Matthews, Charles E
2017-11-10
A mixed effect model is proposed to jointly analyze multivariate longitudinal data with continuous, proportion, count, and binary responses. The association of the variables is modeled through the correlation of random effects. We use a quasi-likelihood type approximation for nonlinear variables and transform the proposed model into a multivariate linear mixed model framework for estimation and inference. Via an extension to the EM approach, an efficient algorithm is developed to fit the model. The method is applied to physical activity data collected with a wearable accelerometer device that measures daily movement and energy expenditure. Our approach is also evaluated by a simulation study. Copyright © 2017 John Wiley & Sons, Ltd.
Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie
2017-08-01
Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be the substance abuse/dependence symptoms data for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by the correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions including skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
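A sketch of the likelihood structure described here for independent observations, ignoring the random effects for brevity: Part I is a logistic model for zero versus positive, Part II a skew-normal model for the log-intensity of positive values. All parameter values are illustrative; the full model places correlated random effects in both parts and is fit by Bayesian MCMC:

```python
# Two-part model log-likelihood for semicontinuous data (fixed effects only;
# the paper adds correlated random effects in both parts).
import numpy as np
from scipy.special import expit
from scipy.stats import skewnorm

def two_part_loglik(y, x, beta1, beta2, scale, shape):
    p_pos = expit(beta1 * x)                     # Part I: P(y > 0)
    ll = np.where(y == 0, np.log1p(-p_pos), 0.0)
    pos = y > 0
    z = np.log(y[pos])                           # Part II on the log scale
    ll[pos] = (np.log(p_pos[pos])
               + skewnorm.logpdf(z, shape, loc=beta2 * x[pos], scale=scale)
               - z)                              # Jacobian of y = exp(z)
    return ll.sum()

rng = np.random.default_rng(6)
x = rng.normal(size=300)
pos = rng.random(300) < expit(0.5 * x)
y = np.where(pos, np.exp(skewnorm.rvs(4.0, loc=0.8 * x, scale=1.0,
                                      size=300, random_state=rng)), 0.0)
print(f"log-likelihood at the generating values: "
      f"{two_part_loglik(y, x, 0.5, 0.8, 1.0, 4.0):.1f}")
```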
The role of simulation in mixed-methods research: a framework & application to patient safety.
Guise, Jeanne-Marie; Hansen, Matthew; Lambert, William; O'Brien, Kerth
2017-05-04
Research in patient safety is an important area of health services research and is a national priority. It is challenging to investigate rare occurrences, explore potential causes, and account for the complex, dynamic context of healthcare - yet all are required in patient safety research. Simulation technologies have become widely accepted as education and clinical tools, but have yet to become a standard tool for research. We developed a framework for research that integrates accepted patient safety models with mixed-methods research approaches and describe the performance of the framework in a working example of a large National Institutes of Health (NIH)-funded R01 investigation. This worked example of a framework in action identifies the strengths and limitations of qualitative and quantitative research approaches commonly used in health services research. Each approach builds essential layers of knowledge. We describe how the use of simulation ties these layers of knowledge together and adds new and unique dimensions of knowledge. A mixed-methods research approach that includes simulation provides a broad multi-dimensional approach to health services and patient safety research.
ERIC Educational Resources Information Center
Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.
2018-01-01
Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by asymmetric distributions to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset and comparisons with alternative models are performed.
Fermion hierarchy from sfermion anarchy
Altmannshofer, Wolfgang; Frugiuele, Claudia; Harnik, Roni
2014-12-31
We present a framework to generate the hierarchical flavor structure of Standard Model quarks and leptons from loops of superpartners. The simplest model consists of the minimal supersymmetric standard model with tree level Yukawa couplings for the third generation only and anarchic squark and slepton mass matrices. Agreement with constraints from low energy flavor observables, in particular Kaon mixing, is obtained for supersymmetric particles with masses at the PeV scale or above. In our framework both the second and the first generation fermion masses are generated at 1-loop. Despite this, a novel mechanism generates a hierarchy among the first and second generations without imposing a symmetry or small parameters. A second-to-first generation mass ratio of order 100 is typical. The minimal supersymmetric standard model thus includes all the necessary ingredients to realize a fermion spectrum that is qualitatively similar to observation, with hierarchical masses and mixing. The minimal framework produces only a few quantitative discrepancies with observation, most notably the muon mass is too low. Furthermore, we discuss simple modifications which resolve this and also investigate the compatibility of our model with gauge and Yukawa coupling unification.
NASA Astrophysics Data System (ADS)
Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.
2017-07-01
Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model into the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.
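For orientation, one widely used member of the two-equation Reynolds-averaged family discussed here is the standard k-ε closure; the six implemented models differ in variables and corrections, so this is indicative only:

```latex
% Standard k-epsilon closure (indicative of the two-equation RANS family):
\frac{Dk}{Dt} = P_k - \varepsilon
  + \nabla\cdot\!\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)\nabla k\right],
\qquad
\frac{D\varepsilon}{Dt} = \frac{\varepsilon}{k}\,(C_1 P_k - C_2 \varepsilon)
  + \nabla\cdot\!\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)\nabla\varepsilon\right],
\qquad
\nu_t = C_\mu \frac{k^2}{\varepsilon}
```

with the usual constants C_μ ≈ 0.09, C_1 ≈ 1.44, C_2 ≈ 1.92, σ_k ≈ 1.0, σ_ε ≈ 1.3; compressibility and buoyancy corrections modify the production and dissipation terms.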
Formulation and Application of the Generalized Multilevel Facets Model
ERIC Educational Resources Information Center
Wang, Wen-Chung; Liu, Chih-Yu
2007-01-01
In this study, the authors develop a generalized multilevel facets model, which is not only a multilevel and two-parameter generalization of the facets model, but also a multilevel and facet generalization of the generalized partial credit model. Because the new model is formulated within a framework of nonlinear mixed models, no efforts are…
An ideal-typical model for comparing interprofessional relations and skill mix in health care.
Schönfelder, Walter; Nilsen, Elin Anita
2016-11-08
Comparisons of health system performance, including the regulations of interprofessional relations and the skill mix between health professions are challenging. National strategies for regulating interprofessional relations vary widely across European health care systems. Unambiguously defined and generally accepted performance indicators have to remain generic, with limited power for recognizing the organizational structures regulating interprofessional relations in different health systems. A coherent framework for in-depth comparisons of different models for organizing interprofessional relations and the skill mix between professional groups is currently not available. This study aims to develop an ideal-typical framework for categorizing skill mix and interprofessional relations in health care, and to assess the potential impact for different ideal types on care coordination and integrated service delivery. A document analysis of the Health Systems in Transition (HiT) reports published by the European Observatory on Health Systems and Policies was conducted. The HiT reports to 31 European health systems were analyzed using a qualitative content analysis and a process of meaning condensation. The educational tracks available to nurses have an impact on the professional autonomy for nurses, the hierarchy between professional groups, the emphasis given to negotiating skill mix, interdisciplinary teamwork and the extent of cooperation across the health and social service interface. Based on the results of the document analysis, three ideal types for regulating interprofessional relations and skill mix in health care are delimited. For each ideal type, outcomes on service coordination and holistic service delivery are described. Comparisons of interprofessional relations are necessary for proactive health human resource policies. The proposed ideal-typical framework provides the means for in-depth comparisons of interprofessional relations in the health care workforce beyond of what is possible with directly comparable, but generic performance indicators.
Mixing Ξ–Ξ' Effects and Static Properties of Heavy Ξ's
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aliev, T. M.; Ozpineci, A.; Zamiralov, V. S.
The importance of the mixing of heavy baryons Ξ–Ξ' with new quantum numbers is shown for the analysis of their characteristics. The quark model of Ono is used as an example. Masses of the new baryons as well as mixing angles of the Ξ–Ξ' states are obtained. The same reasoning is shown to be valid for the interpolating currents of these baryons in the framework of the QCD sum rules.
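The state mixing referred to here has the generic two-state form below; this is the standard parameterization of such mixing, not the paper's specific numerical result:

```latex
% Generic two-state mixing of the flavor eigenstates:
|\Xi_1\rangle = \cos\theta\,|\Xi\rangle + \sin\theta\,|\Xi'\rangle,
\qquad
|\Xi_2\rangle = -\sin\theta\,|\Xi\rangle + \cos\theta\,|\Xi'\rangle
```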
Trending in Probability of Collision Measurements via a Bayesian Zero-Inflated Beta Mixed Model
NASA Technical Reports Server (NTRS)
Vallejo, Jonathon; Hejduk, Matt; Stamey, James
2015-01-01
We investigate the performance of a generalized linear mixed model in predicting the Probabilities of Collision (Pc) for conjunction events. Specifically, we apply this model to the log₁₀ transformation of these probabilities and argue that this transformation yields values that can be considered bounded in practice. Additionally, this bounded random variable, after scaling, is zero-inflated. Consequently, we model these values using the zero-inflated Beta distribution, and utilize the Bayesian paradigm and the mixed model framework to borrow information from past and current events. This provides a natural way to model the data and provides a basis for answering questions of interest, such as what is the likelihood of observing a probability of collision equal to the effective value of zero on a subsequent observation.
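A sketch of the zero-inflated Beta likelihood at the heart of this model, for scaled, bounded observations; the mixed-model and Bayesian machinery (event-level random effects, priors, MCMC) is omitted, and all parameter values are illustrative:

```python
# Zero-inflated Beta log-likelihood for scaled log10(Pc) values in [0, 1):
# a point mass pi at zero plus a Beta(a, b) density for positive values.
import numpy as np
from scipy.stats import beta

def zib_loglik(y, pi, a, b):
    y = np.asarray(y)
    ll = np.where(y == 0, np.log(pi), 0.0)
    pos = y > 0
    ll[pos] = np.log1p(-pi) + beta.logpdf(y[pos], a, b)
    return ll.sum()

rng = np.random.default_rng(7)
y = np.where(rng.random(500) < 0.3, 0.0,
             beta.rvs(2.0, 5.0, size=500, random_state=rng))
print(f"log-likelihood at the generating values: "
      f"{zib_loglik(y, 0.3, 2.0, 5.0):.1f}")
```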
Hua, Carol; Doheny, Patrick William; Ding, Bowen; Chan, Bun; Yu, Michelle; Kepert, Cameron J; D'Alessandro, Deanna M
2018-05-04
Understanding the nature of charge transfer mechanisms in 3-dimensional Metal-Organic Frameworks (MOFs) is an important goal owing to the possibility of harnessing this knowledge to design conductive frameworks. These materials have been implicated as the basis for the next generation of technological devices for applications in energy storage and conversion, including electrochromic devices, electrocatalysts, and battery materials. After nearly two decades of intense research into MOFs, the mechanisms of charge transfer remain relatively poorly understood, and new strategies to achieve charge mobility remain elusive and challenging to experimentally explore, validate and model. We now demonstrate that aromatic stacking interactions in Zn(II) frameworks containing cofacial thiazolo[5,4-d]thiazole units lead to a mixed-valence state upon electrochemical or chemical reduction. This through-space Intervalence Charge Transfer (IVCT) phenomenon represents a new mechanism for charge delocalisation in MOFs. Computational modelling of the optical data combined with application of Marcus-Hush theory to the IVCT bands for the mixed-valence framework has enabled quantification of the degree of delocalisation using both in situ and ex situ electro- and spectro-electrochemical methods. A distance dependence for the through-space electron transfer has also been identified on the basis of experimental studies and computational calculations. This work provides a new window into electron transfer phenomena in 3-dimensional coordination space, of relevance to electroactive MOFs where new mechanisms for charge transfer are highly sought after, and to understanding biological light harvesting systems where through-space mixed-valence interactions are operative.
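The Marcus-Hush analysis of IVCT bands mentioned here commonly extracts the electronic coupling from band parameters via the Hush relation; quoted in its standard textbook form (from the general mixed-valence literature, not reproduced from this paper):

```latex
% Hush relation for the electronic coupling from an IVCT band
% (band maximum and width in cm^-1, molar absorptivity in M^-1 cm^-1,
% donor-acceptor distance r_ab in angstroms):
H_{ab} = \frac{2.06\times 10^{-2}}{r_{ab}}
         \sqrt{\varepsilon_{\max}\,\tilde{\nu}_{\max}\,\Delta\tilde{\nu}_{1/2}}
```

The distance dependence identified in the paper enters through r_ab, which is why the cofacial stacking geometry of the thiazolothiazole units matters for the degree of delocalisation.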
Physics of automated driving in framework of three-phase traffic theory.
Kerner, Boris S
2018-04-01
We have revealed physical features of automated driving in the framework of the three-phase traffic theory for which there is no fixed time headway to the preceding vehicle. A comparison with the classical model approach to automated driving for which an automated driving vehicle tries to reach a fixed (desired or "optimal") time headway to the preceding vehicle has been made. It turns out that automated driving in the framework of the three-phase traffic theory can exhibit the following advantages in comparison with the classical model of automated driving: (i) The absence of string instability. (ii) Considerably smaller speed disturbances at road bottlenecks. (iii) Automated driving vehicles based on the three-phase theory can decrease the probability of traffic breakdown at the bottleneck in mixed traffic flow consisting of human driving and automated driving vehicles; on the contrary, even a single automated driving vehicle based on the classical approach can provoke traffic breakdown at the bottleneck in mixed traffic flow.
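A caricature of the two control laws contrasted in this paper: the classical ACC rule regulates toward a fixed desired time headway, while a three-phase rule accepts any gap within a range and reacts only to relative speed there. The gains and the gap-range functions below are invented for illustration and are not Kerner's calibrated model:

```python
# Caricature of classical ACC vs. a three-phase-style rule (illustrative only).
def acc_classical(gap, v, v_lead, t_desired=1.5, k1=0.3, k2=0.6):
    # regulate toward a fixed desired time headway t_desired
    return k1 * (gap - v * t_desired) + k2 * (v_lead - v)

def acc_three_phase(gap, v, v_lead, k1=0.3, k_dv=0.6):
    # no fixed headway: inside [g_safe, G] react to relative speed only
    g_safe, G = 1.0 * v, 3.0 * v          # invented safe/synchronization gaps
    if gap > G:
        return k1 * (gap - G) + k_dv * (v_lead - v)       # close large gaps
    if gap < g_safe:
        return k1 * (gap - g_safe) + k_dv * (v_lead - v)  # avoid unsafe gaps
    return k_dv * (v_lead - v)            # indifferent within the gap range

for gap in (25.0, 45.0, 70.0):
    print(gap, acc_classical(gap, 20.0, 20.0), acc_three_phase(gap, 20.0, 20.0))
```

The printed accelerations show the key difference: at equal speeds the classical controller still accelerates or brakes to restore its fixed headway, whereas the three-phase controller is indifferent across a whole range of gaps, which is what damps disturbances at bottlenecks.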
NASA Technical Reports Server (NTRS)
Randall, David A.
1990-01-01
A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers.
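In its usual bulk form, the mass-flux closure referenced here writes the turbulent flux of a conserved variable via an updraft-environment decomposition; a generic statement (notation ours, not necessarily the paper's):

```latex
% Bulk mass-flux representation of the turbulent flux of a conserved variable:
\overline{w'\varphi'}(z) \;\approx\; M(z)\,\bigl(\varphi_u(z) - \overline{\varphi}(z)\bigr),
\qquad M = \sigma_u\, w_u
```

with σ_u the updraft fractional area and w_u the updraft velocity; the same construction underlies the convective mass-flux cumulus parameterizations the abstract refers to, which is what lets fluxes depart from the linear-in-height mixed-layer profile.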
The MIXED framework: A novel approach to evaluating mixed-methods rigor.
Eckhardt, Ann L; DeVon, Holli A
2017-10-01
Evaluation of rigor in mixed-methods (MM) research is a persistent challenge due to the combination of inconsistent philosophical paradigms, the use of multiple research methods which require different skill sets, and the need to combine research at different points in the research process. Researchers have proposed a variety of ways to thoroughly evaluate MM research, but each method fails to provide a framework that is useful for the consumer of research. In contrast, the MIXED framework is meant to bridge the gap between an academic exercise and practical assessment of a published work. The MIXED framework (methods, inference, expertise, evaluation, and design) borrows from previously published frameworks to create a useful tool for the evaluation of a published study. The MIXED framework uses an experimental eight-item scale that allows for comprehensive integrated assessment of MM rigor in published manuscripts. Mixed methods are becoming increasingly prevalent in nursing and healthcare research requiring researchers and consumers to address issues unique to MM such as evaluation of rigor. © 2017 John Wiley & Sons Ltd.
Modeling Multiple Human-Automation Distributed Systems using Network-form Games
NASA Technical Reports Server (NTRS)
Brat, Guillaume
2012-01-01
The paper describes, at a high level, the network-form game framework (based on Bayes nets and game theory), which can be used to model and analyze safety issues in large, distributed, mixed human-automation systems such as NextGen.
Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare, and contrast the three methods, they are applied to a set of mixed-effects models. As method development of structural identifiability techniques for mixed-effects models has received very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
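A minimal example illustrates the question these methods address. The model below is a generic sketch (not taken from the paper) in which only a parameter combination is structurally identifiable:

```latex
% One-compartment elimination with two first-order loss rates and a random
% effect on the total rate (generic illustration, our notation):
\dot{x}_i(t) = -\,(k_1 + k_2 + \eta_i)\,x_i(t), \qquad x_i(0) = D,
\qquad \eta_i \sim \mathcal{N}(0, \omega^2),
\qquad y_{ij} = x_i(t_j) + \varepsilon_{ij}.
% Only the sum k_1 + k_2 (together with \omega^2 and the error variance) can
% be determined from input-output data; k_1 and k_2 are not separately
% identifiable, however rich the sampling.
```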
Stochastic nonlinear mixed effects: a metformin case study.
Matzuka, Brett; Chittenden, Jason; Monteleone, Jonathan; Tran, Hien
2016-02-01
In nonlinear mixed effects (NLME) modeling, the intra-individual variability is a collection of errors due to assay sensitivity, dosing, and sampling, as well as model misspecification. Utilizing stochastic differential equations (SDEs) within the NLME framework allows the decoupling of the measurement errors from the model misspecification. This makes the SDE approach a novel tool for model refinement. Using metformin clinical pharmacokinetic (PK) data, model development through the use of SDEs in population PK modeling was carried out to study the dynamics of the absorption rate. A base model was constructed and then refined by using the system noise terms of the SDEs to track model parameters and model misspecification. This provides the unique advantage of making no underlying assumptions about the structural model for the absorption process while quantifying insufficiencies in the current model. This article focuses on implementing the extended Kalman filter and unscented Kalman filter in an NLME framework for parameter estimation and model development, comparing the methodologies, and illustrating their challenges and utility. The Kalman filter algorithms were successfully implemented in NLME models using MATLAB, with run time differences between the ODE and SDE methods comparable to the differences found by Kakhi for their stochastic deconvolution.
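To make the filtering step concrete, here is a minimal Python sketch (the authors worked in MATLAB) of a Kalman filter likelihood for a one-state SDE with additive system noise; the model, names, and parameters are illustrative assumptions, and for this linear case the extended Kalman filter coincides with the exact filter:

```python
import numpy as np

def kalman_neg_loglik(y, t, k, x0, q, r):
    """Kalman filter for dx = -k*x dt + sqrt(q) dW, observed as
    y_j = x(t_j) + e_j with e_j ~ N(0, r); x(0) = x0 known.
    Returns the negative log-likelihood used in NLME estimation."""
    xhat, P, nll, tprev = x0, 0.0, 0.0, 0.0
    for tj, yj in zip(t, y):
        dt = tj - tprev
        A = np.exp(-k * dt)                        # state transition over dt
        xhat = A * xhat                            # predict state
        P = A * P * A + q * (1 - A**2) / (2 * k)   # predict variance (exact OU step)
        S = P + r                                  # innovation variance
        e = yj - xhat                              # innovation (residual)
        nll += 0.5 * (np.log(2 * np.pi * S) + e**2 / S)
        K = P / S                                  # Kalman gain
        xhat += K * e                              # update state
        P *= (1 - K)                               # update variance
        tprev = tj
    return nll
```

In an NLME setting this per-subject likelihood would be combined across subjects and maximized over the fixed effects and the variances (q, r), with the system noise q flagging model misspecification.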
A modular approach for item response theory modeling with the R package flirt.
Jeon, Minjeong; Rijmen, Frank
2016-06-01
The new R package flirt is introduced for flexible item response theory (IRT) modeling of psychological, educational, and behavior assessment data. flirt integrates a generalized linear and nonlinear mixed modeling framework with graphical model theory. The graphical model framework allows for efficient maximum likelihood estimation. The key feature of flirt is its modular approach to facilitate convenient and flexible model specifications. Researchers can construct customized IRT models by simply selecting various modeling modules, such as parametric forms, number of dimensions, item and person covariates, person groups, link functions, etc. In this paper, we describe major features of flirt and provide examples to illustrate how flirt works in practice.
Jeazet, Harold B. Tanh; Koschine, Tönjes; Staudt, Claudia; Raetzke, Klaus; Janiak, Christoph
2013-01-01
Hydrothermally stable particles of the metal-organic framework MIL-101(Cr) were incorporated into a polysulfone (PSF) matrix to produce mixed-matrix or composite membranes with excellent dispersion of MIL-101 particles and good adhesion within the polymer matrix. Pure gas (O2, N2, CO2 and CH4) permeation tests showed a significant increase of gas permeabilities of the mixed-matrix membranes without any loss in selectivity. Positron annihilation lifetime spectroscopy (PALS) indicated that the increased gas permeability is due to the free volume in the PSF polymer and the added large free volume inside the MIL-101 particles. The trend of the gas transport properties of the composite membranes could be reproduced by a Maxwell model. PMID:24957061
Dynamic Latent Trait Models with Mixed Hidden Markov Structure for Mixed Longitudinal Outcomes.
Zhang, Yue; Berhane, Kiros
2016-01-01
We propose a general Bayesian joint modeling approach to model mixed longitudinal outcomes from the exponential family that takes into account any differential misclassification that may exist among categorical outcomes. Under this framework, outcomes observed without measurement error are related to latent trait variables through generalized linear mixed effects models. The misclassified outcomes are related to latent class variables, which represent unobserved real states, using mixed hidden Markov models (MHMM). In addition to enabling the estimation of parameters in prevalence, transition, and misclassification probabilities, MHMMs capture cluster-level heterogeneity. A transition modeling structure allows the latent trait and latent class variables to depend on observed predictors at the same time period and also on latent trait and latent class variables at previous time periods for each individual. Simulation studies are conducted to make comparisons with traditional models in order to illustrate the gains from the proposed approach. The new approach is applied to data from the Southern California Children's Health Study (CHS) to jointly model questionnaire-based asthma state and multiple lung function measurements in order to gain better insight into the underlying biological mechanism that governs the inter-relationship between asthma state and lung function development.
Numerical Coupling and Simulation of Point-Mass System with the Turbulent Fluid Flow
NASA Astrophysics Data System (ADS)
Gao, Zheng
A computational framework that combines the Eulerian description of the turbulence field with a Lagrangian point-mass ensemble is proposed in this dissertation. Depending on the Reynolds number, the turbulence field is simulated using Direct Numerical Simulation (DNS) or an eddy viscosity model. Meanwhile, particle systems, such as spring-mass systems and cloud droplets, are modeled as ordinary differential equation systems, which are stiff and hence pose a challenge to the stability of the entire system. This computational framework is applied to the numerical study of parachute deceleration and cloud microphysics. These two distinct problems can be uniformly modeled with Partial Differential Equations (PDEs) and Ordinary Differential Equations (ODEs), and numerically solved in the same framework. For the parachute simulation, a novel porosity model is proposed to simulate the porous effects of the parachute canopy. This model is easy to implement with the projection method and is able to reproduce Darcy's law observed in experiments. Moreover, the impacts of using different versions of the k-epsilon turbulence model in the parachute simulation have been investigated, with the conclusion that the standard and Re-Normalisation Group (RNG) models may overestimate the turbulence effects when the Reynolds number is small, while the Realizable model performs consistently at both large and small Reynolds numbers. For another application, cloud microphysics, the cloud entrainment-mixing problem is studied in the same numerical framework. Three sets of DNS are carried out with both decaying and forced turbulence. The numerical results suggest a new way to parameterize the degree of cloud mixing using dynamical measures. The numerical experiments also verify the negative relationship between the droplet number concentration and the vorticity field. The results imply that gravity has less impact on forced turbulence than on decaying turbulence. In summary, the proposed framework can be used to solve physics problems that involve a turbulence field and a point-mass system, and therefore has broad applications.
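The stiffness mentioned above comes from the particle response time. A standard point-particle (Stokes drag) model of the Lagrangian ensemble, shown here purely for illustration, makes this explicit:

```latex
% Point-mass droplet in a turbulence field u(x, t) (standard Stokes-drag form,
% not the dissertation's exact equations):
\frac{d\mathbf{x}_p}{dt} = \mathbf{v}_p, \qquad
\frac{d\mathbf{v}_p}{dt} = \frac{\mathbf{u}(\mathbf{x}_p, t) - \mathbf{v}_p}{\tau_p} + \mathbf{g},
\qquad \tau_p = \frac{\rho_p d_p^2}{18\,\mu}.
% The ODE system becomes stiff when the response time \tau_p is much smaller
% than the time step affordable for the Eulerian turbulence field.
```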
Mittler, Jessica N; Martsolf, Grant R; Telenko, Shannon J; Scanlon, Dennis P
2013-03-01
Policymakers and practitioners continue to pursue initiatives designed to engage individuals in their health and health care despite discordant views and mixed evidence regarding the ability to cultivate greater individual engagement that improves Americans' health and well-being and helps manage health care costs. There is limited and mixed evidence regarding the value of different interventions. Based on our involvement in evaluating various community-based consumer engagement initiatives and a targeted literature review of models of behavior change, we identified the need for a framework to classify the universe of consumer engagement initiatives toward advancing policymakers' and practitioners' knowledge of their value and fit in various contexts. We developed a framework that expanded our conceptualization of consumer engagement, building on elements of two common models, the individually focused transtheoretical model of behavior and the broader, multilevel social ecological model. Finally, we applied this framework to one community's existing consumer engagement program. Consumer engagement in health and health care refers to the performance of specific behaviors ("engaged behaviors") and/or an individual's capacity and motivation to perform these behaviors ("activation"). These two dimensions are related but distinct and thus should be differentiated. The framework creates four classification schemas, by (1) targeted behavior types (self-management, health care encounter, shopping, and health behaviors) and by (2) individual, (3) group, and (4) community dimensions. Our example illustrates that the framework can systematically classify a variety of consumer engagement programs, and that this exercise and resulting characterization can provide a structured way to consider the program and how its components fit program goals both individually and collectively. Applying the framework could help advance the field by making policymakers and practitioners aware of the wide range of approaches, providing a structured way to organize and characterize interventions retrospectively, and helping them consider how they can meet the program's goals both individually and collectively. © 2013 Milbank Memorial Fund.
Mittler, Jessica N; Martsolf, Grant R; Telenko, Shannon J; Scanlon, Dennis P
2013-01-01
Context Policymakers and practitioners continue to pursue initiatives designed to engage individuals in their health and health care despite discordant views and mixed evidence regarding the ability to cultivate greater individual engagement that improves Americans’ health and well-being and helps manage health care costs. There is limited and mixed evidence regarding the value of different interventions. Methods Based on our involvement in evaluating various community-based consumer engagement initiatives and a targeted literature review of models of behavior change, we identified the need for a framework to classify the universe of consumer engagement initiatives toward advancing policymakers' and practitioners' knowledge of their value and fit in various contexts. We developed a framework that expanded our conceptualization of consumer engagement, building on elements of two common models, the individually focused transtheoretical model of behavior and the broader, multilevel social ecological model. Finally, we applied this framework to one community's existing consumer engagement program. Findings Consumer engagement in health and health care refers to the performance of specific behaviors (“engaged behaviors”) and/or an individual's capacity and motivation to perform these behaviors (“activation”). These two dimensions are related but distinct and thus should be differentiated. The framework creates four classification schemas, by (1) targeted behavior types (self-management, health care encounter, shopping, and health behaviors) and by (2) individual, (3) group, and (4) community dimensions. Our example illustrates that the framework can systematically classify a variety of consumer engagement programs, and that this exercise and resulting characterization can provide a structured way to consider the program and how its components fit program goals both individually and collectively. Conclusions Applying the framework could help advance the field by making policymakers and practitioners aware of the wide range of approaches, providing a structured way to organize and characterize interventions retrospectively, and helping them consider how they can meet the program's goals both individually and collectively. PMID:23488711
Weak Interaction Models with New Quarks and Right-handed Currents
DOE R&D Accomplishments Database
Wilczek, F. A.; Zee, A.; Kingsley, R. L.; Treiman, S. B.
1975-06-01
We discuss various weak interaction issues for a general class of models within the SU(2) x U(1) gauge theory framework, with special emphasis on the effects of right-handed, charged currents and of quarks bearing new quantum numbers. In particular we consider the restrictions on model building which are imposed by the small K_L-K_S mass difference and by the ΔI = 1/2 rule; and we classify various possibilities for neutral current interactions and, in the case of heavy mesons with new quantum numbers, various possibilities for mixing effects analogous to K_L-K_S mixing.
Printer model for dot-on-dot halftone screens
NASA Astrophysics Data System (ADS)
Balasubramanian, Raja
1995-04-01
A printer model is described for dot-on-dot halftone screens. For a given input CMYK signal, the model predicts the resulting spectral reflectance of the printed patch. The model is derived in two steps. First, the C, M, Y, K dot growth functions are determined, which relate the input digital value to the actual dot area coverages of the colorants. Next, the reflectance of a patch is predicted as a weighted combination of the reflectances of the four solid C, M, Y, K patches and their various overlays. This approach is analogous to the Neugebauer model, with the random mixing equations being replaced by dot-on-dot mixing equations. A Yule-Nielsen correction factor is incorporated to account for light scattering within the paper. The dot area functions and Yule-Nielsen parameter are chosen to optimize the fit to a set of training data. The model is also extended to a cellular framework, requiring additional measurements. The model is tested with a four-color xerographic printer employing a line-on-line halftone screen. CIE L*a*b* errors are obtained between measurements and model predictions. The Yule-Nielsen factor significantly decreases the model error. Accuracy is also increased with the use of a cellular framework.
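The weighted-combination step with the Yule-Nielsen correction fits in a few lines. The Python sketch below is an illustrative rendering (function and parameter names are ours); the dot-on-dot character of the screen enters only through the area weights passed in, which would come from dot-on-dot overlap rules rather than the Demichel (random-mixing) equations:

```python
import numpy as np

def yule_nielsen_reflectance(weights, primaries, n):
    """Predict patch spectral reflectance as a Yule-Nielsen-corrected
    weighted combination of solid-primary reflectances.

    weights   : area coverages of each Neugebauer primary (sum to 1)
    primaries : (num_primaries, num_wavelengths) solid-patch reflectances
    n         : Yule-Nielsen factor accounting for light scatter in paper
    """
    w = np.asarray(weights)[:, None]
    R_1n = np.asarray(primaries) ** (1.0 / n)    # 1/n-power domain
    return (w * R_1n).sum(axis=0) ** n           # back to reflectance

# Hypothetical example: paper white and solid cyan at three wavelengths,
# 60% cyan coverage, n = 1.7
print(yule_nielsen_reflectance([0.4, 0.6],
                               [[0.9, 0.9, 0.9], [0.1, 0.2, 0.6]], 1.7))
```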
Computational Analyses of Pressurization in Cryogenic Tanks
NASA Technical Reports Server (NTRS)
Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chun P.; Field, Robert E.; Ryan, Harry
2010-01-01
A comprehensive numerical framework utilizing multi-element unstructured CFD and rigorous real fluid property routines has been developed to carry out analyses of propellant tank and delivery systems at NASA SSC. Traditionally, CFD modeling of pressurization and mixing in cryogenic tanks has been difficult, primarily because the fluids in the tank co-exist in different sub-critical and supercritical states with largely varying properties that have to be accurately accounted for in order to predict the correct mixing and phase change between the ullage and the propellant. For example, during tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant, including heat transfer and phase change effects, and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. In our modeling framework, we incorporated two different approaches to real fluids modeling: (a) the first approach is based on the HBMS model developed by Hirschfelder, Buehler, McGee, and Sutton, and (b) the second approach is based on a cubic equation of state developed by Soave, Redlich, and Kwong (SRK). Both approaches cover fluid properties and property variation spanning sub-critical gas and liquid states as well as the supercritical states. Both models were rigorously tested, and properties for common fluids such as oxygen, nitrogen, and hydrogen were compared against NIST data in both the sub-critical and supercritical regimes.
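Of the two property models, the SRK cubic equation of state has a compact closed form. The sketch below implements the standard SRK pressure relation in Python for illustration; the constants and the nitrogen example are textbook values, not data from the NASA analyses:

```python
import numpy as np

R = 8.314462618  # universal gas constant, J/(mol K)

def srk_pressure(T, V, Tc, Pc, omega):
    """Soave-Redlich-Kwong pressure (Pa) at temperature T (K) and molar
    volume V (m^3/mol), for a fluid with critical point (Tc, Pc) and
    acentric factor omega; valid across sub- and supercritical states."""
    a = 0.42748 * R**2 * Tc**2 / Pc
    b = 0.08664 * R * Tc / Pc
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - np.sqrt(T / Tc)))**2
    return R * T / (V - b) - a * alpha / (V * (V + b))

# Example: nitrogen (Tc = 126.2 K, Pc = 3.396 MPa, omega = 0.037)
# at 300 K and 0.024 m^3/mol; result is close to the ideal-gas ~1 atm.
print(srk_pressure(300.0, 0.024, 126.2, 3.396e6, 0.037))
```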
Unification of gauge, family, and flavor symmetries illustrated in gauged SU(12) models
Albright, Carl H.; Feger, Robert P.; Kephart, Thomas W.
2016-04-25
In this study, to explain quark and lepton masses and mixing angles, one has to extend the standard model, and the usual practice is to put the quarks and leptons into irreducible representations of discrete groups. We argue that discrete flavor symmetries (and their concomitant problems) can be avoided if we extend the gauge group. In the framework of SU(12) we give explicit examples of models having varying degrees of predictability obtained by scanning over groups and representations and identifying cases with operators contributing to mass and mixing matrices that need little fine-tuning of prefactors. Fitting with quark and lepton masses run to the GUT scale and known mixing angles allows us to make predictions for the neutrino masses and hierarchy, the octant of the atmospheric mixing angle, leptonic CP violation, Majorana phases, and the effective mass observed in neutrinoless double beta decay.
ERIC Educational Resources Information Center
Kwok, Oi-man; West, Stephen G.; Green, Samuel B.
2007-01-01
This Monte Carlo study examined the impact of misspecifying the Σ matrix in longitudinal data analysis under both the multilevel model and mixed model frameworks. Under the multilevel model approach, under-specification and general-misspecification of the Σ matrix usually resulted in overestimation of the variances of the random…
Developing an appropriate staff mix for anticoagulation clinics: functional job analysis approach
NASA Astrophysics Data System (ADS)
Hailemariam, Desta A.; Shan, Xiaojun; Chung, Sung H.; Khasawneh, Mohammad T.; Lukesh, William; Park, Angela; Rose, Adam
2018-05-01
Anticoagulation clinics (ACCs) are specialty clinics that manage patients with blood clotting problems. Since labor costs usually account for a substantial portion of a healthcare organization's budget, optimizing the number and types of staff required is often the focus, especially for ACCs, where labor-intensive staff-patient interactions occur. A significant portion of tasks performed by clinical pharmacists might be completed by clinical pharmacy technicians, which are less expensive resources. While nurse staffing models for a hospital inpatient unit are well established, these models are not readily applicable to staffing ACCs. Therefore, the objective of this paper is to develop a framework for determining the right staff mix of clinical pharmacists and clinical pharmacy technicians that increases the efficiency of the care delivery process and improves the productivity of ACC staff. A framework is developed and applied to build a semi-automated full-time equivalent (FTE) calculator and compare various staffing scenarios using a simulation model. The FTE calculator provides the right staff mix for a given staff utilization target. Data collected from the ACCs at the VA Boston Healthcare System are used to illustrate the FTE calculator and the simulation model. The results of the simulation model can be used by ACC managers to easily determine the number of FTEs of clinical pharmacists and clinical pharmacy technicians required to reach the target utilization and the corresponding staffing cost.
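The core arithmetic behind such an FTE calculator is simple. The Python sketch below shows one plausible form; all names and numbers are illustrative assumptions, not the paper's actual calculator:

```python
def required_ftes(annual_task_minutes, minutes_per_fte_year, target_utilization):
    """FTEs needed so that expected workload hits the utilization target.

    annual_task_minutes  : total yearly task time demanded of this staff type
    minutes_per_fte_year : productive minutes one FTE supplies per year
    target_utilization   : desired busy fraction, e.g. 0.80
    """
    return annual_task_minutes / (minutes_per_fte_year * target_utilization)

# Hypothetical example: 1.2M minutes/year of pharmacist tasks, 100,800
# productive minutes per FTE-year (about 1,680 hours), 80% utilization target
print(round(required_ftes(1_200_000, 100_800, 0.80), 2))  # -> 14.88 FTEs
```

Run per task type (pharmacist vs. technician, based on which tasks can be delegated), this yields the staff mix; the paper's simulation model then stress-tests such static numbers against queueing variability.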
Dark matter and electroweak phase transition in the mixed scalar dark matter model
NASA Astrophysics Data System (ADS)
Liu, Xuewen; Bian, Ligong
2018-03-01
We study the electroweak phase transition in the framework of the scalar singlet-doublet mixed dark matter model, in which the particle dark matter candidate is the lightest neutral Higgs that comprises the CP-even component of the inert doublet and a singlet scalar. The dark matter can be dominated by the inert doublet or singlet scalar depending on the mixing. We present several benchmark models to investigate the two situations after imposing several theoretical and experimental constraints. An additional singlet scalar and the inert doublet drive the electroweak phase transition to be strongly first order. A strong first-order electroweak phase transition and a viable dark matter candidate can be accomplished in two benchmark models simultaneously, for which a proper mass splitting among the neutral and charged Higgs masses is needed.
An Investigation of a Hybrid Mixing Model for PDF Simulations of Turbulent Premixed Flames
NASA Astrophysics Data System (ADS)
Zhou, Hua; Li, Shan; Wang, Hu; Ren, Zhuyin
2015-11-01
Predictive simulations of turbulent premixed flames over a wide range of Damköhler numbers in the framework of the Probability Density Function (PDF) method still remain challenging due to deficiencies in current micro-mixing models. In this work, a hybrid micro-mixing model, valid in both the flamelet regime and the broken reaction zone regime, is proposed. A priori testing of this model is first performed by examining the conditional scalar dissipation rate and conditional scalar diffusion in a 3-D direct numerical simulation dataset of a temporally evolving turbulent slot jet flame of lean premixed H2-air in the thin reaction zone regime. Then, this new model is applied to PDF simulations of the Piloted Premixed Jet Burner (PPJB) flames, which are a set of highly sheared turbulent premixed flames that feature strong turbulence-chemistry interaction at high Reynolds and Karlovitz numbers. Supported by NSFC 51476087 and NSFC 91441202.
Using a Mixed-Methods RE-AIM Framework to Evaluate Community Health Programs for Older Latinas.
Schwingel, Andiara; Gálvez, Patricia; Linares, Deborah; Sebastião, Emerson
2017-06-01
This study used the RE-AIM (Reach, Effectiveness, Adoption, Implementation, and Maintenance) framework to evaluate a promotora-led community health program designed for Latinas ages 50 and older that sought to improve physical activity, nutrition, and stress management. A mixed-methods evaluation approach was administered at participant and organizational levels with a focus on the efficacy, adoption, implementation, and maintenance components of the RE-AIM theoretical model. The program was shown to be effective at improving participants' eating behaviors, increasing their physical activity levels, and lowering their depressive symptoms. Promotoras felt motivated and sufficiently prepared to deliver the program. Some implementation challenges were reported. More child care opportunities and an increased focus on mental well-being were suggested. The promotora delivery model has promise for program sustainability with both promotoras and participants alike expressing interest in leading future programs.
An adjoint-based framework for maximizing mixing in binary fluids
NASA Astrophysics Data System (ADS)
Eggl, Maximilian; Schmid, Peter
2017-11-01
Mixing in the inertial but laminar parameter regime is a common application in a wide range of industries. Enhancing the efficiency of mixing processes thus has a fundamental effect on product quality, material homogeneity and, last but not least, production costs. In this project, we address mixing efficiency in the above-mentioned regime (Reynolds number Re = 1000, Péclet number Pe = 1000) by developing and demonstrating an algorithm based on nonlinear adjoint looping that minimizes the variance of a passive scalar field which models our binary Newtonian fluids. The numerical method is based on the FLUSI code (Engels et al. 2016), a Fourier pseudo-spectral code, which we modified and augmented by scalar transport and adjoint equations. Mixing is accomplished by moving stirrers, which are numerically modeled using a penalization approach. In our two-dimensional simulations we consider rotating circular and elliptic stirrers and extract optimal mixing strategies from the iterative scheme. The case of optimizing the shape and rotational speed of the stirrers will be demonstrated.
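In adjoint-looping formulations of this kind, the objective and constraint typically take the following shape; this is a generic statement of variance minimization for a passive scalar, not the exact FLUSI implementation:

```latex
% Minimize the variance of a zero-mean passive scalar \theta at final time T,
% subject to advection-diffusion (generic illustrative form):
\min_{\text{stirrer motion}} \; J = \tfrac{1}{2}\int_\Omega \theta(\mathbf{x},T)^2 \, d\mathbf{x}
\quad \text{s.t.} \quad
\frac{\partial \theta}{\partial t} + \mathbf{u}\cdot\nabla\theta
= \frac{1}{\mathrm{Pe}}\,\nabla^2\theta .
% The adjoint scalar field is integrated backward in time from \theta(x, T)
% and supplies the gradient of J with respect to the control (stirrer motion)
% at each iteration of the nonlinear loop.
```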
General Framework for Effect Sizes in Cluster Randomized Experiments
ERIC Educational Resources Information Center
VanHoudnos, Nathan
2016-01-01
Cluster randomized experiments are ubiquitous in modern education research. Although a variety of modeling approaches are used to analyze these data, perhaps the most common methodology is a normal mixed effects model where some effects, such as the treatment effect, are regarded as fixed, and others, such as the effect of group random assignment…
Dig into Learning: A Program Evaluation of an Agricultural Literacy Innovation
ERIC Educational Resources Information Center
Edwards, Erica Brown
2016-01-01
This study is a mixed-methods program evaluation of an agricultural literacy innovation in a local school district in rural eastern North Carolina. This evaluation describes the use of a theory-based framework, the Concerns-Based Adoption Model (CBAM), in accordance with Stufflebeam's Context, Input, Process, Product (CIPP) model by evaluating the…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, Juliane
MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.
Neutrino-electron scattering: general constraints on Z′ and dark photon models
NASA Astrophysics Data System (ADS)
Lindner, Manfred; Queiroz, Farinaldo S.; Rodejohann, Werner; Xu, Xun-Jie
2018-05-01
We study the framework of U(1)_X models with kinetic mixing and/or mass mixing terms. We give general and exact analytic formulas of fermion gauge interactions and the cross sections of neutrino-electron scattering in such models. Then we derive limits on a variety of U(1)_X models that induce new physics contributions to neutrino-electron scattering, taking into account interference between the new physics and Standard Model contributions. Data from TEXONO, CHARM-II and GEMMA are analyzed and shown to be complementary to each other to provide the most restrictive bounds on masses of the new vector bosons. In particular, we demonstrate the validity of our results for dark photon-like as well as light Z′ models.
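For orientation, a common parameterization of the kinetic and mass mixing terms studied in such U(1)_X extensions is the following (a standard textbook form, not the paper's specific conventions):

```latex
% Kinetic mixing (parameter \epsilon) between hypercharge B and the new gauge
% boson X, plus an optional mass mixing term with the Z (standard form):
\mathcal{L} \supset -\tfrac{1}{4}\, X_{\mu\nu} X^{\mu\nu}
\;-\; \tfrac{\epsilon}{2}\, B_{\mu\nu} X^{\mu\nu}
\;+\; \tfrac{1}{2}\, m_X^2\, X_\mu X^\mu
\;+\; \delta m^2\, Z_\mu X^\mu .
% Diagonalizing the kinetic and mass terms induces the effective couplings of
% the physical Z' to neutrinos and electrons probed in the scattering data.
```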
A Case Study of a Mixed Methods Study Engaged in Integrated Data Analysis
ERIC Educational Resources Information Center
Schiazza, Daniela Marie
2013-01-01
The nascent field of mixed methods research has yet to develop a cohesive framework of guidelines and procedures for mixed methods data analysis (Greene, 2008). To support the field's development of analytical frameworks, this case study reflects on the development and implementation of a mixed methods study engaged in integrated data analysis.…
Mixed-linker strategy for the construction of multifunctional metal–organic frameworks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Jun-Sheng; Yuan, Shuai; Wang, Qi
2017-01-01
The mixed-linker strategy is a promising way to construct multifunctional metal–organic frameworks (MOFs). In this review, we survey recent developments, discussions, and challenges related to the preparation and applications of four types of mixed-linker MOF materials.
A paradox on quantum field theory of neutrino mixing and oscillations
NASA Astrophysics Data System (ADS)
Li, Yu-Feng; Liu, Qiu-Yu
2006-10-01
Neutrino mixing and oscillations in the quantum field theory framework have been studied before, showing that the Fock space of flavor states is unitarily inequivalent to that of mass states (the inequivalent vacua model). A paradox emerges when we use these neutrino weak states to calculate the amplitude of W boson decay. The branching ratio of W+ → e+ + νμ to W+ → e+ + νe is of order O(m_i^2/k^2). This existence of flavor-changing currents contradicts the Hamiltonian we started from, and the usual knowledge about weak processes. Also, negative-energy neutrinos (violating the principle of energy conservation) appear in this framework. We discuss possible reasons for the appearance of this paradox.
Su, Li; Farewell, Vernon T
2013-01-01
For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. PMID:24201470
Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration
NASA Technical Reports Server (NTRS)
Groce, Alex; Joshi, Rajeev
2008-01-01
Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.
NASA Astrophysics Data System (ADS)
Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi
2017-11-01
Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. Depending on the problem at hand, a Lagrangian framework can have benefits, while in other cases an Eulerian one might have advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.
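The Lagrangian half of such a hybrid can be sketched compactly: in a Spatial Markov model, successive cell-crossing travel times are correlated through a transition matrix rather than drawn independently. The Python sketch below illustrates the idea under assumed inputs; the transition matrix and bin times are hypothetical placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_markov_travel_times(P, bin_times, n_cells, n_particles):
    """Sample correlated successive travel times (Spatial Markov idea).

    P         : (K, K) transition matrix between travel-time bins
    bin_times : representative travel time of each bin
    Successive cell crossings are Markov-correlated, not iid draws."""
    P, bin_times = np.asarray(P), np.asarray(bin_times)
    K = len(bin_times)
    states = rng.integers(0, K, size=n_particles)   # initial bins
    total = np.zeros(n_particles)
    for _ in range(n_cells):
        total += bin_times[states]
        # draw each particle's next bin from its row of P
        states = np.array([rng.choice(K, p=P[s]) for s in states])
    return total

# Hypothetical example: 3 bins, persistent (diagonally dominant) transitions
P = [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2], [0.1, 0.3, 0.6]]
arrivals = spatial_markov_travel_times(P, [1.0, 2.0, 5.0], 50, 1000)
```

In the LATERS construction, concentrations carried by such particles are then binned onto an Eulerian grid, where the reaction terms are evaluated without particle-pair distance computations.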
A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models.
Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S; Wu, Xiaowei; Müller, Rolf
2018-01-01
Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design.
A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models
Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S.; Wu, Xiaowei; Müller, Rolf
2017-01-01
Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design. PMID:29749977
Tetraquark mixing framework for isoscalar resonances in light mesons
NASA Astrophysics Data System (ADS)
Kim, Hungchong; Kim, K. S.; Cheoun, Myung-Ki; Oka, Makoto
2018-05-01
Recently, a tetraquark mixing framework has been proposed for light mesons and applied more or less successfully to the isovector resonances, a0(980), a0(1450), as well as to the isodoublet resonances, K0*(800), K0*(1430). In this work, we present a more extensive view on the mixing framework and apply this framework to the isoscalar resonances, f0(500), f0(980), f0(1370), f0(1500). Tetraquarks in this framework can have two spin configurations containing either spin-0 diquark or spin-1 diquark, and each configuration forms a nonet in flavor space. The two spin configurations are found to mix strongly through the color-spin interactions. Their mixtures, which diagonalize the hyperfine masses, can generate the physical resonances constituting two nonets, which, in fact, coincide roughly with the experimental observation. We identify that f0(500), f0(980) are the isoscalar members in the light nonet, and f0(1370), f0(1500) are the similar members in the heavy nonet. This means that the spin configuration mixing, as it relates the corresponding members in the two nonets, can generate f0(500), f0(1370) among the members in light mass, and f0(980), f0(1500) in heavy mass. The complication arises because the isoscalar members of each nonet are subject to an additional flavor mixing known as the Okubo-Zweig-Iizuka rule, so that f0(500), f0(980), and similarly f0(1370), f0(1500), are mixtures of two isoscalar members belonging to an octet and a singlet in SU_f(3). The tetraquark mixing framework including the flavor mixing is tested for the isoscalar resonances in terms of the mass splitting and the fall-apart decay modes. The mass splitting among the isoscalar resonances is found to be consistent qualitatively with their hyperfine mass splitting strongly driven by the spin configuration mixing, which suggests that the tetraquark mixing framework works. The fall-apart modes from our tetraquarks also seem to be consistent with the experimental modes. We also discuss the possible existence of spin-1 tetraquarks that can be constructed from the spin-1 diquark.
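The two 2x2 mixings described above can be summarized schematically. The rotations below are a generic illustration of the structure, with the mixing angles left unspecified rather than taken from the paper:

```latex
% Spin-configuration mixing between the spin-0-diquark (|Q_0>) and
% spin-1-diquark (|Q_1>) tetraquark states (schematic):
|{\rm light}\rangle = \cos\theta\,|Q_0\rangle + \sin\theta\,|Q_1\rangle, \qquad
|{\rm heavy}\rangle = -\sin\theta\,|Q_0\rangle + \cos\theta\,|Q_1\rangle,
% followed, for the isoscalars only, by octet-singlet flavor mixing (OZI rule):
f_0 = \cos\phi\,|\mathbf{8}\rangle + \sin\phi\,|\mathbf{1}\rangle, \qquad
f_0' = -\sin\phi\,|\mathbf{8}\rangle + \cos\phi\,|\mathbf{1}\rangle .
```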
A Framework for Voxel-Based Global Scale Modeling of Urban Environments
NASA Astrophysics Data System (ADS)
Gehrung, Joachim; Hebel, Marcus; Arens, Michael; Stilla, Uwe
2016-10-01
The generation of 3D city models is a very active field of research. Modeling environments as point clouds may be fast, but has disadvantages that are readily addressed by volumetric representations, especially when considering selective data acquisition, change detection, and fast-changing environments. Therefore, this paper proposes a framework for the volumetric modeling and visualization of large-scale urban environments. Besides an architecture and the right mix of algorithms for the task, two compression strategies for volumetric models as well as a data-quality-based approach for the import of range measurements are proposed. The capabilities of the framework are shown on a mobile laser scanning dataset of the Technical University of Munich. Furthermore, the loss of the compression techniques is evaluated and their memory consumption is compared to that of raw point clouds. The presented results show that generation, storage, and real-time rendering of even large urban models are feasible with off-the-shelf hardware.
Reed, Frances M; Fitzgerald, Les; Rae, Melanie
2016-01-01
To highlight philosophical and theoretical considerations for planning a mixed methods research design that can inform a practice model to guide rural district nursing end-of-life care. Conceptual models of nursing in the community are general and lack guidance for rural district nursing care. A combination of pragmatism and nurse agency theory can provide a framework for ethical considerations in mixed methods research in the private world of rural district end-of-life care. Reflection on experience gathered in a two-stage qualitative research phase, involving rural district nurses who use advocacy successfully, can inform a quantitative phase for testing and complementing the data. Ongoing data analysis and integration result in generalisable inferences to achieve the research objective. Mixed methods research that creatively combines philosophical and theoretical elements to guide design in the particular ethical situation of community end-of-life care can be used to explore an emerging field of interest and test the findings for evidence to guide quality nursing practice. Combining philosophy and nursing theory to guide mixed methods research design increases the opportunity for sound research outcomes that can inform a nursing model of care.
Search for sterile neutrino mixing in the νμ → ντ appearance channel with the OPERA detector
NASA Astrophysics Data System (ADS)
Mauri, N.
2016-11-01
The OPERA experiment has observed muon neutrino to tau neutrino oscillations in the atmospheric sector in appearance mode. Five ντ candidate events have been detected, a number consistent with the expectation from the "standard" 3ν framework. Based on this result, new limits on the mixing parameters of a massive sterile neutrino have been set. The analysis is performed in the 3+1 neutrino model.
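For context, in the 3+1 scheme the νμ → ντ appearance probability reduces, in the limit where the sterile mass splitting dominates, to the standard effective two-flavor form (textbook expression, not OPERA's full oscillation fit):

```latex
% Effective appearance probability at baseline L and energy E when
% \Delta m^2_{41} dominates (standard 3+1 short-form):
P_{\mu\tau} \simeq 4\,|U_{\mu 4}|^2 |U_{\tau 4}|^2
\,\sin^2\!\left(\frac{\Delta m^2_{41}\, L}{4E}\right)
= \sin^2 2\theta_{\mu\tau}\,\sin^2\!\left(\frac{\Delta m^2_{41}\, L}{4E}\right),
% so the limits are naturally expressed in the (\sin^2 2\theta_{\mu\tau},
% \Delta m^2_{41}) plane.
```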
NASA Astrophysics Data System (ADS)
Vicuna, S.; Melo, O.; Meza, F. J.; Alvarez, P.; Maureira, F.; Sanchez, A.; Tapia, A.; Cortes, M.; Dale, L. L.
2013-12-01
Future climate conditions could affect water supply and demand in river basins throughout the world, especially in snowmelt-driven, agriculture-oriented basins such as those found throughout central Chile. Increasing temperature and decreasing precipitation will affect both the magnitude and timing of water supply in this part of the world. Different adaptation strategies could be implemented to reduce the impacts of such scenarios. Some could be incorporated as planned policies decided at the basin or Water Use Organization levels. Examples include changing large-scale irrigation infrastructure (reservoirs and main channels), either physically or in its operation. Complementing these strategies, it is reasonable to think that at a disaggregated level, farmers would also react (adapt) to these new conditions using a mix of options to modify their patterns of consumption (irrigation efficiency, crop mix, crop area reduction), increase their ability to access new sources of water (groundwater, water markets), or compensate their expected losses (insurance). We present a modeling framework developed to represent these issues, using as a case study the Limarí basin located in central Chile. This basin is a renowned example of how the development of reservoirs and irrigation infrastructure can reduce climate vulnerabilities, enabling the economic development of a basin. Farmers in this basin tackle climate variability by adopting different strategies that depend first on the reservoir water allocation rule, then on the type and size of investment at their farms, and finally on their potential access to water markets and other water supply options. The framework developed can be used to study these strategies under current and future climate scenarios. The cornerstone of the framework is a hydrology and water resources model developed on the WEAP platform. This model is able to reproduce the large-scale hydrologic features of the basin, such as snowmelt hydrology, reservoir operation, and groundwater dynamics. Crop yields under different irrigation patterns have been inferred using a calibrated CropSyst model. These crop yields, together with user-association irrigation constraints, are used in a GAMS optimization model embedded dynamically in WEAP in order to obtain yearly decisions on crop mix (including fallow land), irrigation patterns, and participation in the spot water market. The GAMS optimization model has been calibrated using annual crop mix time series derived from a combination of sources, ranging from different types of census data to satellite images. The resulting modeling platform is able to simulate, under historic and future climate scenarios, water availability in different locations of the basin, with the associated crop yield and economic consequences. The platform also allows the implementation of autonomous and planned adaptation strategies that could reduce the impacts of climate variability and climate change.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprague, Michael A.; Stickel, Jonathan J.; Sitaraman, Hariswaran
Designing processing equipment for the mixing of settling suspensions is a challenging problem. Achieving low-cost mixing is especially difficult for the application of slowly reacting suspended solids because the cost of impeller power consumption becomes quite high due to the long reaction times (batch mode) or due to large-volume reactors (continuous mode). Further, the usual scale-up metrics for mixing, e.g., constant tip speed and constant power per volume, do not apply well for mixing of suspensions. As an alternative, computational fluid dynamics (CFD) can be useful for analyzing mixing at multiple scales and determining appropriate mixer designs and operating parameters. We developed a mixture model to describe the hydrodynamics of a settling cellulose suspension. The suspension motion is represented as a single velocity field in a computationally efficient Eulerian framework. The solids are represented by a scalar volume-fraction field that undergoes transport due to particle diffusion, settling, fluid advection, and shear stress. A settling model and a viscosity model, both functions of volume fraction, were selected to fit experimental settling and viscosity data, respectively. Simulations were performed with the open-source Nek5000 CFD program, which is based on the high-order spectral-finite-element method. Simulations were performed for the cellulose suspension undergoing mixing in a laboratory-scale vane mixer. The settled-bed heights predicted by the simulations were in semi-quantitative agreement with experimental observations. Further, the simulation results were in quantitative agreement with experimentally obtained torque and mixing-rate data, including a characteristic torque bifurcation. In future work, we plan to couple this CFD model with a reaction-kinetics model for the enzymatic digestion of cellulose, allowing us to predict enzymatic digestion performance for various mixing intensities and novel reactor designs.
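Typical functional forms for the two volume-fraction-dependent constitutive inputs mentioned above are hindered-settling and concentrated-suspension viscosity laws; these are common illustrative choices, not necessarily the fits used in the study:

```latex
% Hindered settling (Richardson-Zaki form) and suspension viscosity
% (Krieger-Dougherty form), both functions of solids volume fraction \phi:
v_s(\phi) = v_0\,(1-\phi)^{n}, \qquad
\mu(\phi) = \mu_0 \left(1 - \frac{\phi}{\phi_{\max}}\right)^{-[\eta]\,\phi_{\max}},
% where v_0 is the isolated-particle settling speed, n an empirical exponent,
% \phi_{\max} the maximum packing fraction, and [\eta] the intrinsic viscosity.
```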
Exotic superconducting states in the extended attractive Hubbard model.
Nayak, Swagatam; Kumar, Sanjeev
2018-04-04
We show that the extended attractive Hubbard model on a square lattice allows for a variety of superconducting phases, including exotic mixed-symmetry phases with d_{x^2-y^2} + i[s + s*] and d_{x^2-y^2} + p_x symmetries, and a novel p_x + i p_y state. The calculations are performed within the Hartree-Fock Bardeen-Cooper-Schrieffer framework. The ground states of the mean-field Hamiltonian are obtained via a minimization scheme that relaxes the symmetry constraints on the superconducting solutions, hence allowing for a mixing of s-, p- and d-wave order parameters. The results are obtained within the assumption of uniform-density states. Our results show that the extended attractive Hubbard model can serve as an effective model for investigating properties of exotic superconductors.
Exotic superconducting states in the extended attractive Hubbard model
NASA Astrophysics Data System (ADS)
Nayak, Swagatam; Kumar, Sanjeev
2018-04-01
We show that the extended attractive Hubbard model on a square lattice allows for a variety of superconducting phases, including exotic mixed-symmetry phases with d_{x^2-y^2} + i[s + s*] and d_{x^2-y^2} + p_x symmetries, and a novel p_x + i p_y state. The calculations are performed within the Hartree-Fock Bardeen-Cooper-Schrieffer framework. The ground states of the mean-field Hamiltonian are obtained via a minimization scheme that relaxes the symmetry constraints on the superconducting solutions, hence allowing for a mixing of s-, p- and d-wave order parameters. The results are obtained within the assumption of uniform-density states. Our results show that the extended attractive Hubbard model can serve as an effective model for investigating properties of exotic superconductors.
Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.
2013-01-01
Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
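The variance-partitioning model described here has a compact generic form. The sketch below uses our notation rather than the authors', showing a negative binomial mixed model with crossed site and year random effects:

```latex
% Catch count in net k at site i in year j, with negative binomial scaling r:
y_{ijk} \sim \mathrm{NegBin}(\mu_{ij},\, r), \qquad
\mathrm{Var}(y_{ijk} \mid \mu_{ij}) = \mu_{ij} + \mu_{ij}^2 / r,
% log-mean partitioned into spatial, temporal, and interaction components:
\log \mu_{ij} = \beta_0 + s_i + t_j + (st)_{ij}, \qquad
s_i \sim \mathcal{N}(0, \sigma_s^2), \quad
t_j \sim \mathcal{N}(0, \sigma_t^2), \quad
(st)_{ij} \sim \mathcal{N}(0, \sigma_{st}^2).
% The estimated \sigma^2 terms quantify how much variability is spatial,
% temporal, or ephemeral (site-by-year), the central question of the paper.
```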
Tom, Brian Dm; Su, Li; Farewell, Vernon T
2016-10-01
For semi-continuous data which are a mixture of true zeros and continuously distributed positive values, the use of two-part mixed models provides a convenient modelling framework. However, deriving population-averaged (marginal) effects from such models is not always straightforward. Su et al. presented a model that provided convenient estimation of marginal effects for the logistic component of the two-part model but the specification of marginal effects for the continuous part of the model presented in that paper was based on an incorrect formulation. We present a corrected formulation and additionally explore the use of the two-part model for inferences on the overall marginal mean, which may be of more practical relevance in our application and more generally. © The Author(s) 2013.
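The overall marginal mean discussed in the correction combines the two parts of the model. Generically, in our notation and suppressing covariate detail in the continuous part:

```latex
% Two-part model: a logistic part for Pr(Y > 0) and a continuous part for
% the positive values; the overall mean multiplies the two:
E[Y \mid \mathbf{x}] \;=\; \Pr(Y > 0 \mid \mathbf{x}) \times
E[Y \mid Y > 0, \mathbf{x}],
% and with correlated subject-level random effects (b_1, b_2) the marginal
% (population-averaged) mean requires integrating over their distribution:
E[Y \mid \mathbf{x}] = \int \mathrm{expit}\!\left(\mathbf{x}^\top\boldsymbol\beta + b_1\right)
E[Y \mid Y>0, \mathbf{x}, b_2]\; f(b_1, b_2)\, db_1\, db_2 .
% The incorrect formulation arose in the continuous factor; the correction
% addresses how this integral is evaluated.
```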
Cutting through the noise: an evaluative framework for research communication
NASA Astrophysics Data System (ADS)
Strickert, G. E.; Bradford, L. E.; Shantz, S.; Steelman, T.; Orozs, C.; Rose, I.
2017-12-01
With an ever-increasing amount of research, there is a parallel challenge to mobilize that research for decision making, policy development, and management actions. The traditional "loading dock" model of science-to-policy communication is under renovation, replaced by more engaging methods of research communication. Research communication falls on a continuum from passive methods (e.g., reports, social media, infographics) to more active methods (e.g., forum theatre, decision labs, stakeholder planning, and mixed-media installations that blend art, science, and traditional knowledge). Drawing on a five-year water science research program in the Saskatchewan River Basin, an evaluation framework is presented that draws on a wide community of knowledge users, including First Nation and Métis communities, community organizers, farmers, consultants, researchers, and civil servants. A mixed-methods framework consisting of quantitative surveys, qualitative interviews, focus groups, and Q-sorts demonstrates that participants prefer more active means of research communication to draw them into the research, but they also value more traditional and passive methods that provide more in-depth information when needed.
McClintock, B.T.; White, Gary C.; Burnham, K.P.; Pryde, M.A.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
In recent years, the mark-resight method for estimating abundance when the number of marked individuals is known has become increasingly popular. By using field-readable bands that may be resighted from a distance, these techniques can be applied to many species, and are particularly useful for relatively small, closed populations. However, due to the different assumptions and general rigidity of the available estimators, researchers must often commit to a particular model without rigorous quantitative justification for model selection based on the data. Here we introduce a nonlinear logit-normal mixed effects model addressing this need for a more generalized framework. Similar to models available for mark-recapture studies, the estimator allows a wide variety of sampling conditions to be parameterized efficiently under a robust sampling design. Resighting rates may be modeled simply or with more complexity by including fixed temporal and random individual heterogeneity effects. Using information theory, the model(s) best supported by the data may be selected from the candidate models proposed. Under this generalized framework, we hope the uncertainty associated with mark-resight model selection will be reduced substantially. We compare our model to other mark-resight abundance estimators when applied to mainland New Zealand robin (Petroica australis) data recently collected in Eglinton Valley, Fiordland National Park and summarize its performance in simulation experiments.
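The core of the estimator is a logit-normal model for resighting probability. Schematically, in our notation (the actual estimator additionally handles the robust sampling design and sampling with or without replacement):

```latex
% Resighting probability of marked individual i on occasion j, with fixed
% occasion (temporal) effects and random individual heterogeneity:
\mathrm{logit}(p_{ij}) = \mu + t_j + Z_i, \qquad
Z_i \sim \mathcal{N}(0, \sigma^2),
% so the resighting counts of marked individuals are modeled as, e.g.,
y_{ij} \sim \mathrm{Binomial}(k_j,\, p_{ij}),
% and abundance follows from scaling the unmarked sightings by the
% estimated resighting rates.
```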
Stayman, J Webster; Tilley, Steven; Siewerdsen, Jeffrey H
2014-01-01
Previous investigations [1-3] have demonstrated that integrating specific knowledge of the structure and composition of components like surgical implants, devices, and tools into a model-based reconstruction framework can improve image quality and allow for potential exposure reductions in CT. Using device knowledge in practice is complicated by uncertainties in the exact shape of components and their particular material composition. Such unknowns in the morphology and attenuation properties lead to errors in the forward model that limit the utility of component integration. In this work, a methodology is presented to accommodate both uncertainties in shape and unknown energy-dependent attenuation properties of the surgical devices. This work leverages the so-called known-component reconstruction (KCR) framework [1] with a generalized deformable registration operator and modifications to accommodate a spectral transfer function in the component model. Moreover, since this framework decomposes the object into separate background anatomy and "known" component factors, a mixed-fidelity forward model can be adopted so that measurements associated with projections through the surgical devices can be modeled with much greater accuracy. A deformable KCR (dKCR) approach using the mixed-fidelity model is introduced and applied to a flexible wire component with unknown structure and composition. Image quality advantages of dKCR over traditional reconstruction methods are illustrated in cone-beam CT (CBCT) data acquired on a testbench emulating a 3D-guided needle biopsy procedure, i.e., a deformable component (needle) with strong energy-dependent attenuation characteristics (steel) within a complex soft-tissue background.
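A compact way to express the decomposition described here (a sketch based on the cited KCR framework [1]; the symbols are ours): the attenuation object is written as background plus registered known components,

$$ \mu(\mathbf{x}) = \mu_*(\mathbf{x}) + \sum_k W(\lambda_k)\, \mu_k(\mathbf{x}), $$

where μ_* is the unknown background anatomy, μ_k is the attenuation map of the k-th component, and W(λ_k) is a registration operator (deformable in dKCR) whose parameters λ_k are estimated jointly with μ_*.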
Quasi-Geostrophic Diagnosis of Mixed-Layer Dynamics Embedded in a Mesoscale Turbulent Field
NASA Astrophysics Data System (ADS)
Chavanne, C. P.; Klein, P.
2016-02-01
A new quasi-geostrophic model has been developed to diagnose the three-dimensional circulation, including the vertical velocity, in the upper ocean from high-resolution observations of sea surface height and buoyancy. The formulation for the adiabatic component departs from the classical surface quasi-geostrophic framework considered before, since it takes into account the stratification within the surface mixed layer, which is usually much weaker than that in the ocean interior. To achieve this, the model approximates the ocean with two constant-stratification layers: a finite-thickness surface layer (the mixed layer) and an infinitely deep interior layer. It is shown that the leading-order adiabatic circulation is entirely determined if both the surface streamfunction and buoyancy anomalies are considered. The surface layer further includes a diabatic dynamical contribution. Parameterization of the diabatic vertical velocities is based on their restoring impact on the thermal-wind balance, which is perturbed by turbulent vertical mixing of momentum and buoyancy. The model skill in reproducing the three-dimensional circulation in the upper ocean from surface data is checked against the output of a high-resolution primitive-equation numerical simulation. Correlations between simulated and diagnosed vertical velocities are significantly improved in the mixed layer for the new model compared to the classical surface quasi-geostrophic model, reaching 0.9 near the surface.
Evaluating Mixed Research Studies: A Mixed Methods Approach
ERIC Educational Resources Information Center
Leech, Nancy L.; Dellinger, Amy B.; Brannagan, Kim B.; Tanaka, Hideyuki
2010-01-01
The purpose of this article is to demonstrate application of a new framework, the validation framework (VF), to assist researchers in evaluating mixed research studies. Based on an earlier work by Dellinger and Leech, a description of the VF is delineated. Using the VF, three studies from education, health care, and counseling fields are…
Sadeghi, Neda; Prastawa, Marcel; Fletcher, P Thomas; Gilmore, John H; Lin, Weili; Gerig, Guido
2012-01-01
A population growth model that represents the growth trajectories of individual subjects is critical to study and understand neurodevelopment. This paper presents a framework for jointly estimating and modeling individual and population growth trajectories, and for determining significant regional differences in growth pattern characteristics, applied to longitudinal neuroimaging data. We use nonlinear mixed-effects modeling where temporal change is modeled by the Gompertz function. The Gompertz function uses intuitive parameters related to delay, rate of change, and expected asymptotic value, all descriptive measures that can answer clinical questions related to growth. Our proposed framework combines nonlinear modeling of individual trajectories, population analysis, and testing for regional differences. We apply this framework to the study of early maturation in white matter regions as measured with diffusion tensor imaging (DTI). Regional differences between anatomical regions of interest that are known to mature differently are analyzed and quantified. Experiments with image data from a large ongoing clinical study show that our framework provides descriptive, quantitative information on growth trajectories that can be directly interpreted by clinicians. To our knowledge, this is the first longitudinal analysis of growth functions to explain the trajectory of early brain maturation as it is represented in DTI.
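The Gompertz function referenced here has a standard three-parameter form. The sketch below (illustrative data and parameter names are our own, outside any mixed-effects machinery) fits a single trajectory with SciPy:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, asymptote, delay, rate):
    # asymptote: expected asymptotic value; delay and rate control the
    # timing and speed of growth, as described in the abstract.
    return asymptote * np.exp(-delay * np.exp(-rate * t))

t = np.linspace(0, 24, 25)  # e.g., age in months (hypothetical)
y = gompertz(t, 1.0, 3.0, 0.4) + 0.02 * np.random.default_rng(0).normal(size=t.size)

params, _ = curve_fit(gompertz, t, y, p0=[1.0, 1.0, 0.1])
print(params)  # recovered (asymptote, delay, rate)
```

In the paper's framework, each subject would get its own parameter triple drawn around population-level values; this sketch shows only the underlying curve being fitted.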
Toward a Unified Validation Framework in Mixed Methods Research
ERIC Educational Resources Information Center
Dellinger, Amy B.; Leech, Nancy L.
2007-01-01
The primary purpose of this article is to further discussions of validity in mixed methods research by introducing a validation framework to guide thinking about validity in this area. To justify the use of this framework, the authors discuss traditional terminology and validity criteria for quantitative and qualitative research, as well as…
Toward a Contingency Theory of Decision Making.
ERIC Educational Resources Information Center
Tarter, C. John; Hoy, Wayne K.
1998-01-01
There is no single best decision-making approach. This article reviews and compares six contemporary models (classical, administrative, incremental, mixed-scanning, garbage-can, and political) and develops a framework and 10 propositions to match strategies with circumstances. A contingency approach suggests that administrators use satisficing (a…
Genetic mixed linear models for twin survival data.
Ha, Il Do; Lee, Youngjo; Pawitan, Yudi
2007-07-01
Twin studies are useful for separating the relative importance of the genetic or heritable component from the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.
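In generic form (a sketch of a standard twin decomposition, not the authors' exact LTRC formulation), a genetic mixed linear model for a trait y_ij of twin j in pair i might read

$$ y_{ij} = x_{ij}^\top \beta + g_{ij} + c_i + \varepsilon_{ij}, $$

with genetic effects g correlated 1 within monozygotic and 0.5 within dizygotic pairs, a shared-environment effect c_i, and heritability obtained from the ratio of the genetic variance component to the total variance.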
Dynamic Roughness Ratio-Based Framework for Modeling Mixed Mode of Droplet Evaporation.
Gunjan, Madhu Ranjan; Raj, Rishi
2017-07-18
The spatiotemporal evolution of an evaporating sessile droplet and its effect on droplet lifetime are crucial to various disciplines of science and technology. Although experimental investigations suggest three distinct modes through which a droplet evaporates, namely the constant contact radius (CCR), the constant contact angle (CCA), and the mixed mode, only the CCR and the CCA modes have been modeled reasonably. Here we use experiments with water droplets on flat and micropillared silicon substrates to characterize the mixed mode. We show that a perfect CCA mode after the initial CCR mode is an idealization on a flat silicon substrate: the receding contact line undergoes intermittent but recurring pinning (CCR mode) as it encounters fresh contaminants on the surface. The resulting increase in roughness lowers the contact angle of the droplet during these intermittent CCR modes until the next depinning event, followed by the CCA mode of evaporation. The airborne contaminants in our experiments are mostly loosely adhered to the surface and travel along with the receding contact line. The resulting gradual increase in the apparent roughness, and hence the extent of the CCR mode over the CCA mode, forces an appreciable decrease in the contact angle observed during the mixed mode of evaporation. Unlike loosely adhered airborne contaminants on flat samples, micropillars act as fixed roughness features. The apparent roughness fluctuates about the mean value as the contact line recedes between pillars. Evaporation on these surfaces exhibits stick-jump motion, with a short-duration mixed mode toward the end when the droplet size becomes comparable to the pillar spacing. We incorporate this dynamic roughness into a classical evaporation model to accurately predict the droplet evolution throughout the three modes, for both flat and micropillared silicon surfaces. We believe that this framework can also be extended to model the evaporation of nanofluids and the coffee-ring effect, among others.
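For context, the classical diffusion-limited model into which such dynamic roughness can be incorporated typically expresses the evaporation rate of a sessile droplet as (a standard form; the paper's exact variant may differ)

$$ \frac{dm}{dt} = -\pi R D (1 - H)\, c_{\mathrm{sat}}\, f(\theta), $$

where R is the contact radius, D the vapor diffusivity in air, H the relative humidity, c_sat the saturation vapor concentration, and f(θ) a known function of the contact angle; the CCR and CCA modes correspond to holding R or θ fixed, respectively, while the mixed mode lets both evolve.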
Longo, Francesco; Notarnicola, Elisabetta; Tasselli, Stefano
2015-04-09
The mechanisms through which the relationships among public institutions, private providers and families affect care and service provision systems are puzzling. How can we understand the mechanisms in these contexts? Which elements should we explore to capture the complexity of care provision? The aim of our study is to provide a framework that can help read and reframe these puzzling care provision mechanisms in a welfare mix context. First, we develop a theoretical framework for understanding how service provision occurs in care systems characterised by a variety of relationships between multiple actors. The framework takes an evidence-based approach, examining both public and private expenditures and the number of users relative to the level of needs coverage, and comparing these with declared values and political rhetoric. Second, we test this framework in two case studies built on data from two prominent Italian regions, Lombardy and Emilia-Romagna. We argue that service provision models depend on the interplay among six conceptual elements: policy values, governance rules, resources, nature of the providers, service standards and eligibility criteria. Our empirical study shows that despite the considerable differences in values and political rhetoric between the two Italian regions, there is a surprising isomorphism in service standards and in the levels of coverage of the population's needs. The suggested framework appears to be effective and feasible; it fosters interdisciplinary approaches and supports policy-making discussions. This study may contribute to deepening knowledge about public care service provision and institutional arrangements, which can be used to promote more effective reforms and may advance future research. Although the framework was tested on the Italian welfare system, it can be used to assess many different systems.
An Open Source Simulation Model for Soil and Sediment Bioturbation
Schiffers, Katja; Teal, Lorna Rachel; Travis, Justin Mark John; Solan, Martin
2011-01-01
Bioturbation is one of the most widespread forms of ecological engineering and has significant implications for the structure and functioning of ecosystems, yet our understanding of the processes involved in biotic mixing remains incomplete. One reason is that, despite their value and utility, most mathematical models currently applied to bioturbation data tend to neglect aspects of the natural complexity of bioturbation in favour of mathematical simplicity. At the same time, the abstract nature of these approaches limits the application of such models to a limited range of users. Here, we contend that a movement towards process-based modelling can improve both the representation of the mechanistic basis of bioturbation and the intuitiveness of modelling approaches. In support of this initiative, we present an open source modelling framework that explicitly simulates particle displacement and a worked example to facilitate application and further development. The framework combines the advantages of rule-based lattice models with the application of parameterisable probability density functions to generate mixing on the lattice. Model parameters can be fitted by experimental data and describe particle displacement at the spatial and temporal scales at which bioturbation data is routinely collected. By using the same model structure across species, but generating species-specific parameters, a generic understanding of species-specific bioturbation behaviour can be achieved. An application to a case study and comparison with a commonly used model attest the predictive power of the approach. PMID:22162997
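A toy version of the rule-based lattice idea (our own minimal sketch, not the released framework): particles occupy cells of a one-dimensional sediment column and are displaced by draws from a parameterisable displacement distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
depth_cells, n_particles, n_steps = 50, 1000, 200
pos = rng.integers(0, depth_cells, size=n_particles)  # initial particle depths

# Species-specific displacement PDF: here a normal kernel whose standard
# deviation acts as a fitted mixing-intensity parameter (hypothetical value).
sigma = 1.5

for _ in range(n_steps):
    moves = rng.normal(0.0, sigma, size=n_particles).round().astype(int)
    pos = np.clip(pos + moves, 0, depth_cells - 1)  # keep particles in the column

profile = np.bincount(pos, minlength=depth_cells)  # depth profile after mixing
print(profile)
```

Swapping in a different displacement distribution, or different parameters per species, is what gives the framework its species-specific yet generic character.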
Social Relations and Resident Health in Assisted Living: An Application of the Convoy Model
ERIC Educational Resources Information Center
Perkins, Molly M.; Ball, Mary M.; Kemp, Candace L.; Hollingsworth, Carole
2013-01-01
Purpose: This article, based on analysis of data from a mixed methods study, builds on a growing body of assisted living (AL) research focusing on the link between residents' social relationships and health. A key aim of this analysis, which uses the social convoy model as a conceptual and methodological framework, was to examine the relative…
ERIC Educational Resources Information Center
Russell, Heather Gordy
2010-01-01
The mixed method study focused on increasing blood donations from staff who work in a blood collecting organization and relies on Gilbert's Behavior Engineering Model as a framework. The qualitative phase of the study involved focus groups. Information from the focus groups and the literature review were used to create hypotheses. A survey was…
Testing the Grossman model of medical spending determinants with macroeconomic panel data.
Hartwig, Jochen; Sturm, Jan-Egbert
2018-02-16
Michael Grossman's human capital model of the demand for health has been argued to be one of the major achievements in theoretical health economics. Attempts to test this model empirically have been sparse, however, and have yielded mixed results. These attempts have so far relied on (mostly cross-sectional) micro data from household surveys. For the first time in the literature, we bring in macroeconomic panel data for 29 OECD countries over the period 1970-2010 to test the model. To check the robustness of the results for the determinants of medical spending identified by the model, we include additional covariates in an extreme bounds analysis (EBA) framework. The preferred model specifications (including the robust covariates) do not lend much empirical support to the Grossman model. This is in line with the mixed results of earlier studies.
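A minimal sketch of the extreme bounds logic (variable names and data are hypothetical, not the authors' specification): refit the focal regression over every combination of doubtful covariates and record the range of the focal coefficient.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
income = rng.normal(size=n)         # focal determinant (hypothetical)
doubtful = rng.normal(size=(n, 4))  # candidate additional covariates
spending = 0.5 * income + doubtful @ np.array([0.2, 0.0, 0.1, 0.0]) + rng.normal(size=n)

bounds = []
for k in range(doubtful.shape[1] + 1):
    for subset in itertools.combinations(range(doubtful.shape[1]), k):
        cols = [income] + [doubtful[:, j] for j in subset]
        X = sm.add_constant(np.column_stack(cols))
        fit = sm.OLS(spending, X).fit()
        beta, se = fit.params[1], fit.bse[1]  # coefficient on the focal variable
        bounds.append((beta - 2 * se, beta + 2 * se))

lo = min(b[0] for b in bounds)
hi = max(b[1] for b in bounds)
print(f"extreme bounds for the income effect: [{lo:.3f}, {hi:.3f}]")
```

A determinant is judged robust when the bounds share the same sign across all specifications.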
NASA Astrophysics Data System (ADS)
Tian, Jian
With the recently-developed particle-resolved model PartMC-MOSAIC, the mixing state and other physico-chemical properties of individual aerosol particles can be tracked as the particles undergo aerosol aging processes. However, existing PartMC-MOSAIC applications have mainly been based on idealized scenarios, and a link to real atmospheric measurements has not yet been established. In this thesis, we extend the capability of PartMC-MOSAIC and apply the model framework to three distinct scenarios with different environmental conditions to investigate the physical and chemical aging of aerosols in those environments. The first study investigates the evolution of particle mixing state and cloud condensation nuclei (CCN) activation properties in a ship plume. Comparisons of our results with observations from the QUANTIFY Study in 2007 in the English Channel and the Bay of Biscay showed that the model was able to reproduce the observed evolution of total number concentration and the vanishing of the nucleation mode consisting of sulfate particles. Further process analysis revealed that during the first hour after emission, dilution reduced the total number concentration by four orders of magnitude, while coagulation reduced it by an additional order of magnitude. Neglecting coagulation resulted in an overprediction of more than one order of magnitude in the number concentration of particles smaller than 40 nm at a plume age of 100 s. Coagulation also significantly altered the mixing state of the particles, leading to a continuum of internal mixtures of sulfate and black carbon. The impact of condensation on CCN concentrations depended on the supersaturation threshold at which CCN activity was evaluated. Nucleation was observed to have a limited impact on the CCN concentration in the ship plume we studied, but was sensitive to formation rates of secondary aerosol. For the second study we adapted PartMC to represent the aerosol evolution in an aerosol chamber, with the intention of using the model as a tool to interpret and guide chamber experiments in the future. We added chamber-specific processes to our model formulation, such as wall loss due to particle diffusion and sedimentation, and dilution effects due to sampling. We also implemented a treatment of fractal particles to account for the morphology of agglomerates and its impact on aerosol dynamics. We verified the model with published results of self-similar size distributions, and validated the model using experimental data from an aerosol chamber. To this end we developed a fitting optimization approach to determine the best-estimate values for the wall loss parameters based on minimizing the l2-norm of the model errors of the number distribution. Obtaining the best fit required taking into account the non-spherical structure of the particle agglomerates. Our third study focuses on the implementation of the volatility basis set (VBS) framework in PartMC-MOSAIC to investigate the chemical aging of organic aerosols in the atmosphere. The updated PartMC-MOSAIC model framework was used to simulate the evolution of aerosols along air trajectories initialized from the CARES field campaign conducted in California in June 2010. The simulation results were compared with aircraft measurement data from the campaign. PartMC-MOSAIC was able to produce gas and aerosol concentrations at levels similar to the observational data. Moreover, the simulation with the VBS enabled consistently produced more secondary organic aerosol (SOA).
Investigation of the particle mixing state revealed that the impact of the VBS framework on the mixing state is sensitive to the daylight exposure time.
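The wall-loss fitting step lends itself to a small sketch (ours, with a deliberately simplified first-order loss model; the thesis' actual parameterization differs): choose deposition parameters that minimize the l2-norm of the error between modeled and measured number concentrations.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 3600, 60)       # seconds
n_obs = 1e4 * np.exp(-2.5e-4 * t)  # synthetic chamber decay data (hypothetical)

def residuals(params):
    k_diff, k_sed = params                         # diffusional and sedimentation loss rates
    n_model = 1e4 * np.exp(-(k_diff + k_sed) * t)  # first-order wall-loss model
    return n_model - n_obs

fit = least_squares(residuals, x0=[1e-4, 1e-4], bounds=(0, np.inf))
print(fit.x)  # best-estimate wall-loss parameters minimizing the l2-norm
```

In the thesis, the residuals would additionally depend on the size distribution and the fractal morphology of the agglomerates, which is what made the non-spherical treatment necessary.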
NASA Astrophysics Data System (ADS)
Nguyen, Tien M.; Guillen, Andy T.
2017-05-01
This paper describes static Bayesian game models with "Pure" and "Mixed" games for the development of an optimum Program and Technical Baseline (PTB) solution for affordable acquisition of future space systems. The paper discusses System Engineering (SE) frameworks and analytical and simulation modeling approaches for developing the optimum PTB solutions from both the government and contractor perspectives.
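To make the "pure" versus "mixed" distinction concrete (a generic two-player example with hypothetical payoffs, unrelated to any specific acquisition data): a mixed strategy is a probability vector over pure strategies, and expected payoffs are bilinear forms in those vectors.

```python
import numpy as np

# Payoff matrices for the government (A) and the contractor (B); entries hypothetical.
A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[2.0, 0.0], [1.0, 3.0]])

x = np.array([0.6, 0.4])  # government's mixed strategy over its two pure options
y = np.array([0.3, 0.7])  # contractor's mixed strategy

print(x @ A @ y)  # government's expected payoff under the mixed-strategy profile
print(x @ B @ y)  # contractor's expected payoff
```

A pure game restricts x and y to unit vectors; optimizing over the full simplex is what allows equilibria that no pure-strategy pair achieves.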
MATRIX-VBS Condensing Organic Aerosols in an Aerosol Microphysics Model
NASA Technical Reports Server (NTRS)
Gao, Chloe Y.; Tsigaridis, Konstas; Bauer, Susanne E.
2015-01-01
The condensation of organic aerosols is represented in a newly developed box-model scheme, and its effect on the growth and composition of particles is examined. We implemented the volatility basis set (VBS) framework into the aerosol mixing state resolving microphysical scheme Multiconfiguration Aerosol TRacker of mIXing state (MATRIX). This new scheme advances the representation of organic aerosols in models in that, contrary to the traditional treatment of organic aerosols as non-volatile in most climate models and in the original version of MATRIX, it treats them as semi-volatile. Such treatment is important because low-volatility organics contribute significantly to the growth of particles. The new scheme includes several classes of semi-volatile organic compounds from the VBS framework that can partition among aerosol populations in MATRIX, thus representing the growth of particles via condensation of low-volatility organic vapors. Results from test cases representing Mexico City and Finnish forest conditions show good representation of the time evolution of concentrations for VBS species in the gas phase and in the condensed particulate phase. Emitted semi-volatile primary organic aerosols evaporate almost completely in the high-volatility range, and they condense more efficiently in the low-volatility range.
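The semi-volatile partitioning at the heart of the VBS can be sketched in a few lines (standard equilibrium partitioning; the bin values are illustrative, not the MATRIX-VBS configuration):

```python
import numpy as np

c_star = np.array([0.1, 1.0, 10.0, 100.0])  # saturation concentrations (ug/m3) per bin
c_total = np.array([0.5, 1.0, 2.0, 4.0])    # total (gas + particle) organic mass per bin

c_oa = 1.0                            # initial guess for absorbing organic aerosol mass
for _ in range(100):                  # fixed-point iteration to partitioning equilibrium
    xi = 1.0 / (1.0 + c_star / c_oa)  # particle-phase fraction per volatility bin
    c_oa_new = np.sum(xi * c_total)
    if abs(c_oa_new - c_oa) < 1e-12:
        break
    c_oa = c_oa_new

print(c_oa, xi)  # condensed organic mass and per-bin partitioning fractions
```

The per-bin fractions xi show directly why low-volatility bins (small c_star) condense almost completely while high-volatility bins stay largely in the gas phase.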
Schaid, Daniel J
2010-01-01
Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
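The positive-semidefiniteness requirement is easy to check numerically; the sketch below uses a generic linear kernel on hypothetical 0/1/2 genotype codings (one common choice, not the only kernel the review covers).

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(20, 100)).astype(float)  # 20 subjects x 100 SNPs (0/1/2)
G -= G.mean(axis=0)                                   # center each SNP column

K = G @ G.T / G.shape[1]       # linear kernel: larger value = more similar pair
eigs = np.linalg.eigvalsh(K)   # eigenvalues of the symmetric kernel matrix
print(eigs.min() >= -1e-10)    # True: K is positive semidefinite, as required
```

Once K passes this check, it can be plugged into the methods the review lists, for example as the covariance structure of a random effect in a linear mixed model.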
Weickenmeier, J; Jabareen, M
2014-11-01
The characteristic highly nonlinear, time-dependent, and often inelastic material response of soft biological tissues can be expressed in a set of elastic-viscoplastic constitutive equations. The specific elastic-viscoplastic model for soft tissues proposed by Rubin and Bodner (2002) is generalized with respect to the constitutive equations for the scalar quantity of the rate of inelasticity and the hardening parameter, in order to represent a general framework for elastic-viscoplastic models. A strongly objective integration scheme and a new mixed finite element formulation were developed based on the introduction of the relative deformation gradient, i.e., the deformation mapping between the last converged and current configurations. The numerical implementation of both the generalized framework and the specific Rubin and Bodner model is presented. As an example of a challenging application of the new model equations, the mechanical response of facial skin tissue is characterized through an experimental campaign based on the suction method. The measurement data are used for the identification of a suitable set of model parameters that represents the experimentally observed tissue behavior well. Two different measurement protocols were defined to address specific tissue properties with respect to the instantaneous tissue response, inelasticity, and tissue recovery. Copyright © 2014 John Wiley & Sons, Ltd.
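The relative deformation gradient used in the integration scheme is the standard mapping between the last converged configuration at time t_n and the current configuration at t_{n+1}:

$$ \mathbf{F}_r = \mathbf{F}_{n+1}\,\mathbf{F}_n^{-1}, $$

so that the update formulas can be written directly in terms of F_r rather than rate quantities, which is what makes the scheme strongly objective under superposed rigid-body motions.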
Friendship Dissolution Within Social Networks Modeled Through Multilevel Event History Analysis
Dean, Danielle O.; Bauer, Daniel J.; Prinstein, Mitchell J.
2018-01-01
A social network perspective can bring important insight into the processes that shape human behavior. Longitudinal social network data, measuring relations between individuals over time, has become increasingly common—as have the methods available to analyze such data. A friendship duration model utilizing discrete-time multilevel survival analysis with a multiple membership random effect structure is developed and applied here to study the processes leading to undirected friendship dissolution within a larger social network. While the modeling framework is introduced in terms of understanding friendship dissolution, it can be used to understand microlevel dynamics of a social network more generally. These models can be fit with standard generalized linear mixed-model software, after transforming the data to a pair-period data set. An empirical example highlights how the model can be applied to understand the processes leading to friendship dissolution between high school students, and a simulation study is used to test the use of the modeling framework under representative conditions that would be found in social network data. Advantages of the modeling framework are highlighted, and potential limitations and future directions are discussed. PMID:28463022
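The data transformation mentioned at the end can be sketched with pandas (column names are hypothetical): each friendship pair contributes one row per observation period until it dissolves, which is the form a discrete-time survival model expects.

```python
import pandas as pd

# One row per friendship pair: period of dissolution (NaN = intact at study end).
pairs = pd.DataFrame({
    "pair_id": [1, 2, 3],
    "member_a": [101, 101, 102],
    "member_b": [102, 103, 104],
    "dissolved_period": [3, None, 2],
})

n_periods = 4
rows = []
for _, p in pairs.iterrows():
    last = int(p.dissolved_period) if pd.notna(p.dissolved_period) else n_periods
    for t in range(1, last + 1):
        rows.append({"pair_id": p.pair_id, "member_a": p.member_a,
                     "member_b": p.member_b, "period": t,
                     "event": int(pd.notna(p.dissolved_period) and t == last)})

pair_period = pd.DataFrame(rows)  # ready for a logistic GLMM on the 'event' column
print(pair_period)
```

The multiple-membership random effect structure then attaches a random effect to each of member_a and member_b, since every student belongs to many pairs.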
Fermion masses and mixing in general warped extra dimensional models
NASA Astrophysics Data System (ADS)
Frank, Mariana; Hamzaoui, Cherif; Pourtolami, Nima; Toharia, Manuel
2015-06-01
We analyze fermion masses and mixing in a general warped extra dimensional model, where all the Standard Model (SM) fields, including the Higgs, are allowed to propagate in the bulk. In this context, a slightly broken flavor symmetry imposed universally on all fermion fields, without distinction, can generate the full flavor structure of the SM, including quarks, charged leptons and neutrinos. For quarks and charged leptons, the exponential sensitivity of their wave functions to small flavor breaking effects yields hierarchical masses and mixing, as is usual in warped models with fermions in the bulk. In the neutrino sector, the exponential wave-function factors can be flavor blind and thus insensitive to the small flavor symmetry breaking effects, directly linking their masses and mixing angles to the flavor symmetric structure of the five-dimensional neutrino Yukawa couplings. The Higgs must be localized in the bulk, and the model is more successful in generalized warped scenarios where the metric background solution differs from five-dimensional anti-de Sitter space (AdS5). We study these features in two simple frameworks, flavor complementarity and flavor democracy, which provide specific predictions and correlations between quarks and leptons, testable as more precise data in the neutrino sector become available.
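In such warped setups, the effective 4-D mass matrices typically factorize as (a schematic form common to warped flavor models, not the paper's specific metric background)

$$ (m_f)_{ij} \sim \frac{v}{\sqrt{2}}\, Y_{ij}\, f(c_i)\, f(c_j), $$

where the Y_ij are order-one five-dimensional couplings and f(c) is the zero-mode wave-function overlap with the Higgs, exponentially sensitive to the bulk mass parameter c; this exponential sensitivity is what converts small flavor-symmetry breaking into large mass hierarchies, while flavor-blind f factors in the neutrino sector leave the symmetric Yukawa structure intact.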
Stakeholders' Views of South Korea's Higher Education Internationalization Policy
ERIC Educational Resources Information Center
Cho, Young Ha; Palmer, John D.
2013-01-01
The study investigated the stakeholders' perceptions of South Korea's higher education internationalization policy. Based on the research framework that defines four policy values--propriety, effectiveness, diversity, and engagement, the convergence model was employed with a concurrent mixed method sampling strategy to analyze the stakeholders'…
Metapopulation epidemic models with heterogeneous mixing and travel behaviour.
Apolloni, Andrea; Poletto, Chiara; Ramasco, José J; Jensen, Pablo; Colizza, Vittoria
2014-01-13
Determining the pandemic potential of an emerging infectious disease and how it depends on the various epidemic and population aspects is critical for the preparation of an adequate response aimed at its control. The complex interplay between population movements in space and non-homogeneous mixing patterns have so far hindered the fundamental understanding of the conditions for spatial invasion through a general theoretical framework. To address this issue, we present an analytical modelling approach taking into account such interplay under general conditions of mobility and interactions, in the simplifying assumption of two population classes. We describe a spatially structured population with non-homogeneous mixing and travel behaviour through a multi-host stochastic epidemic metapopulation model. Different population partitions, mixing patterns and mobility structures are considered, along with a specific application for the study of the role of age partition in the early spread of the 2009 H1N1 pandemic influenza. We provide a complete mathematical formulation of the model and derive a semi-analytical expression of the threshold condition for global invasion of an emerging infectious disease in the metapopulation system. A rich solution space is found that depends on the social partition of the population, the pattern of contacts across groups and their relative social activity, the travel attitude of each class, and the topological and traffic features of the mobility network. Reducing the activity of the less social group and reducing the cross-group mixing are predicted to be the most efficient strategies for controlling the pandemic potential in the case the less active group constitutes the majority of travellers. If instead traveling is dominated by the more social class, our model predicts the existence of an optimal across-groups mixing that maximises the pandemic potential of the disease, whereas the impact of variations in the activity of each group is less important. The proposed modelling approach introduces a theoretical framework for the study of infectious diseases spread in a population with two layers of heterogeneity relevant for the local transmission and the spatial propagation of the disease. It can be used for pandemic preparedness studies to identify adequate interventions and quantitatively estimate the corresponding required effort, as well as in an emerging epidemic situation to assess the pandemic potential of the pathogen from population and early outbreak data.
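The two-class mixing structure can be illustrated with a deterministic toy version (ours; the paper's model is a stochastic metapopulation with explicit travel): two groups coupled by a cross-group contact matrix.

```python
import numpy as np

# Contact matrix C[i, j]: average contacts a member of group i has with group j
# (hypothetical values; group 0 is the more socially active class).
C = np.array([[8.0, 2.0],
              [2.0, 4.0]])
beta, gamma, dt = 0.03, 0.2, 0.1
N = np.array([6000.0, 4000.0])
I = np.array([1.0, 1.0])
S, R = N - I, np.zeros(2)

for _ in range(5000):
    foi = beta * (C @ (I / N))  # force of infection on each group
    new_inf = foi * S * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(R / N)  # final attack rate per group; vary the off-diagonal mixing to explore
```

Varying the off-diagonal entries of C mimics the cross-group mixing whose reduction (or, in the travel-dominated regime, whose optimum) the paper analyzes.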
Sever, Ivan; Klaric, Eva; Tarle, Zrinka
2016-07-01
Dental microhardness experiments are influenced by unobserved factors related to varying tooth characteristics that affect measurement reproducibility. This paper explores the appropriate analytical tools for modeling different sources of unobserved variability to reduce the biases encountered and increase the validity of microhardness studies. The enamel microhardness of human third molars was measured with a Vickers diamond indenter. The effects of five bleaching agents (10%, 16%, and 30% carbamide peroxide; 25% and 38% hydrogen peroxide) were examined, as well as the effects of artificial saliva and amorphous calcium phosphate. To account for both between- and within-tooth heterogeneity in evaluating treatment effects, the statistical analysis was performed in the mixed-effects framework, which also included an appropriate weighting procedure to adjust for confounding. The results were compared to those of the standard ANOVA model usually applied. The weighted mixed-effects model produced parameter estimates of different magnitude and significance from those of the standard ANOVA model. The results of the former model were more intuitive, with more precise estimates and better fit. Confounding could seriously bias study outcomes, highlighting the need for more robust statistical procedures in dental research that account for measurement reliability. The presented framework is more flexible and informative than existing analytical techniques and may improve the quality of inference in dental research. Reported results could be misleading if the underlying heterogeneity of microhardness measurements is not taken into account. Confidence in treatment outcomes could be increased by applying the framework presented.
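A skeletal version of the random-effects structure described, using statsmodels (data and column names are synthetic and hypothetical; the paper's weighting procedure is omitted):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
tooth = np.repeat(np.arange(12), 10)            # 12 molars, 10 indentations each
treatment = rng.integers(0, 3, size=12)[tooth]  # hypothetical 3 treatment groups
hv = (300 + 10 * treatment                      # treatment shift
      + rng.normal(0, 15, size=12)[tooth]       # between-tooth heterogeneity
      + rng.normal(0, 5, size=tooth.size))      # within-tooth measurement noise

df = pd.DataFrame({"hv": hv, "treatment": treatment, "tooth": tooth})
model = smf.mixedlm("hv ~ C(treatment)", data=df, groups=df["tooth"])
print(model.fit().summary())
```

The random intercept per tooth absorbs the between-tooth variability that a plain ANOVA would otherwise fold into the error term, which is the source of the biased estimates the paper reports.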
Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard
2011-01-01
Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method frequently correspond to subregions of visible spots that may represent post-translational modifications or co-migrating proteins that cannot be visually resolved from adjacent, more abundant proteins on the gel image. Thus, it is possible that this image-based approach may actually improve the realized resolution of the gel, revealing differentially expressed proteins that would not have even been detected as spots by modern spot-based analyses.
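In outline, the functional mixed model treats each image as a function Y_i(t) on its (two-dimensional) domain t (a schematic of the model class; details as in the paper's wavelet-based special case):

$$ Y_i(t) = \sum_{a=1}^{p} X_{ia} B_a(t) + \sum_{b=1}^{m} Z_{ib} U_b(t) + E_i(t), $$

with fixed-effect functions B_a(t) describing how each factor shifts intensities across the image, random-effect functions U_b(t) inducing the design-driven correlation between images, and an isomorphic transform (e.g., wavelets) making estimation and adaptive smoothing tractable.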
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valocchi, Albert; Werth, Charles; Liu, Wen-Tso
Bioreduction is being actively investigated as an effective strategy for subsurface remediation and long-term management of DOE sites contaminated by metals and radionuclides (i.e. U(VI)). These strategies require manipulation of the subsurface, usually through injection of chemicals (e.g., electron donor) which mix at varying scales with the contaminant to stimulate metal reducing bacteria. There is evidence from DOE field experiments suggesting that mixing limitations of substrates at all scales may affect biological growth and activity for U(VI) reduction. Although current conceptual models hold that biomass growth and reduction activity is limited by physical mixing processes, a growing body of literature suggests that reaction could be enhanced by cell-to-cell interaction occurring over length scales extending tens to thousands of microns. Our project investigated two potential mechanisms of enhanced electron transfer. The first is the formation of single- or multiple-species biofilms that transport electrons via direct electrical connection such as conductive pili (i.e. 'nanowires') through biofilms to where the electron acceptor is available. The second is through diffusion of electron carriers from syntrophic bacteria to dissimilatory metal reducing bacteria (DMRB). The specific objectives of this work are (i) to quantify the extent and rate at which electrons are transported between microorganisms in physical mixing zones between an electron donor and an electron acceptor (e.g. U(VI)), (ii) to quantify the extent to which biomass growth and reaction are enhanced by interspecies electron transport, and (iii) to integrate mixing across scales (e.g., the microscopic scale of electron transfer and the macroscopic scale of diffusion) in an integrated numerical model to quantify the effects of these mechanisms on overall U(VI) reduction rates. We tested these hypotheses with five tasks that integrate microbiological experiments, unique microfluidic experiments, flow cell experiments, and multi-scale numerical models. Continuous fed-batch reactors were used to derive kinetic parameters for DMRB, and to develop an enrichment culture for elucidation of syntrophic relationships in a complex microbial community. Pore and continuum scale experiments using microfluidic and benchtop flow cells were used to evaluate the impact of cell-to-cell and microbial interactions on reaction enhancement in mixing-limited bioactive zones, and the mechanisms of this interaction. Some of the microfluidic experiments were used to develop and test models that consider direct cell-to-cell interactions during metal reduction. Pore scale models were incorporated into a multi-scale hybrid modeling framework that combines pore scale modeling at the reaction interface with continuum scale modeling. New computational frameworks for combining continuum and pore-scale models were also developed.
Structure of 98Ru in the IBA-2 interacting boson model
NASA Astrophysics Data System (ADS)
Giannatiempo, A.
2017-10-01
The structure of 98Ru has been investigated in the framework of the IBA-2 model, extending previous works to exploit the whole set of new spectroscopic data. The occurrence of states of mixed symmetry character in the proton and neutron degrees of freedom is of prominent importance in elucidating the structure of this nucleus, which displays features close to those of the Uπ,ν(5) limit of the model.
Boland, Mary Regina; Rusanov, Alexander; So, Yat; Lopez-Jimenez, Carlos; Busacca, Linda; Steinman, Richard C; Bakken, Suzanne; Bigger, J Thomas; Weng, Chunhua
2014-12-01
Underspecified user needs and the frequent lack of a gold standard reference are typical barriers to technology evaluation. To address this problem, this paper presents a two-phase evaluation framework involving usability experts (phase 1) and end-users (phase 2). In phase 1, a cross-system functionality alignment between expert-derived user needs and system functions was performed to inform the choice of "the best available" comparison system, enabling a cognitive walkthrough in phase 1 and a comparative effectiveness evaluation in phase 2. During phase 2, five quantitative and qualitative evaluation methods are mixed to assess usability: time-motion analysis, software logs, questionnaires (the System Usability Scale and the Unified Theory of Acceptance and Use of Technology), think-aloud protocols, and unstructured interviews. Each method contributes data for a unique measure (e.g., time-motion analysis contributes task completion time; software logs contribute action transition frequency). The measures are triangulated to yield complementary insights regarding user-perceived ease-of-use, functionality integration, anxiety during use, and workflow impact. To illustrate its use, we applied this framework in a formative evaluation of a software system called Integrated Model for Patient Care and Clinical Trials (IMPACT). We conclude that this mixed-methods evaluation framework enables an integrated assessment of user needs satisfaction and user-perceived usefulness and usability of a novel design. This evaluation framework effectively bridges the gap between co-evolving user needs and technology designs during iterative prototyping and is particularly useful when it is difficult for users to articulate their needs for technology support due to the lack of a baseline. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Rahbarimanesh, Saeed; Brinkerhoff, Joshua
2017-11-01
The mutual interaction of shear layer instabilities and phase change in a two-dimensional cryogenic cavitating mixing layer is investigated using a numerical model. The developed model employs the homogeneous equilibrium mixture (HEM) approach in a density-based framework to compute the temperature-dependent cavitation field for liquefied natural gas (LNG). Thermal and baroclinic effects are captured via an iterative coupled solution of the governing equations with dynamic thermophysical models that accurately represent the properties of LNG. The mixing layer is simulated for vorticity-thickness Reynolds numbers of 44 to 215 and cavitation numbers of 0.1 to 1.1. Attached cavity structures develop on the splitter plate, followed by roll-up of the separated shear layer via the well-known Kelvin-Helmholtz mode, leading to streamwise accumulation of vorticity and eventual shedding of discrete vortices. Cavitation occurs as vapor cavities nucleate and grow from the low-pressure cores in the rolled-up vortices. Thermal effects and baroclinic vorticity production are found to have significant impacts on the mixing layer instability and cavitation processes.
Presence within a mixed reality environment.
van Schaik, Paul; Turnbull, Triece; van Wersch, Anna; Drummond, Sarah
2004-10-01
Mixed reality environments represent a new approach to creating technology-mediated experiences. However, there is a lack of empirical research investigating users' actual experience. The aim of the current exploratory, non-experimental study was to establish levels of and identify factors associated with presence, within the framework of Schubert et al.'s model of presence. Using questionnaire and interview methods, the experience of the final performance of the Desert Rain mixed reality environment was investigated. Levels of general and spatial presence were relatively high, but levels of involvement and realness were not. Overall, intrinsic motivation, confidence and intention to re-visit Desert Rain were high. However, age was negatively associated with both spatial presence and confidence to play. Furthermore, various problems in navigating the environment were identified. Results are discussed in terms of Schubert's model and other theoretical perspectives. Implications for system design are presented.
Estimation of evaporation from equilibrium diurnal boundary layer humidity
NASA Astrophysics Data System (ADS)
Salvucci, G.; Rigden, A. J.; Li, D.; Gentine, P.
2017-12-01
Simplified conceptual models of the convective boundary layer as a well-mixed profile of potential temperature (theta) and specific humidity (q) impinging on an initially stably stratified linear potential temperature profile have a long history in atmospheric sciences. These one-dimensional representations of complex mixing are useful for gaining insights into land-atmosphere interactions and for prediction when state-of-the-art LES approaches are infeasible. As previously shown (e.g. by Betts), if one neglects the role of q in buoyancy, the framework yields a unique relation between mixed-layer theta, mixed-layer height (h), and cumulative sensible heat flux (SH) throughout the day. Similarly, assuming an analogous initial q profile yields a simple relation between q, h, and cumulative latent heat flux (LH). The diurnal dynamics of theta and q are strongly dependent on SH and the initial lapse rates of theta (gamma_theta) and q (gamma_q). In the estimation method proposed here, we further constrain these relations with two more assumptions: 1) the specific humidity is the same at the start of the period of boundary layer growth and at its collapse; and 2) once the mixed layer reaches the LCL, further drying occurs proportionally to the Deardorff convective velocity scale (omega) multiplied by q. Assumption (1) is based on the idea that below the cloud layer there are no sinks of moisture within the mixed layer (neglecting lateral humidity divergence); thus the net mixing of dry air aloft with evaporation from the surface must balance. Inclusion of the simple model of moisture loss above the LCL into the bulk-CBL model allows definition of an equilibrium humidity (q) condition at which the diurnal cycle of q repeats (i.e. additions of q from the surface balance entrainment of dry air from above). Notably, this framework allows estimation of LH from q, theta, and estimated net radiation by solving for the value of evaporative fraction (EF) for which the diurnal cycle of q repeats. Three parameters need specification: cloud area fraction, entrainment factor, and morning lapse rate. Surprisingly, a single set of values for these parameters is adequate to estimate EF at over 70 tested AmeriFlux sites to within about 20%, though improvements are gained using a single regression model for gamma_theta that has been fitted to radiosonde data.
The Esophagiome: concept, status, and future perspectives.
Gregersen, Hans; Liao, Donghua; Brasseur, James G
2016-09-01
The term "Esophagiome" is meant to imply a holistic, multiscale treatment of esophageal function from cellular and muscle physiology to the mechanical responses that transport and mix fluid contents. The development and application of multiscale mathematical models of esophageal function are central to the Esophagiome concept. These model elements underlie the development of a "virtual esophagus" modeling framework to characterize and analyze function and disease by quantitatively contrasting normal and pathophysiological function. Functional models incorporate anatomical details with sensory-motor properties and functional responses, especially related to biomechanical functions, such as bolus transport and gastrointestinal fluid mixing. This brief review provides insight into Esophagiome research. Future advanced models can provide predictive evaluations of the therapeutic consequences of surgical and endoscopic treatments and will aim to facilitate clinical diagnostics and treatment. © 2016 New York Academy of Sciences.
NASA Technical Reports Server (NTRS)
Plitau, Denis; Prasad, Narasimha S.
2012-01-01
The Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) mission recommended by the NRC Decadal Survey has a desired accuracy of 0.3% in carbon dioxide mixing ratio (XCO2) retrievals, requiring careful selection and optimization of the instrument parameters. NASA Langley Research Center (LaRC) is investigating the 1.57 micron carbon dioxide band as well as the 1.26-1.27 micron oxygen bands for our proposed ASCENDS mission requirements investigation. Simulation studies are underway for these bands to select optimum instrument parameters. The simulations are based on a multi-wavelength lidar modeling framework being developed at NASA LaRC to predict the performance of CO2 and O2 sensing from space and airborne platforms. The modeling framework consists of a lidar simulation module and a line-by-line calculation component with interchangeable lineshape routines to test the performance of alternative lineshape models in the simulations. As an option, the line-by-line radiative transfer model (LBLRTM) program may also be used for line-by-line calculations. The modeling framework is being used to perform error analysis, establish optimum measurement wavelengths, and identify the best lineshape models to be used in CO2 and O2 retrievals. Several additional programs for HITRAN database management and related simulations are planned for inclusion in the framework. A description of the modeling framework with selected results of the simulation studies for CO2 and O2 sensing is presented in this paper.
Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.
Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine
2010-09-01
Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.
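Schematically (in our notation, which simplifies the state-specific random-effect structure), the observation at time t for individual i combines the three components as

$$ y_{it} = x_{it}^\top \beta_{S_{it}} + \xi_i + \varepsilon_{it}, $$

where S_it is the state of an underlying semi-Markov chain encoding the growth phase (endogenous component), the climatic covariates x enter as phase-specific fixed effects (environmental component), and ξ_i is a random individual effect (individual component).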
NASA Astrophysics Data System (ADS)
Dolan, B.; Rutledge, S. A.; Barnum, J. I.; Matsui, T.; Tao, W. K.; Iguchi, T.
2017-12-01
POLarimetric Radar Retrieval and Instrument Simulator (POLARRIS) is a framework that has been developed to simulate radar observations from cloud-resolving model (CRM) output and to subject model data and observations to the same retrievals, analysis and visualization. This framework not only enables validation of bulk microphysical model-simulated properties, but also offers an opportunity to study the uncertainties associated with retrievals such as hydrometeor identification (HID). For the CSU HID, membership beta functions (MBFs) are built using a set of simulations with realistic microphysical assumptions about axis ratio, density, canting angles, and size distributions for each of ten hydrometeor species. These assumptions are tested using POLARRIS to understand their influence on the resulting simulated polarimetric data and the final HID classification. Several of these parameters (density, size distributions) are set by the model microphysics, and therefore the specific assumptions of axis ratio and canting angle are carefully studied. Through these sensitivity studies, we hope to be able to provide uncertainties in retrieved polarimetric variables and HID as applied to CRM output. HID retrievals assign a classification to each point by determining the highest score, thereby identifying the dominant hydrometeor type within a volume. However, in nature there is rarely just a single hydrometeor type at a particular point. Models allow for mixing ratios of different hydrometeors within a grid point. We use the mixing ratios from CRM output in concert with the HID scores and classifications to understand how the HID algorithm can provide information about mixtures within a volume, as well as to calculate a confidence in the classifications. We leverage the POLARRIS framework to additionally probe radar wavelength differences toward the possibility of a multi-wavelength HID, which could utilize the strengths of different wavelengths to improve HID classifications. With these uncertainties and algorithm improvements, cases of convection are studied in a continental (Oklahoma) and a maritime (Darwin, Australia) regime. Observations from C-band polarimetric data in both locations are compared to CRM simulations from NU-WRF using the POLARRIS framework.
Effects of Transition-Metal Mixing on Na Ordering and Kinetics in Layered P2 Oxides
NASA Astrophysics Data System (ADS)
Zheng, Chen; Radhakrishnan, Balachandran; Chu, Iek-Heng; Wang, Zhenbin; Ong, Shyue Ping
2017-06-01
Layered P2 oxides are promising cathode materials for rechargeable sodium-ion batteries. In this work, we systematically investigate the effects of transition-metal (TM) mixing on Na ordering and kinetics in the NaxCo1-yMnyO2 model system using density-functional-theory (DFT) calculations. The DFT-predicted 0 K stability diagrams indicate that Co-Mn mixing reduces the energetic differences between Na orderings, which may account for the reduction in the number of phase transformations observed during the cycling of mixed-TM P2 layered oxides compared to a single TM. Using ab initio molecular-dynamics simulations and nudged elastic-band calculations, we show that the TM composition at the Na(1) (face-sharing) site has a strong influence on the Na site energies, which in turn impacts the kinetics of Na diffusion towards the end of the charge. By employing a site-percolation model, we establish theoretical upper and lower bounds on TM concentrations based on their effect on Na(1) site energies, providing a framework to rationally tune mixed-TM compositions for optimal Na diffusion.
NASA Astrophysics Data System (ADS)
Yang, Kai; Chen, Xiangguang; Wang, Li; Jin, Huaiping
2017-01-01
In the rubber mixing process, the key quality parameter (Mooney viscosity), used to evaluate the property of the product, can only be measured offline with a 4-6 h delay. An online estimate of this parameter would therefore be of considerable value to industry. Various data-driven soft sensors have been applied to prediction in rubber mixing, but they often perform poorly because of the multiphase and nonlinear character of the process. The purpose of this paper is to develop an efficient soft-sensing algorithm that addresses these difficulties. Based on the proposed GMMD local sample selection criterion, phase information is extracted during local modeling. Using Gaussian local modeling within a just-in-time (JIT) learning framework, the nonlinearity of the process is handled well. The efficiency of the new method is verified by comparing its performance with various mainstream soft sensors on samples from a real industrial rubber mixing process.
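The just-in-time idea above is easy to sketch: for each query sample, select the most similar historical samples and fit a local Gaussian model on the fly. In the Python sketch below, plain Euclidean distance stands in for the paper's GMMD selection criterion, the data are synthetic, and scikit-learn's Gaussian process regressor plays the role of the local Gaussian model.

```python
# Minimal just-in-time (JIT) soft-sensor sketch: select similar historical
# samples per query and fit a local Gaussian process. Euclidean similarity is a
# stand-in for the GMMD criterion; data are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X_hist = rng.uniform(0, 1, size=(500, 3))      # past process variables
y_hist = np.sin(3 * X_hist[:, 0]) + X_hist[:, 1] ** 2 \
         + 0.05 * rng.standard_normal(500)     # past quality measurements

def jit_predict(x_query, k=40):
    d = np.linalg.norm(X_hist - x_query, axis=1)   # similarity criterion
    idx = np.argsort(d)[:k]                        # local sample selection
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(X_hist[idx], y_hist[idx])               # local Gaussian model
    return gp.predict(x_query[None, :])[0]

print(jit_predict(np.array([0.4, 0.6, 0.1])))
```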
Large field inflation from axion mixing
NASA Astrophysics Data System (ADS)
Shiu, Gary; Staessens, Wieland; Ye, Fang
2015-06-01
We study general multi-axion systems, focusing on the possibility of large-field inflation driven by axions. We find that through axion mixing, from a non-diagonal metric on the moduli space and/or from Stückelberg coupling to a U(1) gauge field, an effectively super-Planckian decay constant can be generated without the need for "alignment" in the axion decay constants. We also investigate the consistency conditions related to the gauge symmetries in multi-axion systems, such as vanishing gauge anomalies and the potential presence of generalized Chern-Simons terms. Our scenario applies generally to field theory models whose axion periodicities are intrinsically sub-Planckian, but it is most naturally realized in string theory. The types of axion mixing invoked in our scenario appear quite commonly in D-brane models, and we present an implementation in type II superstring theory. Explicit stringy models exhibiting all the characteristics of our ideas are constructed within the frameworks of Type IIA intersecting D6-brane models and Type IIB intersecting D7-brane models on Swiss-Cheese Calabi-Yau orientifolds.
Bleka, Øyvind; Storvik, Geir; Gill, Peter
2016-03-01
We have released software named EuroForMix to analyze STR DNA profiles in a user-friendly graphical user interface. The software implements a model that explains allelic peak height on a continuous scale in order to carry out weight-of-evidence calculations for profiles that could be from a mixture of contributors. Through a properly parameterized model we are able to do inference on mixture proportions, peak-height properties, stutter proportion, and degradation. In addition, EuroForMix includes models for allele drop-out, allele drop-in, and sub-population structure. EuroForMix supports two inference approaches for likelihood ratio calculations. The first approach uses maximum likelihood estimation of the unknown parameters. The second approach is Bayesian, requiring prior distributions to be specified for the parameters involved. The user may specify any number of known and unknown contributors in the model; however, we find that there is a practical computing-time limit which restricts the model to a maximum of four unknown contributors. EuroForMix is the first freely available open-source continuous model (accommodating peak height, stutter, drop-in, drop-out, population substructure and degradation) to be reported in the literature. It therefore serves an important purpose as an unrestricted platform for comparing the different solutions that are available. The implementation of the continuous model used in the software showed close-to-identical results to the R package DNAmixtures, which requires a HUGIN Expert license to be used. An additional feature of EuroForMix is the ability for the user to adapt the Bayesian inference framework by incorporating their own prior information. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
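A toy version of the maximum-likelihood LR approach can clarify the mechanics: the likelihood of the observed peak heights is maximized over the mixture proportion under the prosecution and defence hypotheses, and the ratio of the two maxima is reported. The Gaussian peak-height model, the allele-sharing patterns, and all numbers below are didactic placeholders, not EuroForMix's actual model.

```python
# Toy maximum-likelihood LR: peak heights modeled as Gaussian around
# contribution-weighted means; the mixture proportion w is profiled out under
# each hypothesis. Didactic stand-in, not EuroForMix's peak-height model.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

heights = np.array([900.0, 300.0, 850.0, 350.0])  # observed allele peak heights
mu_total = 1200.0                                  # assumed total template signal

def neg_loglik(w, contribs):
    # contribs[i]: per-allele indicator of which contributor carries the allele
    expected = mu_total * (w * contribs[0] + (1 - w) * contribs[1])
    return -norm.logpdf(heights, loc=expected, scale=80.0).sum()

# Hp: suspect + unknown; Hd: two unknowns (toy allele-sharing patterns)
hp = (np.array([1, 0, 1, 0]), np.array([0, 1, 0, 1]))
hd = (np.array([1, 0, 0, 1]), np.array([0, 1, 1, 0]))
ll = {}
for name, contribs in (("Hp", hp), ("Hd", hd)):
    res = minimize_scalar(neg_loglik, bounds=(0.01, 0.99), method="bounded",
                          args=(contribs,))
    ll[name] = -res.fun                            # maximized log-likelihood
print("log10 LR =", (ll["Hp"] - ll["Hd"]) / np.log(10))
```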
Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara
2017-01-01
In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and to detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distributions. In this research, we first establish a Bayesian joint model that accounts for all of these data features simultaneously within the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.
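For readers unfamiliar with the quantile-regression backbone of these models, the short Python sketch below fits the fixed-effects part at several quantiles with statsmodels' QuantReg on synthetic heavy-tailed data; the Bayesian mixed-effects machinery, left-censoring, and measurement-error layers of the paper are omitted.

```python
# Sketch of quantile regression (minimizing the check loss, equivalently the
# asymmetric Laplace working likelihood) on synthetic heavy-tailed data. The
# variable names are illustrative; only the fixed-effects part is shown.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
cd4 = rng.uniform(100, 800, size=300)
viral_load = 6.0 - 0.004 * cd4 + rng.standard_t(3, size=300)  # heavy tails

X = sm.add_constant(cd4)
for tau in (0.25, 0.5, 0.75):
    fit = sm.QuantReg(viral_load, X).fit(q=tau)
    print(f"tau={tau}: intercept={fit.params[0]:.2f}, CD4 slope={fit.params[1]:.4f}")
```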
What can we learn from international comparisons of health systems and health system reform?
McPake, B.; Mills, A.
2000-01-01
Most commonly, lessons derived from comparisons of international health sector reform can only be generalized in a limited way to similar countries. However, there is little guidance as to what constitutes "similarity" in this respect. We propose that a framework for assessing similarity could be derived from the performance of individual policies in different contexts, and from the cause and effect processes related to the policies. We demonstrate this process by considering research evidence in the "public-private mix", and propose variables for an initial framework that we believe determine private involvement in the public health sector. The most influential model of public leadership places the private role in a contracting framework. Research in countries that have adopted this model suggests an additional list of variables to add to the framework. The variables can be grouped under the headings "demand factors", "supply factors", and "strength of the public sector". These illustrate the nature of a framework that could emerge, and which would help countries aiming to learn from international experience. PMID:10916918
Martins Pereira, Sandra; de Sá Brandão, Patrícia Joana; Araújo, Joana; Carvalho, Ana Sofia
2017-01-01
Introduction: Antimicrobial resistance (AMR) is a challenging global and public health issue, raising bioethical challenges, considerations and strategies. Objectives: This research protocol presents a conceptual model leading to formulating an empirically based bioethics framework for antibiotic use, AMR and designing ethically robust strategies to protect human health. Methods: Mixed methods research will be used and operationalized into five substudies. The bioethical framework will encompass and integrate two theoretical models: global bioethics and ethical decision-making. Results: Being a study protocol, this article reports on planned and ongoing research. Conclusions: Based on data collection, future findings and using a comprehensive, integrative, evidence-based approach, a step-by-step bioethical framework will be developed for (i) responsible use of antibiotics in healthcare and (ii) design of strategies to decrease AMR. This will entail the analysis and interpretation of approaches from several bioethical theories, including deontological and consequentialist approaches, and the implications of uncertainty for these approaches. PMID:28459355
Nicod, Elena; Kanavos, Panos
2016-01-01
Health Technology Assessment (HTA) often results in different coverage recommendations across countries for the same medicine, despite similar methodological approaches. This paper develops and pilots a methodological framework that systematically identifies the reasons for these differences, using an exploratory sequential mixed-methods research design. The study countries were England, Scotland, Sweden and France. The methodological framework was built around three stages of the HTA process: (a) the evidence, (b) its interpretation, and (c) its influence on the final recommendation; it was applied to two orphan medicinal products. The criteria accounted for at each stage were qualitatively analyzed through thematic analysis. In piloting the framework for two medicines, eight trials, 43 clinical endpoints and seven economic models were coded 155 times. Eighteen different uncertainties about this evidence were coded 28 times, 56% of which pertained to evidence commonly appraised and 44% to evidence considered by only some agencies. The poor agreement in interpreting this evidence (κ = 0.183) was partly explained by stakeholder input (n_s = 48 times), or by agency-specific risk (n_u = 28 uncertainties) and value preferences (n_oc = 62 "other considerations"), derived through correspondence analysis. Accounting for variability at each stage of the process can be achieved by codifying its existence and quantifying its impact through the application of this framework. The transferability of this framework to other disease areas, medicines and countries is ensured by its iterative and flexible nature, and by its detailed description. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
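The core computational trick, profiling out the linear parameters analytically inside a Monte Carlo loop over the non-linear ones, can be shown in a few lines. The Python sketch below uses a toy forward model d = b·exp(-x/L) with a linear amplitude b and a non-linear decay length L; the multiple data weights, priors, and regularization of the full framework are omitted.

```python
# Minimal sketch of the mixed linear-non-linear idea: condition on the
# non-linear parameter L, solve the linear amplitude b analytically by least
# squares, and sample L with Metropolis. Toy model, not a geodetic inversion.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
d_obs = 2.0 * np.exp(-x / 3.0) + 0.05 * rng.standard_normal(x.size)
sigma = 0.05

def profile_loglik(L):
    g = np.exp(-x / L)                 # design vector for the linear part
    b = g @ d_obs / (g @ g)            # analytic least-squares amplitude
    r = d_obs - b * g
    return -0.5 * np.sum((r / sigma) ** 2)

L_cur = 5.0
ll_cur = profile_loglik(L_cur)
samples = []
for _ in range(5000):                  # Metropolis over the non-linear L only
    L_prop = L_cur + 0.3 * rng.standard_normal()
    if L_prop > 0:
        ll_prop = profile_loglik(L_prop)
        if np.log(rng.random()) < ll_prop - ll_cur:
            L_cur, ll_cur = L_prop, ll_prop
    samples.append(L_cur)
print("posterior mean decay length:", np.mean(samples[1000:]))
```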
Five-Year Research and Development Plan, Fiscal Years 2008-2013
2008-08-01
protect our interior 1.1.1 Deploy a mix of infrastructure, technology, and personnel on the Southwest border to ensure all illegal activity along... requirements into a systems model. FY 2009: • Review the System of Systems model and ensure it correctly addresses SBI requirements. FY 2010... on a ship). This security architecture provides the framework within which DHS will incorporate their near-term CSD and future container security
Ju, Jin Hyun; Shenoy, Sushila A; Crystal, Ronald G; Mezey, Jason G
2017-05-01
Genome-wide expression Quantitative Trait Loci (eQTL) studies in humans have provided numerous insights into the genetics of both gene expression and complex diseases. While the majority of eQTL identified in genome-wide analyses impact a single gene, eQTL that impact many genes are particularly valuable for network modeling and disease analysis. To enable the identification of such broad impact eQTL, we introduce CONFETI: Confounding Factor Estimation Through Independent component analysis. CONFETI is designed to address two conflicting issues when searching for broad impact eQTL: the need to account for non-genetic confounding factors that can lower the power of the analysis or produce broad impact eQTL false positives, and the tendency of methods that account for confounding factors to model broad impact eQTL as non-genetic variation. The key advance of the CONFETI framework is the use of Independent Component Analysis (ICA) to identify variation likely caused by broad impact eQTL when constructing the sample covariance matrix used for the random effect in a mixed model. We show that CONFETI has better performance than other mixed model confounding factor methods when considering broad impact eQTL recovery from synthetic data. We also used the CONFETI framework and these same confounding factor methods to identify eQTL that replicate between matched twin pair datasets in the Multiple Tissue Human Expression Resource (MuTHER), the Depression Genes Networks study (DGN), the Netherlands Study of Depression and Anxiety (NESDA), and multiple tissue types in the Genotype-Tissue Expression (GTEx) consortium. These analyses identified both cis-eQTL and trans-eQTL impacting individual genes, and CONFETI had better or comparable performance to other mixed model confounding factor analysis methods when identifying such eQTL. In these analyses, we were able to identify and replicate a few broad impact eQTL although the overall number was small even when applying CONFETI. In light of these results, we discuss the broad impact eQTL that have been previously reported from the analysis of human data and suggest that considerable caution should be exercised when making biological inferences based on these reported eQTL.
An Examination of First-to Second-Year Persistence of First-Generation College Students
ERIC Educational Resources Information Center
Guyer, Kimberly Denise
2013-01-01
This dissertation uses a mixed-methods design to examine persistence into the second year by students' parental education level. The institution selected for this dissertation is Temple University, a large, urban, public university in the Northeast. Using Tinto's (1993) model of student departure as a conceptual framework, the quantitative…
The calculation of aerosol optical properties from aerosol mass is a process subject to uncertainty related to necessary assumptions on the treatment of the chemical species mixing state, density, refractive index, and hygroscopic growth. In the framework of the AQMEII-2 model in...
Designing a Leadership Legacy (L2) Framework
ERIC Educational Resources Information Center
Fierke, Kerry K.
2015-01-01
What does it mean to leave a "leadership legacy" in the organizations and communities in which we are involved? This mixed-methods research project will explore the stories of successful individuals who have left a leadership legacy. Specifically in this article, the preliminary research will share various components of a model to create…
NASA Astrophysics Data System (ADS)
Sund, Nicole L.; Porta, Giovanni M.; Bolster, Diogo
2017-05-01
The Spatial Markov Model (SMM) is an upscaled model that has been used successfully to predict effective mean transport across a broad range of hydrologic settings. Here we propose a novel variant of the SMM, applicable to spatially periodic systems, built using particle trajectories rather than travel times. By applying the proposed SMM to a simple benchmark problem, we demonstrate that it can predict mean effective transport when compared to data from fully resolved direct numerical simulations. Next we propose a methodology for using this SMM framework to predict measures of mixing and dilution that do not depend solely on mean concentrations but are strongly impacted by pore-scale concentration fluctuations. We use information from particle trajectories to downscale and reconstruct approximate pore-scale concentration fields, from which mixing and dilution measures are then calculated. The comparison between measurements from fully resolved simulations and predictions with the SMM agrees very favorably.
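A minimal trajectory-based SMM can be sketched as follows: bin velocities into classes, estimate a class-to-class transition matrix from successive fixed-distance steps, and generate upscaled arrival times by sampling the resulting Markov chain. The synthetic correlated velocity series below stands in for trajectories extracted from resolved simulations, and no mixing downscaling is attempted.

```python
# Sketch of a Spatial Markov Model: velocities binned into classes, transition
# matrix estimated between successive fixed-distance steps, upscaled arrival
# times generated by sampling class-to-class hops. Data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_steps, n_classes = 20000, 8
v = np.empty(n_steps)
v[0] = 1.0
for i in range(1, n_steps):                      # correlated log-velocity series
    v[i] = np.exp(0.9 * np.log(v[i - 1]) + 0.3 * rng.standard_normal())

edges = np.quantile(v, np.linspace(0, 1, n_classes + 1))
cls = np.clip(np.searchsorted(edges, v) - 1, 0, n_classes - 1)
T = np.zeros((n_classes, n_classes))
for a, b in zip(cls[:-1], cls[1:]):              # estimate transition matrix
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

# Upscaled prediction: one arrival time over 100 steps of unit length
means = np.array([v[cls == k].mean() for k in range(n_classes)])
state, t = rng.integers(n_classes), 0.0
for _ in range(100):
    t += 1.0 / means[state]                      # time to cross one step
    state = rng.choice(n_classes, p=T[state])    # Markov hop to next class
print("one upscaled arrival time:", t)
```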
Linking deep convection and phytoplankton blooms in the northern Labrador Sea in a changing climate.
Balaguru, Karthik; Doney, Scott C; Bianucci, Laura; Rasch, Philip J; Leung, L Ruby; Yoon, Jin-Ho; Lima, Ivan D
2018-01-01
Wintertime convective mixing plays a pivotal role in the sub-polar North Atlantic spring phytoplankton blooms by favoring phytoplankton survival in the competition between light-dependent production and losses due to grazing and gravitational settling. We use satellite and ocean reanalyses to show that the area-averaged maximum winter mixed layer depth is positively correlated with April chlorophyll concentration in the northern Labrador Sea. A simple theoretical framework is developed to understand the relative roles of winter/spring convection and gravitational sedimentation in spring blooms in this region. Combining climate model simulations that project a weakening of wintertime Labrador Sea convection from Arctic sea ice melt with our framework suggests a potentially significant reduction in the initial fall phytoplankton population that survive the winter to seed the region's spring bloom by the end of the 21st century.
Chen, Innie; Money, Deborah; Yong, Paul; Williams, Christina; Allaire, Catherine
2015-09-01
Chronic pelvic pain (CPP) is a prevalent, debilitating, and costly condition. Although national guidelines and empiric evidence support the use of a multidisciplinary model of care for such patients, such clinics are uncommon in Canada. The BC Women's Centre for Pelvic Pain and Endometriosis was created to respond to this need, and there is interest in this model of care's impact on the burden of disease in British Columbia. We sought to create an approach to its evaluation using the RE-AIM (Reach, Efficacy, Adoption, Implementation, Maintenance) evaluation framework to assess the impact of the care model and to guide clinical decision-making and policy. The RE-AIM evaluation framework was applied to consider the different dimensions of impact of the BC Centre. The proposed measures, data sources, and data management strategies for this mixed-methods approach were identified. The five dimensions of impact were considered at individual and organizational levels, and corresponding indicators were proposed to enable integration into existing data infrastructure to facilitate collection and early program evaluation. The RE-AIM framework can be applied to the evaluation of a multidisciplinary chronic pelvic pain clinic. This will allow better assessment of the impact of innovative models of care for women with chronic pelvic pain.
Row, Jeffrey R.; Knick, Steven T.; Oyler-McCance, Sara J.; Lougheed, Stephen C.; Fedy, Bradley C.
2017-01-01
Dispersal can impact population dynamics and geographic variation, and thus genetic approaches that can establish which landscape factors influence population connectivity have ecological and evolutionary importance. Mixed models that account for the error structure of pairwise datasets are increasingly used to compare models relating genetic differentiation to pairwise measures of landscape resistance. A model selection framework based on information-criterion metrics or explained variance may help disentangle the ecological and landscape factors influencing genetic structure, yet there is currently no consensus on the best protocols. Here, we develop landscape-directed simulations and test a series of replicates that emulate independent empirical datasets of two species with different life history characteristics (greater sage-grouse; eastern foxsnake). We determined that in our simulated scenarios, AIC and BIC were the best model selection indices, and that marginal R2 values were biased toward more complex models. The model coefficients for landscape variables generally reflected the underlying dispersal model, with confidence intervals that did not overlap with zero across the entire model set. When we controlled for geographic distance, variables not in the underlying dispersal models (i.e., non-true) typically overlapped zero. Our study helps establish methods for using linear mixed models to identify the features underlying patterns of dispersal across a variety of landscapes.
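The information-criterion comparison at the heart of this framework reduces to simple arithmetic once each candidate model's maximized log-likelihood is in hand. The Python sketch below computes AIC and BIC for a toy candidate set; the log-likelihoods, parameter counts, and sample size are illustrative placeholders for values that would come from fitted pairwise mixed models.

```python
# Hedged sketch of information-criterion selection over candidate resistance
# models. The log-likelihoods below are made-up placeholders; in practice they
# would come from fitted pairwise mixed models.
import numpy as np

candidates = {                     # name: (maximized log-likelihood, n_params)
    "distance_only":         (-512.4, 3),
    "dist+landcover":        (-498.7, 4),
    "dist+landcover+roads":  (-498.1, 5),
}
n = 435                            # number of pairwise observations (toy)

for name, (llf, k) in candidates.items():
    aic = 2 * k - 2 * llf          # AIC = 2k - 2 ln L
    bic = k * np.log(n) - 2 * llf  # BIC = k ln n - 2 ln L
    print(f"{name:22s} AIC={aic:7.1f} BIC={bic:7.1f}")
```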
Reservoir Computing Properties of Neural Dynamics in Prefrontal Cortex
Procyk, Emmanuel; Dominey, Peter Ford
2016-01-01
Primates display a remarkable ability to adapt to novel situations. Determining what is most pertinent in these situations is not always possible based only on the current sensory inputs; it often also depends on recent inputs and behavioral outputs that contribute to internal states. Thus, one can ask how cortical dynamics generate representations of these complex situations. It has been observed that mixed selectivity in cortical neurons contributes to representing diverse situations defined by a combination of the current stimuli, and that mixed selectivity is readily obtained in randomly connected recurrent networks. In this context, these reservoir networks reproduce the highly recurrent nature of local cortical connectivity. By recombining present and past inputs, random recurrent networks from the reservoir computing framework generate mixed selectivity, which provides pre-coded representations of an essentially universal set of contexts. These representations can then be selectively amplified through learning to solve the task at hand. We thus explored their representational power and dynamical properties after training a reservoir to perform a complex cognitive task initially developed for monkeys. The reservoir model inherently displayed a dynamic form of mixed selectivity, key to the representation of the behavioral context over time. The pre-coded representation of context was amplified by training a feedback neuron to explicitly represent this context, thereby reproducing the effect of learning and allowing the model to perform more robustly. This second version of the model demonstrates how a hybrid dynamical regime, combining the spatio-temporal processing of reservoirs with input-driven attracting dynamics generated by the feedback neuron, can be used to solve a complex cognitive task. We compared reservoir activity to neural activity of the dorsal anterior cingulate cortex of monkeys, which revealed similar network dynamics. We argue that reservoir computing is a pertinent framework to model local cortical dynamics and their contribution to higher cognitive function. PMID:27286251
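A minimal echo state network captures the reservoir idea referenced above: a fixed random recurrent network produces mixed-selective state trajectories, and only a linear readout is trained. The Python sketch below trains a ridge-regression readout on a toy delayed-recall task; sizes, spectral radius, and the task itself are illustrative, not the monkey task used in the study.

```python
# Minimal echo state network (reservoir computing) sketch: fixed random
# reservoir, trained linear readout (ridge regression). Toy memory task.
import numpy as np

rng = np.random.default_rng(5)
n_res, n_in, T = 200, 1, 2000
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

u = rng.uniform(-1, 1, (T, n_in))
y_target = np.roll(u[:, 0], 3)                    # task: recall input 3 steps back

x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])              # reservoir state update
    states[t] = x

# Train the linear readout on the second half (after washout), ridge-regularized
S, y = states[1000:], y_target[1000:]
w_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(n_res), S.T @ y)
print("readout MSE:", np.mean((S @ w_out - y) ** 2))
```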
Modelling volumetric growth in a thick walled fibre reinforced artery
NASA Astrophysics Data System (ADS)
Eriksson, T. S. E.; Watton, P. N.; Luo, X. Y.; Ventikos, Y.
2014-12-01
A novel framework for simulating growth and remodelling (G&R) of a fibre-reinforced artery, including volumetric adaption, is proposed. We show how to implement this model into a finite element framework and propose and examine two underlying assumptions for modelling growth, namely constant individual density (CID) or adaptive individual density (AID). Moreover, we formulate a novel approach which utilises a combination of both AID and CID to simulate volumetric G&R for a tissue composed of several different constituents. We consider a special case of the G&R of an artery subjected to prescribed elastin degradation and we theorise on the assumptions and suitability of CID, AID and the mixed approach for modelling arterial biology. For simulating the volumetric changes that occur during aneurysm enlargement, we observe that it is advantageous to describe the growth of collagen using CID whilst it is preferable to model the atrophy of elastin using AID.
A Flexible Electronic Commerce Recommendation System
NASA Astrophysics Data System (ADS)
Gong, Songjie
Recommendation systems have become very popular on e-commerce websites, and many of the largest commerce sites already use recommender technologies to help their customers find products to purchase. An electronic commerce recommendation system learns from a customer and recommends the products that the customer will find most valuable from among those available. Most recommendation methods, however, are hard-wired into the system and support only fixed recommendations. This paper presents a framework for a flexible electronic commerce recommendation system. The framework is composed of a user model interface, a recommendation engine, a recommendation strategy model, a recommendation technology group, a user interest model, and a database interface. In the recommendation strategy model, the method can be collaborative filtering, content-based filtering, association-rule mining, knowledge-based filtering, or a hybrid of these. The system maps demand to implementation through the strategy model, and the whole system is designed as standard components so that it can adapt to changes in the recommendation strategy.
Radiative corrections to the solar lepton mixing sum rule
NASA Astrophysics Data System (ADS)
Zhang, Jue; Zhou, Shun
2016-08-01
The simple correlation among three lepton flavor mixing angles (θ12, θ13, θ23) and the leptonic Dirac CP-violating phase δ is conventionally called a sum rule of lepton flavor mixing, which may be derived from a class of neutrino mass models with flavor symmetries. In this paper, we consider the solar lepton mixing sum rule θ12 ≈ θ12^ν + θ13 cos δ, where θ12^ν stems from a constant mixing pattern in the neutrino sector and takes the value θ12^ν = 45° for bi-maximal mixing (BM), θ12^ν = tan⁻¹(1/√2) ≈ 35.3° for tri-bimaximal mixing (TBM), or θ12^ν = tan⁻¹(2/(√5+1)) ≈ 31.7° for golden-ratio mixing (GR), and investigate the renormalization-group (RG) running effects on lepton flavor mixing parameters when this sum rule is assumed at a superhigh-energy scale. For illustration, we work within the framework of the minimal supersymmetric standard model (MSSM), and implement the Bayesian approach to explore the posterior distribution of δ at the low-energy scale, which becomes quite broad when the RG running effects are significant. Moreover, we also discuss the compatibility of the above three mixing scenarios with current neutrino oscillation data, and observe that radiative corrections can increase such compatibility for the BM scenario, resulting in a weaker preference for the TBM and GR ones.
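At tree level the sum rule can be inverted for the CP phase, cos δ ≈ (θ12 - θ12^ν)/θ13, which makes the differences between the three patterns concrete. The Python sketch below evaluates this for round illustrative input angles (not a fit to current oscillation data); values of |cos δ| > 1 signal tension before RG corrections are considered.

```python
# Worked tree-level check of the solar sum rule: invert
# theta12 ≈ theta12_nu + theta13*cos(delta) for cos(delta).
# Input angles are round illustrative numbers, not a global-fit result.
import numpy as np

theta12, theta13 = np.radians(33.5), np.radians(8.5)
patterns = {
    "BM":  np.radians(45.0),               # bi-maximal
    "TBM": np.arctan(1 / np.sqrt(2)),      # tri-bimaximal, ~35.3 deg
    "GR":  np.arctan(2 / (np.sqrt(5) + 1)),# golden ratio, ~31.7 deg
}
for name, t12nu in patterns.items():
    cos_delta = (theta12 - t12nu) / theta13
    note = "  (|cos| > 1: disfavored at tree level)" if abs(cos_delta) > 1 else ""
    print(f"{name}: cos(delta) = {cos_delta:+.2f}{note}")
```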
Model-independent analysis of quark mass matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhury, D.; Sarkar, U.
1989-06-01
In view of the apparent inconsistency of the Stech, Fritzsch-Stech, and Fritzsch-Shin models and only marginal agreement of the Fritzsch and modified Fritzsch-Stech models with recent data on B_d^0-B̄_d^0 mixing, we analyze the general quark mass matrices for three generations. Phenomenological considerations restrict the range of parameters involved to different sectors. In the present framework, the constraints corresponding to various Ansätze have been discussed.
Inferring network structure in non-normal and mixed discrete-continuous genomic data.
Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran
2018-03-01
Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.
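A frequentist cousin of the Gaussian scale-mixture idea gives a feel for the mechanics: transform each margin to normal scores through its empirical CDF, then run a graphical lasso on the transformed data. The Python sketch below does exactly that on synthetic mixed data; the paper's Bayesian machinery and its principled handling of discrete margins are richer than this rank-plus-jitter shortcut.

```python
# Hedged sketch of the Gaussian-copula idea for non-normal/mixed data: map each
# margin to normal scores via ranks, then estimate a sparse inverse covariance
# with the graphical lasso. Discrete margins here get the same rank transform
# (with jitter), a simplification relative to the paper's approach.
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(6)
n = 300
z = rng.multivariate_normal([0, 0, 0],
                            [[1, .6, 0], [.6, 1, .4], [0, .4, 1]], n)
data = np.column_stack([
    np.exp(z[:, 0]),                  # heavy-tailed continuous margin
    z[:, 1] ** 3,                     # skewed continuous margin
    (z[:, 2] > 0).astype(float),      # binary margin (e.g. mutation status)
])

def normal_scores(col):
    r = rankdata(col + 1e-9 * rng.standard_normal(col.size))  # jitter breaks ties
    return norm.ppf(r / (col.size + 1))

scores = np.column_stack([normal_scores(data[:, j]) for j in range(3)])
model = GraphicalLassoCV().fit(scores)
print(np.round(model.precision_, 2))  # zeros suggest conditional independence
```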
Can Condensing Organic Aerosols Lead to Less Cloud Particles?
NASA Astrophysics Data System (ADS)
Gao, C. Y.; Tsigaridis, K.; Bauer, S.
2017-12-01
We examined the impact of condensing organic aerosols on activated cloud number concentration in a new aerosol microphysics box model, MATRIX-VBS. The model includes the volatility-basis-set (VBS) framework in an aerosol microphysical scheme, MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), that resolves aerosol mass and number concentrations and aerosol mixing state. Preliminary results show that by including the condensation of organic aerosols, the new model (MATRIX-VBS) produces fewer activated particles than the original model (MATRIX), which treats organic aerosols as non-volatile. Parameters that affect activated cloud number concentration, such as aerosol chemical composition, mass and number concentrations, and particle sizes, are thoroughly evaluated via a suite of Monte Carlo simulations. The Monte Carlo simulations also indicate which climate-relevant parameters play a critical role in aerosol evolution in the atmosphere. This study also helps simplify the newly developed box model, which will soon be implemented in the global model GISS ModelE as a module.
Modeled forest inventory data suggest climate benefits from fuels management
Jeremy S. Fried; Theresa B. Jain; Jonathan. Sandquist
2013-01-01
As part of a recent synthesis addressing fuel management in dry, mixed-conifer forests we analyzed more than 5,000 Forest Inventory and Analysis (FIA) plots, a probability sample that represents 33 million acres of these forests throughout Washington, Oregon, Idaho, Montana, Utah, and extreme northern California. We relied on the BioSum analysis framework that...
A Framework for Understanding of Bilingual Education in Turkey: A Mixed Method Approach
ERIC Educational Resources Information Center
Ozfidan, Burhan; Burlbaw, Lynn M.
2017-01-01
This study seeks to identify the obstacles and opportunities involved in setting up a bilingual education system and to identify the challenges and benefits associated with the daily experience of maintaining a bilingual education model. This study discusses the benefits of developing a bilingual education program and what these programs can offer…
ERIC Educational Resources Information Center
Weinberg, Andrea E.; Basile, Carole G.; Albright, Leonard
2011-01-01
A mixed methods design was used to evaluate the effects of four experiential learning programs on the interest and motivation of middle school students toward mathematics and science. The Expectancy-Value model provided a theoretical framework for the exploration of 336 middle school student participants. Initially, participants were generally…
A Unified Framework for Bounded and Unbounded Numerical Estimation
ERIC Educational Resources Information Center
Kim, Dan; Opfer, John E.
2017-01-01
Representations of numerical value have been assessed by using bounded (e.g., 0-1,000) and unbounded (e.g., 0-∞) number-line tasks, with considerable debate regarding whether 1 or both tasks elicit unique cognitive strategies (e.g., addition or subtraction) and require unique cognitive models. To test this, we examined how well a mixed log-linear…
NASA Astrophysics Data System (ADS)
Mudunuru, M. K.; Karra, S.; Vesselinov, V. V.
2017-12-01
The efficiency of many hydrogeological applications, such as reactive transport and contaminant remediation, depends strongly on the macroscopic mixing occurring in the aquifer. For remediation activities, it is fundamental to enhance and control mixing through the structure of the flow field, which is shaped by groundwater pumping/extraction and by the heterogeneity and anisotropy of the flow medium. However, the relative importance of these hydrogeological parameters for the mixing process is not well studied, partly because understanding and quantifying mixing requires multiple runs of high-fidelity numerical simulations across a range of subsurface model inputs. Typically, high-fidelity simulations of existing subsurface models take hours to complete on several thousands of processors, so they may not be feasible for studying the importance and impact of model inputs on mixing. Hence, there is a pressing need for computationally efficient models that accurately predict the desired QoIs for remediation and reactive-transport applications. An attractive way to construct computationally efficient models is through reduced-order modeling using machine learning; such approaches can substantially improve our capability to model and predict the remediation process. Reduced-Order Models (ROMs) are similar to analytical solutions or lookup tables, but differ in how they are constructed. Here, we present a physics-informed ML framework to construct ROMs based on high-fidelity numerical simulations. First, random forests, the F-test, and mutual information are used to evaluate the importance of model inputs. Second, SVMs are used to construct ROMs based on these inputs. The ROMs are then used to understand mixing under perturbed vortex flows. Finally, we construct scaling laws for important QoIs such as the degree of mixing and product yield. The dependence of the scaling-law parameters on model inputs is evaluated using cluster analysis. We demonstrate application of the developed method to model analyses of reactive transport and contaminant remediation at the Los Alamos National Laboratory (LANL) chromium contamination sites. The developed method is directly applicable to analyses of alternative site-remediation scenarios.
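The two-stage recipe, rank inputs and then fit a cheap surrogate, can be sketched directly. In the Python example below a random forest ranks five toy inputs and an SVM regressor trained on the top-ranked ones serves as the ROM; the analytic "simulator" and all settings are illustrative stand-ins for the high-fidelity reactive-transport runs described above.

```python
# Sketch of the two-stage ROM recipe: (1) rank model inputs with a random
# forest, (2) fit an SVM regressor on the retained inputs as a cheap surrogate.
# The analytic toy below stands in for high-fidelity simulations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(7)
X = rng.uniform(0, 1, size=(400, 5))        # e.g. pumping rate, anisotropy, ...
degree_of_mixing = (np.tanh(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2
                    + 0.02 * rng.standard_normal(400))  # QoI from "simulations"

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, degree_of_mixing)
keep = np.argsort(rf.feature_importances_)[::-1][:2]    # retain top-ranked inputs
print("retained inputs:", keep, rf.feature_importances_.round(3))

rom = SVR(C=10.0, epsilon=0.01).fit(X[:300][:, keep], degree_of_mixing[:300])
resid = rom.predict(X[300:][:, keep]) - degree_of_mixing[300:]
print("ROM hold-out RMSE:", np.sqrt(np.mean(resid ** 2)))
```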
Sun, WaiChing; Cai, Zhijun; Choo, Jinhyun
2016-11-18
An Arlequin poromechanics model is introduced to simulate the hydro-mechanical coupling effects of fluid-infiltrated porous media across different spatial scales within a concurrent computational framework. A two-field poromechanics problem is first recast as the twofold saddle point of an incremental energy functional. We then introduce Lagrange multipliers and compatibility energy functionals to enforce the weak compatibility of hydro-mechanical responses in the overlapped domain. Here, to examine the numerical stability of this hydro-mechanical Arlequin model, we derive a necessary condition for stability, the twofold inf-sup condition for multi-field problems, and establish a modified inf-sup test formulated in the product space of the solution field. We verify the implementation of the Arlequin poromechanics model through benchmark problems covering the entire range of drainage conditions. Finally, through these numerical examples, we demonstrate the performance, robustness, and numerical stability of the Arlequin poromechanics model.
A Big Bang model of human colorectal tumor growth.
Sottoriva, Andrea; Kang, Haeyoun; Ma, Zhicheng; Graham, Trevor A; Salomon, Matthew P; Zhao, Junsong; Marjoram, Paul; Siegmund, Kimberly; Press, Michael F; Shibata, Darryl; Curtis, Christina
2015-03-01
What happens in early, still undetectable human malignancies is unknown because direct observations are impractical. Here we present and validate a 'Big Bang' model, whereby tumors grow predominantly as a single expansion producing numerous intermixed subclones that are not subject to stringent selection and where both public (clonal) and most detectable private (subclonal) alterations arise early during growth. Genomic profiling of 349 individual glands from 15 colorectal tumors showed an absence of selective sweeps, uniformly high intratumoral heterogeneity (ITH) and subclone mixing in distant regions, as postulated by our model. We also verified the prediction that most detectable ITH originates from early private alterations and not from later clonal expansions, thus exposing the profile of the primordial tumor. Moreover, some tumors appear 'born to be bad', with subclone mixing indicative of early malignant potential. This new model provides a quantitative framework to interpret tumor growth dynamics and the origins of ITH, with important clinical implications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Surzhikov, S.T.
1996-12-31
A two-dimensional radiative gas dynamics model for numerical simulation of an oxygen-hydrogen fireball, which may be generated by the explosion of a launch vehicle with cryogenic (LO2-LH2) fuel components, is presented. The following physical-chemical processes are taken into account in the numerical model: an effective chemical reaction between the gaseous components (O2-H2) of the propellant, turbulent mixing and diffusion of the components, and radiative heat transfer. The results of numerical investigations of the following problems are presented: the influence of radiative heat transfer on fireball gas dynamics during the first 13 s after the explosion, the effect of afterburning of the gaseous fuel components on fireball gas dynamics, and the effect of turbulence on fireball gas dynamics (within the framework of an algebraic model of turbulent mixing).
Hamad, Eradah O; Savundranayagam, Marie Y; Holmes, Jeffrey D; Kinsella, Elizabeth Anne; Johnson, Andrew M
2016-03-08
Twitter's 140-character microblog posts are increasingly used to access information and facilitate discussions among health care professionals and between patients with chronic conditions and their caregivers. Recently, efforts have emerged to investigate the content of health care-related posts on Twitter. This marks a new area for researchers to investigate and apply content analysis (CA). In current infodemiology, infoveillance and digital disease detection research initiatives, quantitative and qualitative Twitter data are often combined, and there are no clear guidelines for researchers to follow when collecting and evaluating Twitter-driven content. The aim of this study was to identify studies on health care and social media that used Twitter feeds as a primary data source and CA as an analysis technique. We evaluated the resulting 18 studies based on a narrative review of previous methodological studies and textbooks to determine the criteria and main features of quantitative and qualitative CA. We then used the key features of CA and mixed-methods research designs to propose the combined content-analysis (CCA) model as a solid research framework for designing, conducting, and evaluating investigations of Twitter-driven content. We conducted a PubMed search to collect studies published between 2010 and 2014 that used CA to analyze health care-related tweets. The PubMed search and reference list checks of selected papers identified 21 papers. We excluded 3 papers and further analyzed 18. Results suggest that the methods used in these studies were not purely quantitative or qualitative, and the mixed-methods design was not explicitly chosen for data collection and analysis. A solid research framework is needed for researchers who intend to analyze Twitter data through the use of CA. We propose the CCA model as a useful framework that provides a straightforward approach to guide Twitter-driven studies and that adds rigor to health care social media investigations. We provide suggestions for the use of the CCA model in elder care-related contexts.
Caring for caregivers of high-needs children.
Peckham, Allie; Spalding, Karen; Watkins, Jillian; Bruce-Barrett, Cindy; Grasic, Marta; Williams, A Paul
2014-01-01
The Caregiver Framework for Children with Medical Complexity, led by the Hospital for Sick Children, is a ground-breaking initiative that validates and supports the vital role of unpaid, family caregivers. The project uses a supported self-management model that includes a modest amount of funding to address pressing needs, and relies on Key Workers who provide ongoing education, counselling and care management to assist caregivers in planning over the longer-term. This paper describes the findings from a multi-stage, mixed-methods evaluation to examine the design and outcomes of the Caregiver Framework. Copyright © 2014 Longwoods Publishing.
Foundations of chaotic mixing.
Wiggins, Stephen; Ottino, Julio M
2004-05-15
The simplest mixing problem corresponds to the mixing of a fluid with itself; this case provides a foundation on which the subject rests. The objective here is to study mixing independently of the mechanisms used to create the motion, and to review elements of theory focusing mostly on mathematical foundations and minimal models. The flows under consideration are of two types: two-dimensional (2D) 'blinking flows' and three-dimensional (3D) duct flows. Given that mixing in continuous 3D duct flows depends critically on cross-sectional mixing, and that many microfluidic applications involve continuous flows, we focus on the essential aspects of mixing in 2D flows, as they provide a foundation from which to understand more complex cases. The baker's transformation is taken as the centrepiece for describing the dynamical systems framework. In particular, a hierarchy of characterizations of mixing exists, Bernoulli → mixing → ergodic, ordered according to the quality of mixing (the strongest first). Most importantly for the design process, we show how the so-called linked twist maps function as a minimal picture of mixing, provide a mathematical structure for understanding the types of 2D flows that arise in many micromixers already built, and give conditions guaranteeing the best quality of mixing. Extensions of these concepts lead to first-principle-based designs without resorting to lengthy computations.
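The baker's transformation mentioned above is short enough to state in full: stretch the unit square horizontally by two, cut, and stack. The Python sketch below iterates a blob of tracer points and tracks the decay of coarse-grained concentration variance, one elementary mixing diagnostic; the grid size and point count are arbitrary choices.

```python
# The baker's map as a minimal model of chaotic mixing: stretch horizontally
# by 2, cut, and stack. Coarse-grained variance decays as striations refine.
import numpy as np

def bakers_map(x, y):
    return (np.where(x < 0.5, 2 * x, 2 * x - 1),
            np.where(x < 0.5, y / 2, y / 2 + 0.5))

rng = np.random.default_rng(8)
x = rng.uniform(0.0, 0.5, 20000)    # tracer initially fills the left half
y = rng.uniform(0.0, 1.0, 20000)

for it in range(6):
    x, y = bakers_map(x, y)
    # coarse-grained concentration on an 8x8 grid; variance -> 0 as it mixes
    H, _, _ = np.histogram2d(x, y, bins=8, range=[[0, 1], [0, 1]])
    c = H / H.sum()
    print(f"iter {it + 1}: coarse-grained variance = {c.var():.2e}")
```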
Four-Wave-Mixing Oscillations in a simplified Boltzmannian semiconductor model with LO-phonons
NASA Astrophysics Data System (ADS)
Tamborenea, P. I.; Bányai, L.; Haug, H.
1996-03-01
The recently discovered oscillations [L. Bányai, D. B. Tran Thoai, E. Reitsamer, H. Haug, D. Steinbach, M. U. Wehner, M. Wegener, T. Marschner, and W. Stolz, Phys. Rev. Lett. 75, 2188 (1995)] of the integrated four-wave-mixing signal in semiconductors due to electron-LO-phonon scattering are studied within a simplified Boltzmann-type model. Although several aspects of the experimental results require a description within the framework of non-Markovian quantum-kinetic theory, our simplified Boltzmannian model is well suited to analyzing the origin of the observed novel oscillations of frequency (1 + m_e/m_h) ℏω_LO. To this end, we developed a third-order analytic solution of the semiconductor Bloch equations (SBE) with Boltzmann-type LO-phonon collision terms. Results of this theory, along with numerical solutions of the SBE, will be presented.
Shape coexistence and the role of axial asymmetry in 72Ge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayangeakaa, A. D.; Janssens, R. F.; Wu, C. Y.
2016-01-22
The quadrupole collectivity of low-lying states and the anomalous behavior of the 0₂⁺ and 2₃⁺ levels in 72Ge are investigated via projectile multi-step Coulomb excitation with GRETINA and CHICO-2. A total of forty-six E2 and M1 matrix elements connecting fourteen low-lying levels were determined using the least-squares search code GOSIA. Evidence for triaxiality and shape coexistence, based on the model-independent shape invariants deduced from the Kumar-Cline sum rule, is presented. Moreover, these are interpreted using a simple two-state mixing model as well as multi-state mixing calculations carried out within the framework of the triaxial rotor model. Our results represent a significant milestone towards the understanding of the unusual structure of this nucleus.
Coupled charge migration and fluid mixing in reactive fronts
NASA Astrophysics Data System (ADS)
Ghosh, Uddipta; Bandopadhyay, Aditya; Jougnot, Damien; Le Borgne, Tanguy; Meheust, Yves
2017-04-01
Quantifying fluid mixing in subsurface environments and its consequences for biogeochemical reactions is of paramount importance owing to its role in processes such as contaminant migration, aquifer remediation, CO2 sequestration and clogging, to name a few (Dentz et al. 2011). The presence of strong velocity gradients in porous media is expected to lead to enhanced diffusive mixing and augmented reaction rates (Le Borgne et al. 2014). Accurate in situ imaging of subsurface reactive solute transport and mixing remains to date a challenging proposition: the opacity of the medium prevents optical imaging, and field methods based on tracer tests do not provide spatial information. Recently developed geophysical methods based on the temporal monitoring of electrical conductivity and polarization have shown promise for mapping and monitoring biogeochemical reactions in the subsurface, although it remains challenging to decipher the multiple sources of electrical signals (e.g. Knight et al. 2010). In this work, we explore the coupling between fluid mixing, reaction and charge migration in porous media to evaluate the potential of mapping reaction rates from electrical measurements. To this end, we develop a new theoretical framework based on a lamellar mixing model (Le Borgne et al. 2013) to quantify changes in electrical mobility induced by chemical reactions across mixing fronts. Electrical conductivity and induced polarization are strongly dependent on the concentration of ionic species, which in turn depends on the local reaction rates. Hence, our results suggest that variations in real and complex electrical conductivity may be quantitatively related to the mixing and reaction dynamics. The presented theory thus provides a novel upscaling framework for quantifying the coupling between mixing, reaction and charge migration in heterogeneous porous media flows. References: Dentz et al., Mixing, spreading and reaction in heterogeneous media: A brief review, J. Contam. Hydrol. 120-121, 1 (2011). Le Borgne et al., Impact of fluid deformation on mixing-induced chemical reactions in heterogeneous flows, Geophys. Res. Lett. 41, 7898 (2014). Knight et al., Geophysics at the interface: Response of geophysical properties to solid-fluid, fluid-fluid, and solid-solid interfaces, Rev. Geophys. 48 (2010). Le Borgne et al., Stretching, coalescence and mixing in porous media, Phys. Rev. Lett. 110, 204501 (2013).
Correlation and simple linear regression.
Eberly, Lynn E
2007-01-01
This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
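The chapter's two core tools, and the regression-ANOVA connection it mentions, fit in a few lines. The Python sketch below computes a Pearson correlation and a simple linear regression on synthetic dose-response data and verifies that R² equals r²; the variable names are invented for illustration.

```python
# Quick illustration of correlation and simple linear regression on synthetic
# microbiology-flavored data, including the regression-ANOVA connection
# (the overall F-test and R^2 = r^2 for a single predictor).
import numpy as np
import statsmodels.api as sm
from scipy.stats import pearsonr

rng = np.random.default_rng(9)
dose = rng.uniform(0, 10, 40)                 # e.g. nutrient concentration
growth = 1.0 + 0.8 * dose + rng.standard_normal(40)

r, p = pearsonr(dose, growth)                 # correlation with a test
fit = sm.OLS(growth, sm.add_constant(dose)).fit()
print(f"r = {r:.3f} (p = {p:.2g});  slope = {fit.params[1]:.3f}")
print(f"R^2 = {fit.rsquared:.3f} = r^2 = {r**2:.3f};  F = {fit.fvalue:.1f}")
```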
[Mixed depressions: clinical and neurophysiological biomarkers].
Micoulaud Franchi, J-A; Geoffroy, P-A; Vion-Dury, J; Balzani, C; Belzeaux, R; Maurel, M; Cermolacce, M; Fakra, E; Azorin, J-M
2013-12-01
Epidemiological studies of major depressive episodes (MDE) highlight the frequent association of symptoms or signs of mania or hypomania with the depressive syndrome. Beyond the strict definition of DSM-IV, epidemiological recognition of a subset of MDE characterized by the presence of symptoms or signs of the opposite polarity is clinically important because it is associated with a worse prognosis and therapeutic response compared to the subgroup of "typical MDE". The development of DSM-5 took these epidemiological data into account. DSM-5 opted for a more dimensional perspective in moving the concept of "mixed features" from an "episode" to a "specifier" of mood disorder. As outlined in the DSM-5: "Mixed features associated with a major depressive episode have been found to be a significant risk factor for the development of bipolar I and II disorder. As a result, it is clinically useful to note the presence of this specifier for treatment planning and monitoring of response to treatment". However, mixed features are sometimes difficult to identify, and neurophysiological biomarkers would be useful for making a more specific diagnosis. Two neurophysiological models make it possible to better understand MDE with mixed features: (i) the emotional regulation model, which highlights a tendency toward hyper-reactive and unstable emotional responses, and (ii) the vigilance regulation model, which highlights, through EEG recording, a tendency toward unstable vigilance. Further research is required to better understand the relationships between these two models. These models provide the opportunity of a neurophysiological framework to better understand the mixed features associated with MDE and to identify potential neurophysiological biomarkers to guide therapeutic strategies. Copyright © 2013 L'Encéphale. Published by Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Shuler, Christopher K.; El-Kadi, Aly I.; Dulai, Henrietta; Glenn, Craig R.; Fackrell, Joseph
2017-12-01
This study presents a modeling framework for quantifying human impacts and for partitioning the sources of contamination related to water quality in the mixed-use landscape of a small tropical volcanic island. On Tutuila, the main island of American Samoa, production wells in the most populated region (the Tafuna-Leone Plain) produce most of the island's drinking water. However, much of this water has been deemed unsafe to drink since 2009. Tutuila has three predominant anthropogenic non-point sources of groundwater pollution of concern: on-site disposal systems (OSDS), agricultural chemicals, and pig manure. These sources are broadly distributed throughout the landscape and are located near many drinking-water wells. Water quality analyses show a link between elevated levels of total dissolved groundwater nitrogen (TN) and areas with high non-point-source pollution density, suggesting that TN can be used as a tracer of groundwater contamination from these sources. The modeling framework used in this study integrates land-use information, hydrological data, and water quality analyses with nitrogen loading and transport models. The approach utilizes a numerical groundwater flow model, a nitrogen-loading model, and a multi-species contaminant transport model. Nitrogen from each source is modeled as an independent component in order to trace the impact of individual land-use activities. Model results are calibrated and validated with dissolved groundwater TN concentrations and inorganic δ15N values, respectively. Results indicate that OSDS contribute significantly more TN to Tutuila's aquifers than other sources, and thus should be prioritized in future water-quality management efforts.
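The source-partitioning idea can be caricatured as a steady-state mass balance; the sketch below uses hypothetical loads and recharge (the study itself relies on numerical flow and multi-species transport models):

```python
# Hypothetical annual nitrogen loads reaching the aquifer (kg N / yr);
# the source names follow the abstract, the numbers do not.
loads = {"OSDS": 12000.0, "agriculture": 4000.0, "pig_manure": 3000.0}
recharge_m3_per_yr = 5.0e7  # hypothetical groundwater recharge volume

total_load = sum(loads.values())
# kg -> mg (x 1e6) and m^3 -> L (x 1e3) give mg/L.
tn_mg_per_l = total_load * 1e6 / (recharge_m3_per_yr * 1e3)

for source, load in loads.items():
    print(f"{source}: {100 * load / total_load:.0f}% of modeled TN")
print(f"flow-averaged TN ~ {tn_mg_per_l:.2f} mg/L")
```

Tracking each source as an independent component, as the abstract describes, is what lets the model attribute a well's TN to individual land-use activities.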
Drislane, Laura E.; Patrick, Christopher J.; Arsal, Güler
2014-01-01
The Triarchic Model of psychopathy (Patrick, Fowles, and Krueger, 2009) was formulated as an integrative framework for reconciling differing conceptions of psychopathy. The model characterizes psychopathy in terms of three distinguishable phenotypic components: boldness, meanness, and disinhibition. Data from a large mixed-gender undergraduate sample (N = 618) were used to examine relations of several of the best-known measures for assessing psychopathic traits with scores on the Triarchic Psychopathy Measure (TriPM), an inventory developed to operationalize the Triarchic Model through separate facet scales. Analyses revealed that established psychopathy inventories index the components of the model, as operationalized by the TriPM, to varying degrees. While each inventory provided effective coverage of the meanness and disinhibition components, instruments differed in their representation of boldness. The current results demonstrate the heuristic value of the Triarchic Model for delineating commonalities and differences among alternative measures of psychopathy, and provide support for its utility as a framework for reconciling alternative conceptions of psychopathy. PMID:24320762
ERIC Educational Resources Information Center
Kouyoumdjian, Claudia; Guzmán, Bianca L.; Garcia, Nichole M.; Talavera-Bustillos, Valerie
2017-01-01
Growth of Latino students in postsecondary education merits an examination of their resources/challenges. A community cultural wealth model provided a framework to examine unacknowledged student resources and challenges. A mixed method approach found that first- and second-generation college students report equal numbers of sources of…
ERIC Educational Resources Information Center
Kemeny, M. Elizabeth
2010-01-01
Using the conceptual model of social structure and personality framework (House, 1981) as a theoretical guide, this cross sectional mixed-method design examined how organizational structure and culture relate to practices for training direct care workers in 328 aging and disability network service provider organizations in Pennsylvania. To…
High-Performance Mixed Models Based Genome-Wide Association Analysis with omicABEL software
Fabregat-Traver, Diego; Sharapov, Sodbo Zh.; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Aulchenko, Yurii; Bientinesi, Paolo
2014-01-01
To raise the power of genome-wide association studies (GWAS) and avoid false-positive results in structured populations, one can rely on mixed-model-based tests. When large samples are used, and when multiple traits are to be studied in the 'omics' context, this approach becomes computationally challenging. Here we consider the problem of mixed-model-based GWAS for an arbitrary number of traits, and demonstrate that different computational algorithms are optimal for single-trait and multiple-trait analyses. We implement these optimal algorithms in a high-performance computing framework that uses state-of-the-art linear algebra kernels, incorporates optimizations, and avoids redundant computations, increasing throughput while reducing memory usage and energy consumption. We show that, compared to existing libraries, our algorithms and software achieve considerable speed-ups. The OmicABEL software described in this manuscript is available under the GNU GPL v. 3 license as part of the GenABEL project for statistical genomics at http://www.genabel.org/packages/OmicABEL. PMID:25717363
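A minimal numpy sketch of the single-trait computation such methods accelerate, assuming the variance ratio delta has already been estimated on the null model (this is an EMMA-style illustration, not omicABEL's actual code):

```python
import numpy as np

def mixed_model_gwas(y, X_cov, G, K, delta):
    """Per-SNP tests under y = X b + g + e with cov(g) = sg2*K (kinship)
    and cov(e) = se2*I, where delta = se2/sg2 is given. The kinship
    eigendecomposition is computed once and reused for every SNP, the
    kind of redundancy-avoidance the paper's algorithms build on."""
    S, U = np.linalg.eigh(K)              # K = U diag(S) U^T
    w = 1.0 / (S + delta)                 # inverse rotated covariance (up to sg2)
    yt, Xt = U.T @ y, U.T @ X_cov
    tstats = []
    for j in range(G.shape[1]):
        A = np.column_stack([Xt, U.T @ G[:, j]])
        AtWA = A.T @ (w[:, None] * A)     # generalized least squares
        beta = np.linalg.solve(AtWA, A.T @ (w * yt))
        resid = yt - A @ beta
        sg2 = (w * resid**2).sum() / (len(y) - A.shape[1])
        se = np.sqrt(sg2 * np.linalg.inv(AtWA)[-1, -1])
        tstats.append(beta[-1] / se)      # t-like statistic for the SNP
    return np.array(tstats)
```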
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolme, David S; Mikkilineni, Aravind K; Rose, Derek C
Analog computational circuits have been demonstrated to provide substantial improvements in power and speed relative to digital circuits, especially for applications requiring extreme parallelism but only modest precision. Deep machine learning is one such area and stands to benefit greatly from analog and mixed-signal implementations. However, even at modest precisions, offsets and non-linearity can degrade system performance. Furthermore, in all but the simplest systems, it is impossible to directly measure the intermediate outputs of all sub-circuits. The result is that circuit designers are unable to accurately evaluate the non-idealities of computational circuits in-situ and are therefore unable to fully utilize measurement results to improve future designs. In this paper we present a technique to use deep learning frameworks to model physical systems. Recently developed libraries like TensorFlow make it possible to use back propagation to learn parameters in the context of modeling circuit behavior. Offsets and scaling errors can be discovered even for sub-circuits that are deeply embedded in a computational system and not directly observable. The learned parameters can be used to refine simulation methods or to identify appropriate compensation strategies. We demonstrate the framework using a mixed-signal convolution operator as an example circuit.
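A toy version of the idea, using TensorFlow's automatic differentiation to recover an unobservable gain and offset of a simulated sub-circuit (the circuit, values, and loss are hypothetical stand-ins, not the paper's convolution operator):

```python
import tensorflow as tf

# Ideal sub-circuit: y = x1 * x2. Fabrication adds an unknown gain and
# offset that cannot be probed directly on the chip.
gain, offset = tf.Variable(1.0), tf.Variable(0.0)

def circuit_model(x1, x2):
    return gain * (x1 * x2) + offset

# Synthetic stand-ins for measured chip inputs and outputs.
x1 = tf.random.uniform([256], -1.0, 1.0)
x2 = tf.random.uniform([256], -1.0, 1.0)
measured = 0.93 * (x1 * x2) - 0.05      # the chip's "true" behavior

opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for _ in range(300):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((circuit_model(x1, x2) - measured) ** 2)
    opt.apply_gradients(zip(tape.gradient(loss, [gain, offset]),
                            [gain, offset]))

print(gain.numpy(), offset.numpy())     # learned non-idealities
```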
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ching, Ping Pui; Zaveri, Rahul A.; Easter, Richard C.
2016-05-27
Light absorption by black carbon (BC) particles emitted from fossil fuel combustion depends on how thickly they are coated with non-refractory species such as ammonium, sulfate, nitrate, organics, and water. The cloud condensation nuclei (CCN) activation property of a particle depends on its dry size and the hygroscopicities of all the individual species mixed together. It is therefore necessary to represent both the size and the mixing state of aerosols to reliably predict their climate-relevant properties in atmospheric models. Here we describe and evaluate a novel sectional framework in the Model for Simulating Aerosol Interactions and Chemistry, referred to as MOSAIC-mix, that represents the mixing state by resolving aerosol dry size (Ddry), BC dry mass fraction (wBC), and hygroscopicity (κ). Using ten idealized urban plume scenarios in which different types of aerosols evolve over 24 hours under a range of atmospherically relevant environmental conditions, we examine errors in CCN concentrations and optical properties with respect to a more explicit aerosol mixing state representation. We find that only a small number of wBC and κ bins are needed to achieve significant reductions in the errors, and propose a configuration consisting of 24 Ddry bins, 2 wBC bins, and 2 κ bins that gives 24-hour average errors of about 5% or less in CCN concentrations and optical properties, 3-4 times lower than those from size-only-resolved simulations. These results show that MOSAIC-mix is suitable for use in regional and global models to examine the effects of evolving aerosol mixing states on aerosol-radiation-cloud feedbacks.
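For intuition, the per-particle hygroscopicity resolved by the κ bins is commonly computed with a volume-weighted mixing rule; a minimal sketch with illustrative values (not the MOSAIC-mix implementation):

```python
import numpy as np

# Volume fractions and kappa values for an internally mixed particle
# (BC, sulfate, organics); numbers are illustrative only.
volume_fractions = np.array([0.3, 0.5, 0.2])
kappas = np.array([0.0, 0.6, 0.1])

kappa_mix = float(np.sum(volume_fractions * kappas))
print(f"effective kappa = {kappa_mix:.2f}")   # 0.32 for these values
```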
Single-particle dynamics of the Anderson model: a local moment approach
NASA Astrophysics Data System (ADS)
Glossop, Matthew T.; Logan, David E.
2002-07-01
A non-perturbative local moment approach to the single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It thereby captures strong-coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum, as well as the mixed-valence and essentially perturbative empty-orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.
NASA Astrophysics Data System (ADS)
Varenyk, O. V.; Silibin, M. V.; Kiselev, D. A.; Eliseev, E. A.; Kalinin, S. V.; Morozovska, A. N.
2015-08-01
The frequency dependent Electrochemical Strain Microscopy (ESM) response of mixed ionic-electronic conductors is analyzed within the framework of Fermi-Dirac statistics and the Vegard law, accounting for steric effects from mobile donors. The emergence of dynamic charge waves and nonlinear deformation of the surface in response to bias applied to the tip-surface junction is numerically explored. The 2D maps of the strain and concentration distributions across the mixed ionic-electronic conductor and bias-induced surface displacements are calculated. The obtained numerical results can be applied to quantify the ESM response of Li-based solid electrolytes, materials with resistive switching, and electroactive ferroelectric polymers, which are of potential interest for flexible and high-density non-volatile memory devices.
NASA Astrophysics Data System (ADS)
Palevsky, Hilary I.; Doney, Scott C.
2018-05-01
Estimated rates and efficiency of ocean carbon export flux are sensitive to differences in the depth horizons used to define export, which often vary across methodological approaches. We evaluate sinking particulate organic carbon (POC) flux rates and efficiency (e-ratios) in a global earth system model, using a range of commonly used depth horizons: the seasonal mixed layer depth, the particle compensation depth, the base of the euphotic zone, a fixed depth horizon of 100 m, and the maximum annual mixed layer depth. Within this single dynamically consistent model framework, global POC flux rates vary by 30% and global e-ratios by 21% across different depth horizon choices. Zonal variability in POC flux and e-ratio also depends on the export depth horizon due to pronounced influence of deep winter mixing in subpolar regions. Efforts to reconcile conflicting estimates of export need to account for these systematic discrepancies created by differing depth horizon choices.
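A small sketch of how the depth-horizon choice moves the numbers, using a generic power-law ("Martin curve") flux profile rather than the earth system model itself (all values and depths illustrative):

```python
import numpy as np

F_ref, z_ref, b = 10.0, 100.0, 0.86   # reference flux, depth, attenuation
npp = 50.0                            # net primary production, same units

def poc_flux(z):
    # Power-law decrease of sinking POC flux with depth.
    return F_ref * (z / z_ref) ** (-b)

for name, z in [("euphotic-zone base", 75.0), ("fixed 100 m", 100.0),
                ("max winter mixed layer", 250.0)]:
    f = poc_flux(z)
    print(f"{name:>24s}: flux = {f:5.2f}, e-ratio = {f / npp:.2f}")
```

The deeper the horizon, the smaller both the flux and the e-ratio, which is the systematic discrepancy the abstract quantifies.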
Advances in mixed-integer programming methods for chemical production scheduling.
Velez, Sara; Maravelias, Christos T
2014-01-01
The goal of this paper is to critically review advances in the area of chemical production scheduling over the past three decades and then present two recently proposed solution methods that have led to dramatic computational enhancements. First, we present a general framework and problem classification and discuss modeling and solution methods with an emphasis on mixed-integer programming (MIP) techniques. Second, we present two solution methods: (a) a constraint propagation algorithm that allows us to compute parameters that are then used to tighten MIP scheduling models and (b) a reformulation that introduces new variables, thus leading to effective branching. We also present computational results and an example illustrating how these methods are implemented, as well as the resulting enhancements. We close with a discussion of open research challenges and future research directions.
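As a flavor of the MIP formulations the review covers, a deliberately tiny assignment-style scheduling model in PuLP (a toy, not one of the paper's production-scheduling formulations):

```python
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

tasks, slots = range(3), range(4)
weight = [3, 1, 2]                    # weights for completion-time cost

prob = LpProblem("toy_schedule", LpMinimize)
x = {(i, t): LpVariable(f"x_{i}_{t}", cat=LpBinary)
     for i in tasks for t in slots}

# Objective: total weighted completion time (slot t finishes at t + 1).
prob += lpSum(weight[i] * (t + 1) * x[i, t] for i in tasks for t in slots)
for i in tasks:                       # each task runs exactly once
    prob += lpSum(x[i, t] for t in slots) == 1
for t in slots:                       # at most one task per slot
    prob += lpSum(x[i, t] for i in tasks) <= 1

prob.solve()
print(sorted((t, i) for (i, t), v in x.items() if v.value() == 1))
```

Tightening such models with propagated parameter bounds, and reformulating them for more effective branching, are exactly the two enhancements the paper then develops.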
Mode and Intermediate Waters in Earth System Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnanadesikan, Anand; Sarmiento, Jorge L.
This report describes work done as part of a joint Princeton-Johns Hopkins project to look at the impact of mode and intermediate waters in Earth System Models. The Johns Hopkins portion of this work focussed on the role of lateral mixing in ventilating such waters, with important implications for hypoxia, the uptake of anthropogenic carbon, the dynamics of El Nino and carbon pumps. The Johns Hopkins group also collaborated with the Princeton Group to help develop a watermass diagnostics framework.
Quark-lepton flavor democracy and the nonexistence of the fourth generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cvetic, G.; Kim, C.S.
1995-01-01
In the standard model with two Higgs doublets (type II), which, unlike the minimal standard model, shows a consistent trend toward a flavor gauge theory and its related flavor democracy in the quark and leptonic sectors as the energy of the probes increases, we impose mixed quark-lepton flavor democracy at a high "transition" energy and assume the usual seesaw mechanism. We consequently find that the existence of a fourth generation of fermions in this framework is practically ruled out.
2014-07-01
…the inclusion was characterized using a variety of analytical techniques, such as powder X-ray diffraction (PXRD), thermogravimetric analysis (TGA), and Fourier transform infrared (FTIR) spectroscopy. Analysis of the MOF and of the complexes of the MOF with the guest molecules was performed using an Agilent GC-MS (Model 6890N GC and Model 5973N…
ERIC Educational Resources Information Center
Gray, James E.
2010-01-01
This research is a mixed-methods study that presents a conceptual framework focusing on the relationship between professional learning communities, high-yield literacy strategies, and their phases of change. As a result, the purpose of this study is threefold. First, a conceptual framework integrating professional learning…
ERIC Educational Resources Information Center
Cole, Patricia Ann
2013-01-01
This sequential explanatory mixed methods study investigated 24 college and university syllabi for multicultural education content, using the framework for multicultural education devised by James A. Banks (2006). This framework was used to analyze the collected data with descriptive statistics in quantitative phase one. The four…
Functional Nonlinear Mixed Effects Models For Longitudinal Image Data
Luo, Xinchao; Zhu, Lixing; Kong, Linglong; Zhu, Hongtu
2015-01-01
Motivated by studying large-scale longitudinal image data, we propose a novel functional nonlinear mixed effects modeling (FNMEM) framework to model the nonlinear spatial-temporal growth patterns of brain structure and function and their association with covariates of interest (e.g., time or diagnostic status). Our FNMEM explicitly quantifies a random nonlinear association map of individual trajectories. We develop an efficient estimation method to estimate the nonlinear growth function and the covariance operator of the spatial-temporal process. We propose a global test and a simultaneous confidence band for some specific growth patterns. We conduct Monte Carlo simulations to examine the finite-sample performance of the proposed procedures. We apply FNMEM to investigate the spatial-temporal dynamics of white-matter fiber skeletons in a national database for autism research. Our FNMEM may provide a valuable tool for charting the developmental trajectories of various neuropsychiatric and neurodegenerative disorders. PMID:26213453
PharmML in Action: an Interoperable Language for Modeling and Simulation.
Bizzotto, R; Comets, E; Smith, G; Yvon, F; Kristensen, N R; Swat, M J
2017-10-01
PharmML is an XML-based exchange format created with a focus on nonlinear mixed-effect (NLME) models used in pharmacometrics, but providing a very general framework that also allows describing mathematical and statistical models such as single-subject or nonlinear and multivariate regression models. This tutorial provides an overview of the structure of this language, brief suggestions on how to work with it, and use cases demonstrating its power and flexibility. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
Towards a bulk approach to local interactions of hydrometeors
NASA Astrophysics Data System (ADS)
Baumgartner, Manuel; Spichtinger, Peter
2018-02-01
The growth of small cloud droplets and ice crystals is dominated by the diffusion of water vapor. Usually, Maxwell's approach to the growth of isolated particles is used to describe this process. However, recent investigations show that local interactions between particles can change the diffusion properties of cloud particles. In this study we develop a way to include these local interactions in a bulk model. For this purpose, a simplified framework of local interaction is proposed and governing equations are derived from this setup. The new model is tested against direct simulations and incorporated into a parcel model framework. Using the parcel model, possible implications of the new model approach for clouds are investigated. The results indicate that for specific scenarios the lifetime of cloud droplets in subsaturated air may be longer (e.g., for an initially water-supersaturated air parcel within a downdraft). These effects might have an impact on mixed-phase clouds, for example in terms of riming efficiencies.
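For reference, the classical isolated-particle law that serves as the baseline here can be written in the standard textbook form (e.g. Rogers and Yau; notation assumed, not the paper's):

```latex
% Diffusional growth of an isolated droplet of radius r at
% supersaturation s = S - 1:
r\,\frac{dr}{dt} = \frac{s}{F_k + F_d},
\qquad
F_k = \left(\frac{L_v}{R_v T} - 1\right)\frac{L_v \rho_l}{K T},
\qquad
F_d = \frac{\rho_l R_v T}{D\, e_s(T)},
% with L_v the latent heat, K the thermal conductivity of air, D the
% vapor diffusivity, e_s(T) the saturation vapor pressure, and rho_l
% the density of liquid water. Local interactions between particles
% effectively perturb s and D relative to this isolated-particle limit.
```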
Quality choice in a health care market: a mixed duopoly approach.
Sanjo, Yasuo
2009-05-01
We investigate a health care market with uncertainty in a mixed duopoly, where a partially privatized public hospital competes against a private hospital in terms of quality choice. We use a simple Hotelling-type spatial competition model by incorporating mean-variance analysis and the framework of partial privatization. We show how the variance in the quality perceived by patients affects the true quality of medical care provided by hospitals. In addition, we show that a case exists in which the quality of the partially privatized hospital becomes higher than that of the private hospital when the patient's preference for quality is relatively high.
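A generic mean-variance Hotelling specification consistent with this setup, with assumed notation, might read:

```latex
% A patient at location x choosing hospital j with price p_j and
% perceived quality of mean mu_j and variance sigma_j^2 obtains
U_j(x) = v + \alpha\,\mu_j - \frac{\gamma}{2}\,\sigma_j^2
         - t\,\lvert x - x_j \rvert - p_j ,
% the indifferent patient's location fixes each hospital's demand, and
% the partially privatized hospital maximizes a weighted sum of profit
% and social welfare according to its degree of privatization.
```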
Z' portal to Chern-Simons Dark Matter
NASA Astrophysics Data System (ADS)
Arcadi, Giorgio; Ghosh, Pradipta; Mambrini, Yann; Pierre, Mathias; Queiroz, Farinaldo S.
2017-11-01
We study the phenomenological credibility of vectorial dark matter coupled to a Z' portal through a Chern-Simons interaction. We scrutinize two possibilities for connecting the Z' with the Standard Model: (1) through kinetic mixing and (2) through a second Chern-Simons interaction. Both scenarios are characterized by suppressed nuclear recoil scattering, rendering direct detection searches unpromising. Indirect detection experiments, on the other hand, furnish complementary limits for TeV-scale masses, especially with CTA. Searches for mono-jet and dilepton signals at the LHC are important to partially probe the kinetic mixing setup. Finally we propose a UV completion of the Chern-Simons Dark Matter framework.
Relative importance of climatic, geographic and socio-economic determinants of malaria in Malawi
2013-01-01
Background: Malaria transmission is influenced by variations in meteorological conditions, which impact the biology of the parasite and its vector, and by socio-economic conditions, such as levels of urbanization, poverty and education, which impact human vulnerability and vector habitat. The many potential drivers of malaria, both extrinsic, such as climate, and intrinsic, such as population immunity, are often difficult to disentangle. This presents a challenge for the modelling of malaria risk in space and time. Methods: A statistical mixed model framework is proposed to model malaria risk at the district level in Malawi, using an age-stratified spatio-temporal dataset of malaria cases from July 2004 to June 2011. Several climatic, geographic and socio-economic factors thought to influence malaria incidence were tested in an exploratory model. In order to account for unobserved confounding factors that influence malaria and are not captured by measured covariates, a generalized linear mixed model was adopted, which included structured and unstructured spatial and temporal random effects. A hierarchical Bayesian framework using Markov chain Monte Carlo simulation was used for model fitting and prediction. Results: Using a stepwise model selection procedure, several explanatory variables were identified to have significant associations with malaria, including climatic, cartographic and socio-economic data. Once intervention variations, unobserved confounding factors and spatial correlation were considered in a Bayesian framework, a final model emerged with statistically significant predictor variables limited to average precipitation (quadratic relation) and average temperature during the three months previous to the month of interest. Conclusions: When modelling malaria risk in Malawi it is important to account for spatial and temporal heterogeneity and correlation between districts. Once observed and unobserved confounding factors are allowed for, precipitation and temperature in the months prior to the malaria season of interest are found to significantly determine spatial and temporal variations of malaria incidence. Climate information was found to improve the estimation of malaria relative risk in 41% of the districts in Malawi, particularly at higher altitudes where transmission is irregular. This highlights the potential value of climate-driven seasonal malaria forecasts. PMID:24228784
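A generic form of the kind of spatio-temporal Poisson mixed model described, with assumed notation, is:

```latex
% Case counts y_{it} for district i and month t, with expected counts
% E_{it} from population and relative risk lambda_{it}:
y_{it} \sim \mathrm{Poisson}(E_{it}\,\lambda_{it}),
\qquad
\log \lambda_{it} = \mathbf{x}_{it}^{\top}\boldsymbol{\beta}
    + u_i + v_i + \phi_t + \psi_t ,
% where u_i / phi_t are spatially / temporally structured random
% effects (e.g. CAR and random-walk priors) and v_i / psi_t their
% unstructured counterparts, all fitted by MCMC in a hierarchical
% Bayesian framework.
```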
Koseff, Jeffrey R.; Holen, Jacqueline K.; Monismith, Stephen G.; Cloern, James E.
1993-01-01
Coastal ocean waters tend to have very different patterns of phytoplankton biomass variability from the open ocean, and the connections between physical variability and phytoplankton bloom dynamics are less well established for these shallow systems. Predicting biological responses to physical variability in these environments is inherently difficult because the recurrent seasonal patterns of mixing are complicated by aperiodic fluctuations in river discharge and the high-frequency components of tidal variability. We might expect, then, less predictable and more complex bloom dynamics in these shallow coastal systems compared with the open ocean. Given this complex and dynamic physical environment, can we develop a quantitative framework to define the physical regimes necessary for bloom inception, and can we identify the important mechanisms of physical-biological coupling that lead to the initiation and termination of blooms in estuaries and shallow coastal waters? Numerical modeling provides one approach to address these questions. Here we present results of simulation experiments with a refined version of Cloern's (1991) model in which mixing processes are treated more realistically to reflect the dynamic nature of turbulence generation in estuaries. We investigated several simple models for the turbulent mixing coefficient. We found that the addition of diurnal tidal variation to Cloern's model greatly reduces biomass growth, indicating that variations in mixing on the time scale of hours are crucial. Furthermore, we found that for conditions representative of South San Francisco Bay, numerical simulations only allowed bloom development when the water column was stratified and when minimal mixing was prescribed in the upper layer. Stratification, however, is itself not sufficient to ensure that a bloom will develop: minimal wind stirring is a further prerequisite to bloom development in shallow turbid estuaries with abundant populations of benthic suspension feeders.
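The bloom criterion at stake can be caricatured by depth-averaging a light-limited growth rate against a lumped loss term, Sverdrup-style (parameters illustrative, not the paper's model):

```python
import numpy as np

k = 2.0        # light attenuation (1/m), turbid estuarine water
H = 8.0        # mixed-layer depth (m)
mu0 = 1.2      # growth rate at surface light (1/day)
losses = 0.45  # respiration + grazing incl. benthic feeders (1/day)

# Depth-averaged growth for exponentially decaying light:
mu_avg = mu0 * (1.0 - np.exp(-k * H)) / (k * H)
net = mu_avg - losses
print(f"net growth = {net:+.2f} 1/day ->", "bloom" if net > 0 else "no bloom")
```

In a well-mixed turbid column the average growth is tiny, which is why stratification (confining cells to a thin, bright upper layer) plus weak upper-layer mixing is the prerequisite the simulations identify.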
NASA Technical Reports Server (NTRS)
Santanello, Joseph A., Jr.; Peters-Lidard, Christa D.; Kumar, Sujay V.; Alonge, Charles; Tao, Wei-Kuo
2009-01-01
Land-atmosphere interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface temperature and moisture states. The degree of coupling between the land surface and PBL in numerical weather prediction and climate models remains largely unexplored and undiagnosed due to the complex interactions and feedbacks present across a range of scales. Further, uncoupled systems or experiments (e.g., the Project for Intercomparison of Land Parameterization Schemes, PILPS) may lead to inaccurate water and energy cycle process understanding by neglecting feedback processes such as PBL-top entrainment. In this study, a framework for diagnosing local land-atmosphere coupling is presented using a coupled mesoscale model with a suite of PBL and land surface model (LSM) options along with observations during field experiments in the U.S. Southern Great Plains. Specifically, the Weather Research and Forecasting (WRF) model has been coupled to the Land Information System (LIS), which provides a flexible and high-resolution representation and initialization of land surface physics and states. Within this framework, the coupling established by each pairing of the available PBL schemes in WRF with the LSMs in LIS is evaluated in terms of the diurnal temperature and humidity evolution in the mixed layer. The co-evolution of these variables and the convective PBL is sensitive to and, in fact, integrative of the dominant processes that govern the PBL budget, which are synthesized through the use of mixing diagrams. Results show how the sensitivity of land-atmosphere interactions to the specific choice of PBL scheme and LSM varies across surface moisture regimes and can be quantified and evaluated against observations. As such, this methodology provides a potential pathway to study factors controlling local land-atmosphere coupling (LoCo) using the LIS-WRF system, which will serve as a testbed for future experiments to evaluate coupling diagnostics within the community.
A Finite Element Model for Mixed Porohyperelasticity with Transport, Swelling, and Growth
Armstrong, Michelle Hine; Buganza Tepole, Adrián; Kuhl, Ellen; Simon, Bruce R.; Vande Geest, Jonathan P.
2016-01-01
The purpose of this manuscript is to establish a unified theory of porohyperelasticity with transport and growth and to demonstrate the capability of this theory using a finite element model developed in MATLAB. We combine the theories of volumetric growth and mixed porohyperelasticity with transport and swelling (MPHETS) to derive a new method that models growth of biological soft tissues. The conservation equations and constitutive equations are developed for both solid-only growth and solid/fluid growth. An axisymmetric finite element framework is introduced for the new theory of growing MPHETS (GMPHETS). To illustrate the capabilities of this model, several example finite element test problems are considered using model geometry and material parameters based on experimental data from a porcine coronary artery. Multiple growth laws are considered, including time-driven, concentration-driven, and stress-driven growth. Time-driven growth is compared against an exact analytical solution to validate the model. For concentration-dependent growth, changing the diffusivity (representing a change in drug) fundamentally changes growth behavior. We further demonstrate that for stress-dependent, solid-only growth of an artery, growth of an MPHETS model results in a more uniform hoop stress than growth in a hyperelastic model for the same amount of growth time using the same growth law. This may have implications in the context of developing residual stresses in soft tissues under intraluminal pressure. To our knowledge, this manuscript provides the first full description of an MPHETS model with growth. The developed computational framework can be used in concert with novel in-vitro and in-vivo experimental approaches to identify the governing growth laws for various soft tissues. PMID:27078495
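For orientation, the standard kinematics of finite volumetric growth that such models build on, in generic form (not taken verbatim from the paper):

```latex
% Multiplicative split of the deformation gradient into elastic and
% growth parts:
\mathbf{F} = \mathbf{F}^{e}\,\mathbf{F}^{g},
\qquad
J = \det\mathbf{F} = J^{e}\,J^{g} ,
% with F^g prescribed by the growth law; e.g. isotropic,
% concentration-driven growth F^g = \vartheta^{1/3}\,\mathbf{I} with
\dot{\vartheta} = k_{c}\,(c - c_{0}) ,
% so that changing the diffusivity governing the concentration c
% (the drug) changes the evolution of the growth stretch \vartheta.
```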
Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A
2018-06-22
Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression, which fits a penalized-likelihood regression over all covariates, and the 'high-dimensional propensity score' (hdPS), which employs a univariate covariate screen. We evaluate the methods on a range of outcome-dependent and outcome-independent metrics. L1-regularized propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
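A minimal sketch of the L1-regularized propensity-score step on synthetic data (scikit-learn shown for concreteness; the study's implementation, tuning, and data differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 200))              # high-dimensional covariates
p_treat = 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.5 * X[:, 1])))
treat = rng.binomial(1, p_treat)

# All covariates enter jointly; the L1 penalty does the selection
# (contrast with hdPS's univariate screen-then-fit approach).
ps_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
ps_model.fit(X, treat)
ps = ps_model.predict_proba(X)[:, 1]

kept = int(np.sum(ps_model.coef_ != 0))
print(f"{kept} covariates retained; PS in [{ps.min():.2f}, {ps.max():.2f}]")
```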
Mosimann, Laura; Traoré, Abdallah; Mauti, Stephanie; Léchenne, Monique; Obrist, Brigit; Véron, René; Hattendorf, Jan; Zinsstag, Jakob
2017-01-01
In the framework of the research network on integrated control of zoonoses in Africa (ICONZ) a dog rabies mass vaccination campaign was carried out in two communes of Bamako (Mali) in September 2014. A mixed method approach, combining quantitative and qualitative tools, was developed to evaluate the effectiveness of the intervention towards optimization for future scale-up. Actions to control rabies occur on one level in households when individuals take the decision to vaccinate their dogs. However, control also depends on provision of vaccination services and community participation at the intermediate level of social resilience. Mixed methods seem necessary as the problem-driven transdisciplinary project includes epidemiological components in addition to social dynamics and cultural, political and institutional issues. Adapting earlier effectiveness models for health intervention to rabies control, we propose a mixed method assessment of individual effectiveness parameters like availability, affordability, accessibility, adequacy or acceptability. Triangulation of quantitative methods (household survey, empirical coverage estimation and spatial analysis) with qualitative findings (participant observation, focus group discussions) facilitate a better understanding of the weight of each effectiveness determinant, and the underlying reasons embedded in the local understandings, cultural practices, and social and political realities of the setting. Using this method, a final effectiveness of 33% for commune Five and 28% for commune Six was estimated, with vaccination coverage of 27% and 20%, respectively. Availability was identified as the most sensitive effectiveness parameter, attributed to lack of information about the campaign. We propose a mixed methods approach to optimize intervention design, using an "intervention effectiveness optimization cycle" with the aim of maximizing effectiveness. Empirical vaccination coverage estimation is compared to the effectiveness model with its determinants. In addition, qualitative data provide an explanatory framework for deeper insight, validation and interpretation of results which should improve the intervention design while involving all stakeholders and increasing community participation. This work contributes vital information for the optimization and scale-up of future vaccination campaigns in Bamako, Mali. The proposed mixed method, although incompletely applied in this case study, should be applicable to similar rabies interventions targeting elimination in other settings. Copyright © 2016 Elsevier B.V. All rights reserved.
Charting the Unknown: A Hunt in the Dark
NASA Astrophysics Data System (ADS)
Mohlabeng, Gopolang Mokoka
Astrophysical and cosmological observations have pointed strongly to the existence of dark matter in the Universe, yet its nature remains elusive. It may be hidden in a vast unknown parameter space in which exhaustively searching for a signal is not feasible. We are, therefore, compelled to consider a robust program based on a wide range of new theoretical ideas and complementary strategies for detection. The aim of this dissertation is to investigate the phenomenology of diverse dark sectors with the objective of understanding and characterizing dark matter. We do so by exploring dark matter phenomenology under three main frameworks of study: (I) the model dependent approach, (II) model independent approach and (III) considering simplified models. In each framework we focus on unexplored and well motivated dark matter scenarios as well as their prospects of detection at current and future experiments. First, we concentrate on the model dependent method where we consider minimal dark matter in the form of mixed fermionic stable states in a gauge extension of the standard model. In particular, we incorporate the fermion mixings governed by gauge invariant interactions with the heavier degrees of freedom. We find that the manner of mixing has an impact on the detectability of the dark matter at experiments. Pursuing this model dependent direction, we explore a space-time extension of the standard model which houses a vector dark matter candidate. We incorporate boundary terms arising from the topology of the model and find that these control the way dark matter may interact with baryonic matter. Next we investigate the model independent approach in which we examine a non-minimal dark sector in the form of boosted dark matter. In this study, we consider an effective field theory involving two stable fermionic states. We probe the sensitivity of this type of dark matter coming from the galactic center and the center of the Sun, and investigate its detection prospects at current and future large volume experiments. Finally, we explore an intermediate approach in the form of a simplified model. Here we analyze a different non-minimal dark sector in which its interactions with the standard model sector are mediated primarily by the Higgs Boson. We discuss for the first time a vector and fermion dark matter preserved under the same stabilization symmetry. We find that the presence of both species in the early Universe results in rare processes contributing to the dark matter relic abundance. We conclude that connecting these three frameworks under one main dark matter program, instead of concentrating on them individually, could help us understand what we are missing, and may assist us to produce ground breaking ideas which lead to the discovery of a signal in the near future.
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel mixed iterative solution technique for efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and the solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for quasi-static, transient dynamic, and buckling analyses. The global-local analysis procedure, referred to as subelement refinement, is developed in the framework of the mixed iterative solution and presented in detail. The numerically integrated isoparametric elements implemented in the framework are discussed. Methods to filter certain parts of the strain and to project element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
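For reference, the three-field Hu-Washizu functional on which the mixed formulation rests, in its standard form (notation assumed):

```latex
% Displacement u, strain eps, and stress sigma vary independently:
\Pi_{HW}(\mathbf{u},\boldsymbol{\varepsilon},\boldsymbol{\sigma})
 = \int_{\Omega} \Big[ W(\boldsymbol{\varepsilon})
   + \boldsymbol{\sigma} : \big(\nabla^{s}\mathbf{u}
   - \boldsymbol{\varepsilon}\big) \Big]\, d\Omega
 - \int_{\Omega} \mathbf{b}\cdot\mathbf{u}\, d\Omega
 - \int_{\Gamma_t} \bar{\mathbf{t}}\cdot\mathbf{u}\, d\Gamma ,
% whose stationarity conditions recover equilibrium, the constitutive
% law, and strain-displacement compatibility; treating all three
% fields independently is what permits nodal interpolation of
% displacements, strains, and stresses alike.
```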
NASA Astrophysics Data System (ADS)
Tsuruoka, Takaaki; Miyanaga, Ayumi; Ohhashi, Takashi; Hata, Manami; Takashima, Yohei; Akamatsu, Kensuke
2017-09-01
A simple composition control route to mixed-lanthanide metal-organic frameworks (MOFs) was developed based on an interfacial reaction with mixed-lanthanide metal ion-doped polymer substrates. By controlling the composition of lanthanide ion (Eu3+ and Tb3+) dopants in polymer substrates to be used as metal ion precursors and scaffolding for the formation of MOFs, [EuxTb2-x(bdc)3(H2O)4]n crystals with a tunable metal composition could be routinely prepared on polymer substrates. Inductively coupled plasma (ICP) measurements revealed that the composition of the obtained frameworks was almost the same as that of the initial polymer substrates. In addition, the resulting [EuxTb2-x(bdc)3(H2O)4]n crystals showed strong phosphorescence because of Eu3+ transitions, indicating that the energy transfer from Tb3+ to Eu3+ ions in the frameworks could be achieved with high efficiency.
NASA Technical Reports Server (NTRS)
King, Sun-Kun
1996-01-01
The variances of the quantum-mechanical noise in a two-input-port Michelson interferometer within the framework of the Loudon-Ni model were solved exactly in two general cases: (1) one coherent-state input and one squeezed-state input, and (2) two photon-number-state inputs. The low-intensity limit, exponentially decaying signals, and the noise due to mixing are discussed briefly.
Designing a mixed methods study in primary care.
Creswell, John W; Fetters, Michael D; Ivankova, Nataliya V
2004-01-01
Mixed methods or multimethod research holds potential for rigorous, methodologically sound investigations in primary care. The objective of this study was to use criteria from the literature to evaluate 5 mixed methods studies in primary care and to advance 3 models useful for designing such investigations. We first identified criteria from the social and behavioral sciences to analyze mixed methods studies in primary care research. We then used the criteria to evaluate 5 mixed methods investigations published in primary care research journals. Of the 5 studies analyzed, 3 included a rationale for mixing based on the need to develop a quantitative instrument from qualitative data or to converge information to best understand the research topic. Quantitative data collection involved structured interviews, observational checklists, and chart audits that were analyzed using descriptive and inferential statistical procedures. Qualitative data consisted of semistructured interviews and field observations that were analyzed using coding to develop themes and categories. The studies showed diverse forms of priority: equal priority, qualitative priority, and quantitative priority. Data collection involved quantitative and qualitative data gathered both concurrently and sequentially. The integration of the quantitative and qualitative data in these studies occurred between data analysis from one phase and data collection from a subsequent phase, while analyzing the data, and when reporting the results. We recommend instrument-building, triangulation, and data transformation models for mixed methods designs as useful frameworks to add rigor to investigations in primary care. We also discuss the limitations of our study and the need for future research.
Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando
2017-01-01
Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. This corresponded with the qualitative component, which augmented the quantitative findings; trainees' user feedback was used to perform iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design, and consistent evaluation techniques from conceptual, development, and implementation stages, fully immersive simulation techniques for cardiovascular specialities are achievable and have the potential to be implemented more broadly.
Robust, Adaptive Functional Regression in Functional Mixed Model Framework
Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.
2012-01-01
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and using other invertible transformations as alternatives to wavelets. PMID:22308015
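A sketch of the wavelet-domain strategy only, on synthetic curves (the robust hierarchical priors and MCMC sampler are omitted; PyWavelets assumed for the transform):

```python
import numpy as np
import pywt

# 40 observed functions on a common 256-point grid (synthetic).
curves = np.random.randn(40, 256)

# Transform each curve; the mixed model is then fitted independently
# to each wavelet coefficient (columns of W), which is where the
# sparsity and adaptive-shrinkage properties the paper exploits live.
coeff_lists = [pywt.wavedec(c, "db4", level=4) for c in curves]
W = np.array([np.concatenate(cl) for cl in coeff_lists])

# ...fit (robust) mixed models column-wise on W here...

# The transform is invertible, so fitted effects map back to curves.
recon = pywt.waverec(coeff_lists[0], "db4")[:256]
print(np.allclose(recon, curves[0]))
```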
Assessing Discriminative Performance at External Validation of Clinical Prediction Models
Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.
2016-01-01
Introduction: External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods: We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results: The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion: The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
Martin, Jordan S; Suarez, Scott A
2017-08-01
Interest in quantifying consistent among-individual variation in primate behavior, also known as personality, has grown rapidly in recent decades. Although behavioral coding is the most frequently utilized method for assessing primate personality, limitations in current statistical practice prevent researchers from utilizing the full potential of their coding datasets. These limitations include the use of extensive data aggregation, not modeling biologically relevant sources of individual variance during repeatability estimation, not partitioning between-individual (co)variance prior to modeling personality structure, the misuse of principal component analysis, and an over-reliance upon exploratory statistical techniques to compare personality models across populations, species, and data collection methods. In this paper, we propose a statistical framework for primate personality research designed to address these limitations. Our framework synthesizes recently developed mixed-effects modeling approaches for quantifying behavioral variation with an information-theoretic model selection paradigm for confirmatory personality research. After detailing a multi-step analytic procedure for personality assessment and model comparison, we employ this framework to evaluate seven models of personality structure in zoo-housed bonobos (Pan paniscus). We find that differences between sexes, ages, zoos, time of observation, and social group composition contributed significant behavioral variance. Independently of these factors, however, personality nonetheless accounted for a moderate to high proportion of variance in average behavior across observational periods. A personality structure derived from past rating research receives the strongest support relative to our model set. This model suggests that personality variation across the measured behavioral traits is best described by two correlated but distinct dimensions reflecting individual differences in affiliation and sociability (Agreeableness) as well as activity level, social play, and neophilia toward non-threatening stimuli (Openness). These results underscore the utility of our framework for quantifying personality in primates and facilitating greater integration between the behavioral ecological and comparative psychological approaches to personality research. © 2017 Wiley Periodicals, Inc.
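The adjusted-repeatability idea can be sketched with simulated data: fit a mixed model with a fixed effect for a biologically relevant covariate and a random intercept per individual, then take the ratio of between-individual variance to total variance. This is a minimal sketch using statsmodels, not the authors' analysis pipeline; all numbers are invented.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_ind, n_obs = 40, 8
    ind = np.repeat(np.arange(n_ind), n_obs)
    age = rng.normal(0, 1, n_ind)[ind]            # individual-level covariate
    intercepts = rng.normal(0, 1.0, n_ind)[ind]   # consistent individual differences
    y = 0.5*age + intercepts + rng.normal(0, 1.0, n_ind*n_obs)
    df = pd.DataFrame({"y": y, "age": age, "id": ind})

    fit = smf.mixedlm("y ~ age", df, groups=df["id"]).fit()
    v_id, v_res = fit.cov_re.iloc[0, 0], fit.scale
    print("adjusted repeatability:", round(v_id / (v_id + v_res), 3))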
A VGI data integration framework based on linked data model
NASA Astrophysics Data System (ADS)
Wan, Lin; Ren, Rongrong
2015-12-01
This paper addresses geographic data integration and sharing for multiple online VGI data sets. We propose a semantic-enabled framework for a cooperative application environment across online VGI sources, targeting a class of geospatial problems. Based on linked data technologies, a core component of the semantic web, we construct relationship links among geographic features distributed across diverse VGI platforms using linked data modeling methods, deploy these semantic-enabled entities on the web, and eventually form an interconnected geographic data network to support cooperative geospatial applications across multiple VGI data sources. The mapping and transformation from VGI sources to an RDF linked data model is presented to guarantee a uniform data representation model across different online social geographic data sources. We propose a mixed strategy that combines spatial distance similarity and feature name attribute similarity as the measure for comparing and matching geographic features in different VGI data sets, and we apply Markov logic networks to interlink the same entities across VGI-based linked data sets. The automatic generation of a co-reference object identification model from geographic linked data is discussed in detail. The result is a large geographic linked data network spanning loosely coupled VGI web sites. Experiments built on our framework and an evaluation of our method show that the framework is reasonable and practicable.
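The mixed spatial/name similarity measure can be sketched as a weighted combination of a distance score and a string-similarity score. The weights, the 1 km distance cut-off, and the feature records below are illustrative choices, not values from the paper.

    import math
    from difflib import SequenceMatcher

    def haversine_km(lon1, lat1, lon2, lat2):
        lon1, lat1, lon2, lat2 = map(math.radians, (lon1, lat1, lon2, lat2))
        a = (math.sin((lat2 - lat1)/2)**2 +
             math.cos(lat1)*math.cos(lat2)*math.sin((lon2 - lon1)/2)**2)
        return 6371.0 * 2 * math.asin(math.sqrt(a))

    def match_score(f1, f2, w_space=0.6, d_max=1.0):
        # Combined spatial/name similarity in [0, 1]; features farther
        # apart than d_max kilometres get zero spatial credit.
        d = haversine_km(f1["lon"], f1["lat"], f2["lon"], f2["lat"])
        s_space = max(0.0, 1.0 - d / d_max)
        s_name = SequenceMatcher(None, f1["name"].lower(), f2["name"].lower()).ratio()
        return w_space * s_space + (1 - w_space) * s_name

    osm = {"name": "Yellow Crane Tower", "lon": 114.30, "lat": 30.55}
    wiki = {"name": "yellow crane tower (wuhan)", "lon": 114.3009, "lat": 30.5494}
    print(round(match_score(osm, wiki), 3))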
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broeer, Torsten; Fuller, Jason C.; Tuffner, Francis K.
2014-01-31
Electricity generation from wind power and other renewable energy sources is increasing, and their variability introduces new challenges to the power system. The emergence of smart grid technologies in recent years has seen a paradigm shift in redefining the electrical system of the future, in which controlled response of the demand side is used to balance fluctuations and intermittencies from the generation side. This paper presents a modeling framework for an integrated electricity system where loads become an additional resource. The agent-based model represents a smart grid power system integrating generators, transmission, distribution, loads and market. The model incorporates generator and load controllers, allowing suppliers and demanders to bid into a Real-Time Pricing (RTP) electricity market. The modeling framework is applied to represent a physical demonstration project conducted on the Olympic Peninsula, Washington, USA, and validation simulations are performed using actual dynamic data. Wind power is then introduced into the power generation mix illustrating the potential of demand response to mitigate the impact of wind power variability, primarily through thermostatically controlled loads. The results also indicate that effective implementation of Demand Response (DR) to assist integration of variable renewable energy resources requires a diversity of loads to ensure functionality of the overall system.
A Big Bang model of human colorectal tumor growth
Sottoriva, Andrea; Kang, Haeyoun; Ma, Zhicheng; Graham, Trevor A.; Salomon, Matthew P.; Zhao, Junsong; Marjoram, Paul; Siegmund, Kimberly; Press, Michael F.; Shibata, Darryl; Curtis, Christina
2015-01-01
What happens in the early, still undetectable human malignancy is unknown because direct observations are impractical. Here we present and validate a “Big Bang” model, whereby tumors grow predominantly as a single expansion producing numerous intermixed sub-clones that are not subject to stringent selection, and where both public (clonal) and most detectable private (subclonal) alterations arise early during growth. Genomic profiling of 349 individual glands from 15 colorectal tumors revealed the absence of selective sweeps, uniformly high intra-tumor heterogeneity (ITH), and sub-clone mixing in distant regions, as postulated by our model. We also verified the prediction that most detectable ITH originates from early private alterations, and not from later clonal expansions, thus exposing the profile of the primordial tumor. Moreover, some tumors appear born-to-be-bad, with sub-clone mixing indicative of early malignant potential. This new model provides a quantitative framework to interpret tumor growth dynamics and the origins of ITH with significant clinical implications. PMID:25665006
Cheng, Guanhui; Huang, Guohe; Dong, Cong; Xu, Ye; Chen, Xiujuan; Chen, Jiapei
2017-03-01
Due to complexities of heterogeneity, hierarchy, discreteness, and interactions in municipal solid waste management (MSWM) systems such as Beijing, China, a series of socio-economic and eco-environmental problems may emerge or worsen and result in irredeemable damages in the following decades. Meanwhile, existing studies, especially ones focusing on MSWM in Beijing, could hardly reflect these complexities in system simulations and provide reliable decision support for management practices. Thus, a framework of distributed mixed-integer fuzzy hierarchical programming (DMIFHP) is developed in this study for MSWM under these complexities. Beijing is selected as a representative case. The Beijing MSWM system is comprehensively analyzed in many aspects such as socio-economic conditions, natural conditions, spatial heterogeneities, treatment facilities, and system complexities, building a solid foundation for system simulation and optimization. Correspondingly, the MSWM system in Beijing is discretized into 235 grids to reflect spatial heterogeneity. A DMIFHP model, which is a nonlinear programming problem, is constructed to parameterize the Beijing MSWM system. To enable rigorous solution of the model, a solution algorithm is proposed based on the coupling of fuzzy programming and mixed-integer linear programming. Innovations and advantages of the DMIFHP framework are discussed. The optimal MSWM schemes and mechanism revelations are discussed in a companion paper owing to length limitations.
Interventional radiology virtual simulator for liver biopsy.
Villard, P F; Vidal, F P; ap Cenydd, L; Holbrey, R; Pisharody, S; Johnson, S; Bulpitt, A; John, N W; Bello, F; Gould, D
2014-03-01
Training in Interventional Radiology currently uses the apprenticeship model, where clinical and technical skills of invasive procedures are learnt during practice in patients. This apprenticeship training method is increasingly limited by regulatory restrictions on working hours, concerns over patient risk through trainees' inexperience and the variable exposure to case mix and emergencies during training. To address this, we have developed a computer-based simulation of visceral needle puncture procedures. A real-time framework has been built that includes: segmentation, physically based modelling, haptics rendering, pseudo-ultrasound generation and the concept of a physical mannequin. It is the result of a close collaboration between different universities, involving computer scientists, clinicians, clinical engineers and occupational psychologists. The technical implementation of the framework is a robust and real-time simulation environment combining a physical platform and an immersive computerized virtual environment. The face, content and construct validation have been previously assessed, showing the reliability and effectiveness of this framework, as well as its potential for teaching visceral needle puncture. A simulator for ultrasound-guided liver biopsy has been developed. It includes functionalities and metrics extracted from cognitive task analysis. This framework can be useful during training, particularly given the known difficulties in gaining significant practice of core skills in patients.
Tian, Dan; Chen, Qiang; Li, Yue; Zhang, Ying-Hui; Chang, Ze; Bu, Xian-He
2014-01-13
A mixed molecular building block (MBB) strategy for the synthesis of double-walled cage-based porous metal-organic frameworks (MOFs) is presented. By means of this method, two isostructural porous MOFs built from unprecedented double-walled metal-organic octahedra were obtained by introducing two size-matching C3-symmetric molecular building blocks with different rigidities. With their unique framework structures, these MOFs provide, to the best of our knowledge, the first examples of double-walled octahedron-based MOFs. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Mugnai, Alberto; Cooper, Harry J.; Tripoli, Gregory J.; Xiang, Xuwu
1992-01-01
The relationship between emerging microwave brightness temperatures (T(B)s) and vertically distributed mixtures of liquid and frozen hydrometeors was investigated, using a cloud-radiation model, in order to establish the framework for a hybrid statistical-physical rainfall retrieval algorithm. Although strong relationships were found between the T(B) values and various rain parameters, these correlations are misleading in that the T(B)s are largely controlled by fluctuations in the ice-particle mixing ratios, which in turn are highly correlated to fluctuations in liquid-particle mixing ratios. However, the empirically based T(B)-rain-rate (T(B)-RR) algorithms can still be used as tools for estimating precipitation if the hydrometeor profiles used for T(B)-RR algorithms are not specified in an ad hoc fashion.
Carroll, Linda J; Rothe, J Peter
2010-09-01
Like other areas of health research, public health studies of problems such as injuries and injury prevention have seen increasing use of qualitative methods. Likewise, the integration of qualitative and quantitative research (mixed methods) is beginning to assume a more prominent role in public health studies. Mixed methods have great potential for gaining a broad and comprehensive understanding of injuries and their prevention. However, qualitative and quantitative research methods are based on two inherently different paradigms, and their integration requires a conceptual framework that permits the unity of these two methods. We present a theory-driven framework for viewing qualitative and quantitative research, which enables us to integrate them in a conceptually sound and useful manner. This framework has its foundation within the philosophical concept of complementarity, as espoused in the physical and social sciences, and draws on Bergson's metaphysical work on the 'ways of knowing'. Through understanding how data are constructed and reconstructed, and the different levels of meaning that can be ascribed to qualitative and quantitative findings, we can use a mixed-methods approach to gain a conceptually sound, holistic knowledge about injury phenomena that will enhance our development of relevant and successful interventions.
Robust model predictive control for multi-step short range spacecraft rendezvous
NASA Astrophysics Data System (ADS)
Zhu, Shuyi; Sun, Ran; Wang, Jiaolong; Wang, Jihe; Shao, Xiaowei
2018-07-01
This work presents a robust model predictive control (MPC) approach for the multi-step short range spacecraft rendezvous problem. During the specific short range phase concerned, the chaser is supposed to be initially outside the line-of-sight (LOS) cone. Therefore, the rendezvous process naturally includes two steps: the first step is to transfer the chaser into the LOS cone and the second step is to transfer the chaser into the aimed region with its motion confined within the LOS cone. A novel MPC framework named Mixed MPC (M-MPC) is proposed, which combines the Variable-Horizon MPC (VH-MPC) framework and the Fixed-Instant MPC (FI-MPC) framework. The M-MPC framework enables the optimization for the two steps to be implemented jointly rather than artificially separated, and its computation workload is acceptable for the usually low-power processors onboard spacecraft. Then, considering that disturbances including modeling error, sensor noise and thrust uncertainty may induce undesired constraint violations, a robust technique is developed and attached to the M-MPC framework to form a robust M-MPC approach. The robust technique is based on the chance-constrained idea, which ensures that constraints are satisfied with a prescribed probability. It improves upon the robust technique proposed by Gavilan et al. by eliminating unnecessary conservativeness, explicitly incorporating known statistical properties of the navigation uncertainty. The efficacy of the robust M-MPC approach is shown in a simulation study.
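A deliberately simplified sketch of the variable-horizon idea is given below in Python with cvxpy: for each candidate horizon, solve a fuel-minimizing fixed-horizon problem with LOS-cone constraints and keep the cheapest feasible solution. Unlike the paper's M-MPC, this toy fixes the cone-entry instant at the horizon midpoint rather than optimizing it jointly, omits the chance constraints, and uses invented double-integrator dynamics and numbers.

    import cvxpy as cp
    import numpy as np

    dt = 10.0                                        # step [s], illustrative
    A = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]])
    B = np.array([[0.5*dt**2, 0], [0, 0.5*dt**2], [dt, 0], [0, dt]])
    x0 = np.array([-500.0, 200.0, 0.0, 0.0])         # chaser starts outside LOS cone
    tanA = np.tan(np.deg2rad(20))                    # LOS half-angle

    best = None
    for N in range(5, 41, 5):                        # variable horizon: try each N
        x, u = cp.Variable((4, N + 1)), cp.Variable((2, N))
        cons = [x[:, 0] == x0, x[:, N] == 0]
        for k in range(N):
            cons += [x[:, k+1] == A @ x[:, k] + B @ u[:, k],
                     cp.norm(u[:, k], 1) <= 0.5]     # thrust budget per step
        for k in range(N // 2, N + 1):               # cone enforced in 2nd step only
            cons += [cp.abs(x[1, k]) <= -tanA * x[0, k]]
        prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(u)) + 0.01*N), cons)
        prob.solve()
        if prob.status == "optimal" and (best is None or prob.value < best[0]):
            best = (prob.value, N)
    print("best (cost, horizon):", best)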
Grefenstette, John J; Brown, Shawn T; Rosenfeld, Roni; DePasse, Jay; Stone, Nathan T B; Cooley, Phillip C; Wheaton, William D; Fyshe, Alona; Galloway, David D; Sriram, Anuroop; Guclu, Hasan; Abraham, Thomas; Burke, Donald S
2013-10-08
Mathematical and computational models provide valuable tools that help public health planners to evaluate competing health interventions, especially for novel circumstances that cannot be examined through observational or controlled studies, such as pandemic influenza. The spread of diseases like influenza depends on the mixing patterns within the population, and these mixing patterns depend in part on local factors including the spatial distribution and age structure of the population, the distribution of size and composition of households, employment status and commuting patterns of adults, and the size and age structure of schools. Finally, public health planners must take into account the health behavior patterns of the population, patterns that often vary according to socioeconomic factors such as race, household income, and education levels. FRED (a Framework for Reconstructing Epidemic Dynamics) is a freely available open-source agent-based modeling system based closely on models used in previously published studies of pandemic influenza. This version of FRED uses open-access census-based synthetic populations that capture the demographic and geographic heterogeneities of the population, including realistic household, school, and workplace social networks. FRED epidemic models are currently available for every state and county in the United States, and for selected international locations. State and county public health planners can use FRED to explore the effects of possible influenza epidemics in specific geographic regions of interest and to help evaluate the effect of interventions such as vaccination programs and school closure policies. FRED is available under a free open source license in order to contribute to the development of better modeling tools and to encourage open discussion of modeling tools being used to evaluate public health policies. We also welcome participation by other researchers in the further development of FRED.
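FRED itself is open source and far richer than any toy, but the kind of computation an agent-based epidemic model performs can be conveyed in a few lines: agents in discrete disease states meet contacts and transmit stochastically. The well-mixed contact structure below stands in for FRED's realistic household/school/workplace networks; all rates are invented.

    import numpy as np

    rng = np.random.default_rng(42)
    N, beta, gamma, contacts = 2000, 0.3, 0.1, 10   # hypothetical parameters
    state = np.zeros(N, dtype=int)                  # 0=S, 1=I, 2=R
    state[rng.choice(N, 5, replace=False)] = 1      # seed infections

    for day in range(120):
        infected = np.where(state == 1)[0]
        # each infected agent meets `contacts` random others per day
        meet = rng.integers(0, N, size=(infected.size, contacts))
        transmit = rng.random(meet.shape) < beta / contacts
        targets = meet[transmit]
        state[targets[state[targets] == 0]] = 1     # only susceptibles infected
        state[infected[rng.random(infected.size) < gamma]] = 2  # recoveries
    print("final attack rate:", (state == 2).mean())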
Van Ael, Evy; De Cooman, Ward; Blust, Ronny; Bervoets, Lieven
2015-01-01
Large datasets of total and dissolved metal concentrations in Flemish (Belgium) fresh water systems and the associated macroinvertebrate-based biotic index MMIF (Multimetric Macroinvertebrate Index Flanders) were used to estimate critical metal concentrations for good ecological water quality, as imposed by the European Water Framework Directive (2000). The contribution of different stressors (metals and water characteristics) to the MMIF was studied by constructing generalized linear mixed effect models. Comparison between the estimated critical concentrations and the European and Flemish EQS shows that the EQS for As, Cd, Cu and Zn seem to be sufficient to reach a good ecological quality status as expressed by the invertebrate-based biotic index. In contrast, the EQS for Cr, Hg and Pb are higher than the estimated critical concentrations, which suggests that when environmental concentrations are at the same level as the EQS, a good quality status might not be reached. The construction of mixed models that included metal concentrations in their structure did not lead to a significant outcome. However, the mixed models showed the primary importance of water characteristics (oxygen level, temperature, ammonium concentration and conductivity) for the MMIF. Copyright © 2014 Elsevier Ltd. All rights reserved.
Partial dynamical symmetry and the vibrational structure of Cd isotopes
NASA Astrophysics Data System (ADS)
Leviatan, A.; Gavrielov, N.; García-Ramos, J. E.; Van Isacker, P.
2018-05-01
The recently reported deviations of selected non-yrast states in 110Cd from the expected spherical-vibrator behaviour are addressed by means of a Hamiltonian with U(5) partial dynamical symmetry. The latter preserves the U(5) symmetry in a segment of the spectrum and breaks it in other states. The effect of intruder states is treated in the framework of the interacting boson model with configuration mixing.
Mixed mechanisms of multi-site phosphorylation
Suwanmajo, Thapanar; Krishnan, J.
2015-01-01
Multi-site phosphorylation is ubiquitous in cell biology and has been widely studied experimentally and theoretically. The underlying chemical modification mechanisms are typically assumed to be distributive or processive. In this paper, we study the behaviour of mixed mechanisms that can arise either because phosphorylation and dephosphorylation involve different mechanisms or because phosphorylation and/or dephosphorylation can occur through a combination of mechanisms. We examine a hierarchy of models to assess chemical information processing through different mixed mechanisms, using simulations, bifurcation analysis and analytical work. We demonstrate how mixed mechanisms can show important and unintuitive differences from pure distributive and processive mechanisms, in some cases resulting in monostable systems with simple dose–response behaviour, while in other cases generating new behaviour such as oscillations. Our results also suggest patterns of information processing that are relevant as the number of modification sites increases. Overall, our work creates a framework to examine information processing arising from complexities of multi-site modification mechanisms and their impact on signal transduction. PMID:25972433
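A caricature of the distinction can be written as mass-action ODEs for a two-site substrate: in the distributive case the kinase must bind twice (S0 -> S1 -> S2), while in one mixed case a processive kinase converts S0 directly to S2 against a distributive phosphatase. The rate constants below are effective first-order activities, not parameters from the paper's hierarchy of models.

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, s, K, P, mech):
        s0, s1, s2 = s
        if mech == "distributive":          # two separate kinase encounters
            return [-K*s0 + P*s1,
                    K*s0 - K*s1 + P*s2 - P*s1,
                    K*s1 - P*s2]
        else:                               # mixed: processive kinase (S0 -> S2),
            return [-K*s0 + P*s1,           # distributive phosphatase
                    P*s2 - P*s1,
                    K*s0 - P*s2]

    for mech in ("distributive", "mixed"):
        dose = np.logspace(-2, 2, 9)        # kinase activity sweep
        resp = [solve_ivp(rhs, (0, 500), [1, 0, 0],
                          args=(K, 1.0, mech)).y[2, -1] for K in dose]
        print(mech, np.round(resp, 3))      # steady-state fully phosphorylated fraction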
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited), and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space-variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.
Estimating prefledging survival: Allowing for brood mixing and dependence among brood mates
Flint, Paul L.; Pollock, Kenneth H.; Thomas, Dana; Sedinger, James S.
1995-01-01
Estimates of juvenile survival from hatch to fledging provide important information on waterfowl productivity. We develop a model for estimating survival of young waterfowl from hatch to fledging. Our model enables interchange of individuals among broods and relaxes the assumption that individuals within broods have independent survival probabilities. The model requires repeated observations of individually identifiable adults and their offspring that are not individually identifiable. A modified Kaplan-Meier procedure (Pollock et al. 1989a,b) and a modified Mayfield procedure (Mayfield 1961, 1975; Johnson 1979) can be used under this general modeling framework, and survival rates and corresponding variances of the point estimators can be determined.
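The paper's contribution is precisely to relax the independence assumption of standard survival estimators under brood mixing; for orientation, a plain Kaplan-Meier estimator (which does assume independent fates, and would understate variance for correlated brood mates) looks like the following sketch with invented observation data.

    import numpy as np

    def kaplan_meier(event_day, observed, horizon=30):
        # S(t) from daily risk sets; observed=False marks right-censoring
        # (e.g., a duckling last seen alive on that day).
        surv, s = [], 1.0
        for t in range(1, horizon + 1):
            at_risk = np.sum(event_day >= t)
            deaths = np.sum((event_day == t) & observed)
            if at_risk > 0:
                s *= 1.0 - deaths / at_risk
            surv.append(s)
        return np.array(surv)

    rng = np.random.default_rng(3)
    day = rng.integers(1, 31, 60)        # hypothetical death/last-seen days
    obs = rng.random(60) < 0.7           # 70% are confirmed deaths
    print("S(30) =", round(kaplan_meier(day, obs)[-1], 3))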
A Monte-Carlo Analysis of Organic Volatility with Aerosol Microphysics
NASA Astrophysics Data System (ADS)
Gao, Chloe; Tsigaridis, Kostas; Bauer, Susanne E.
2017-04-01
A newly developed box model, MATRIX-VBS, includes the volatility-basis set (VBS) framework in the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations and aerosol mixing state. The new scheme advances the representation of organic aerosols in models by improving on the traditional, simplistic treatment of organic aerosols as non-volatile and with a fixed size distribution. Further development includes adding the condensation of organics on coarse-mode aerosols - dust and sea salt - thus making all organics in the system semi-volatile. To test and simplify the model, a Monte-Carlo analysis is performed to pinpoint which processes affect organics the most under varied chemical and meteorological conditions. Since the model's parameterizations can capture a very wide range of conditions, all possible scenarios on Earth across the whole parameter space, including temperature, humidity, location, emissions and oxidant levels, are examined. The Monte-Carlo simulations provide quantitative information on the sensitivity of the newly developed model and help us understand how organics affect the size distribution, mixing state and volatility distribution under varying meteorological conditions and pollution levels. In addition, these simulations indicate which parameters play a critical role in the aerosol distribution and evolution in the atmosphere and which do not, facilitating the simplification of the box model, an important step in its implementation in the global model GISS ModelE as a module.
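The Monte-Carlo screening machinery can be illustrated generically (this is not the MATRIX-VBS code): sample inputs over Earth-like ranges, push them through a simple single-bin partitioning response, and rank input influence with Spearman correlations. The enthalpy, C* value, and ranges below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 10000
    T = rng.uniform(230, 310, n)                 # temperature [K]
    coa = 10**rng.uniform(-2, 2, n)              # absorbing organic mass [ug m-3]
    cstar_298, dH, R = 1.0, 100e3, 8.314         # one volatility bin, assumed values

    # Clausius-Clapeyron shift of C* with temperature, then equilibrium fraction
    cstar = cstar_298 * (298/T) * np.exp(dH/R * (1/298 - 1/T))
    fp = 1.0 / (1.0 + cstar / coa)               # particle-phase fraction

    def rankcorr(a, b):                          # Spearman via double argsort
        ra, rb = a.argsort().argsort(), b.argsort().argsort()
        return np.corrcoef(ra, rb)[0, 1]

    print("sensitivity to T:", round(rankcorr(T, fp), 2),
          " to C_OA:", round(rankcorr(coa, fp), 2))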
Evidence from mixed hydrate nucleation for a funnel model of crystallization.
Hall, Kyle Wm; Carpendale, Sheelagh; Kusalik, Peter G
2016-10-25
The molecular-level details of crystallization remain unclear for many systems. Previous work has speculated on the phenomenological similarities between molecular crystallization and protein folding. Here we demonstrate that molecular crystallization can involve funnel-shaped potential energy landscapes through a detailed analysis of mixed gas hydrate nucleation, a prototypical multicomponent crystallization process. Through this, we contribute both: (i) a powerful conceptual framework for exploring and rationalizing molecular crystallization, and (ii) an explanation of phenomenological similarities between protein folding and crystallization. Such funnel-shaped potential energy landscapes may be typical of broad classes of molecular ordering processes, and can provide a new perspective for both studying and understanding these processes.
Smith, Emma M; Gowran, Rosemary Joan; Mannan, Hasheem; Donnelly, Brian; Alvarez, Liliana; Bell, Diane; Contepomi, Silvana; Ennion Wegner, Liezel; Hoogerwerf, Evert-Jan; Howe, Tracey; Jan, Yih-Kuen; Kagwiza, Jeanne; Layton, Natasha; Ledgerd, Ritchard; MacLachlan, Malcolm; Oggero, Giulia; Pettersson, Cecilia; Pousada, Thais; Scheffler, Elsje; Wu, Sam
2018-05-17
This paper reviews the current capacity of personnel in enabling access to assistive technology (AT) as well as the systems and processes within which they work, and was reviewed, discussed, and refined during and following the Global Research, Innovation, and Education in Assistive Technology (GREAT) Summit. Key concepts addressed include a person-centred team approach; sustainability indicators to monitor, measure, and respond to needs for service design and delivery; education, research, and training for competent practice, using the six rehab-workforce challenges framework; and credentialing frameworks. We propose development of a competence framework and associated education and training programs, and development and implementation of a certification framework for AT personnel. There is a resolve to address the challenges faced by people globally in accessing assistive technology. Context-specific needs assessment is required to understand the AT personnel landscape, and to shape and strengthen credentialing frameworks through competencies and certification, acknowledging both general and specific skill-mix requirements. Implications for Rehabilitation: Personnel in AT provision should be trained using a person-centred team approach, which emphasizes an appropriate skill mix to address multiple needs within the community. Sustainability indicators should be used which allow personnel to monitor, measure, and respond to needs for service design and delivery. A competence framework with associated education and training programs, coupled with the development and implementation of a certification framework for AT personnel, will promote quality in AT personnel training globally.
Binary encoding of multiplexed images in mixed noise.
Lalush, David S
2008-09-01
Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
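The noise model described can be made concrete in a short sketch (not the paper's code): for measurements y = Ax + n with var(n_i) = a constant term plus a term proportional to the signal, the decoded per-pixel variance follows from the squared entries of A⁻¹. Comparing the order-7 Hadamard S-matrix against raster scanning reproduces the familiar trade-off; the signal level and noise parameters are illustrative.

    import numpy as np

    def decoded_var(A, x, sig_c2=1.0, sig_p=0.0):
        # per-pixel variance after decoding y = A x + n,
        # with var(n_i) = sig_c2 + sig_p * (A x)_i  (constant + proportional)
        Ainv = np.linalg.inv(A)
        v = sig_c2 + sig_p * (A @ x)
        return (Ainv**2) @ v

    row = np.array([1, 1, 1, 0, 1, 0, 0])        # order-7 S-matrix generator
    S = np.array([np.roll(row, k) for k in range(7)])
    I = np.eye(7)
    x = np.full(7, 10.0)                          # uniform object, 10 counts/pixel

    for label, sig_p in [("constant-dominated", 0.0), ("proportional-dominated", 1.0)]:
        print(label,
              " S-matrix:", round(decoded_var(S, x, 1.0, sig_p).mean(), 3),
              " raster:", round(decoded_var(I, x, 1.0, sig_p).mean(), 3))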
s-Processing from MHD-induced mixing and isotopic abundances in presolar SiC grains
NASA Astrophysics Data System (ADS)
Palmerini, S.; Trippella, O.; Busso, M.; Vescovi, D.; Petrelli, M.; Zucchini, A.; Frondini, F.
2018-01-01
In past years, the observational evidence that s-process elements from Sr to Pb are produced by stars ascending the so-called Asymptotic Giant Branch (or "AGB") could not be explained by self-consistent models, forcing researchers to resort to extensive parameterizations. The crucial point is to understand how protons can be injected from the envelope into the He-rich layers, yielding the formation of 13C and then the activation of the 13C(α,n)16O reaction. Only recently have attempts to solve this problem started to consider quantitatively physically-based mixing mechanisms. Among them, MHD processes in the plasma were suggested to yield mass transport through magnetic buoyancy. In this framework, we compare results of nucleosynthesis models for Low Mass AGB Stars (M ≲ 3 M⊙), developed from the MHD scenario, with the record of isotopic abundance ratios of s-elements in presolar SiC grains, which were shown to offer precise constraints on the 13C reservoir. We find that n-captures driven by magnetically-induced mixing can indeed account for the SiC data quite well, and that this is because our 13C distribution fulfils the above constraints rather accurately. We suggest that similar tests should now be performed using different physical models for mixing. Such comparisons would decisively improve our understanding of the formation of the neutron source.
Adjusting case mix payment amounts for inaccurately reported comorbidity data.
Sutherland, Jason M; Hamm, Jeremy; Hatcher, Jeff
2010-03-01
Case mix methods such as diagnosis related groups have become a basis of payment for inpatient hospitalizations in many countries. Specifying cost weight values for case mix system payment has important consequences; recent evidence suggests that case mix cost weight inaccuracies influence the supply of some hospital-based services. To begin to address the question of case mix cost weight accuracy, this paper aims to improve the accuracy of cost weight values in the presence of inaccurate or incomplete comorbidity data. The methods are suitable for case mix methods that incorporate disease severity or comorbidity adjustments. They are based on the availability of detailed clinical and cost information linked at the patient level, and leverage recent results from clinical data audits. A Bayesian framework is used to synthesize clinical data audit information regarding misclassification probabilities into cost weight value calculations. The models are implemented through Markov chain Monte Carlo methods. An example used to demonstrate the methods finds that inaccurate comorbidity data bias cost weight values (and payments) downward. The implications for hospital payments are discussed and the generalizability of the approach is explored.
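The flavor of the Bayesian synthesis can be shown with a toy rejection sampler (a stand-in for the paper's MCMC, with invented numbers): audit-derived beta priors on coding sensitivity and specificity are combined with the observed coded count to recover the true comorbidity prevalence that cost weights should reflect.

    import numpy as np

    rng = np.random.default_rng(11)
    n, coded = 1000, 180                    # hypothetical: 18% coded with comorbidity
    p_hat = coded / n
    draws = []
    for _ in range(20000):
        sens = rng.beta(80, 20)             # audit: ~80% of true cases are coded
        spec = rng.beta(97, 3)              # ~97% of non-cases correctly uncoded
        theta = rng.uniform(0, 1)           # true prevalence, flat prior
        p_coded = theta*sens + (1 - theta)*(1 - spec)
        # accept with probability = binomial likelihood / its maximum
        loglike = (coded*np.log(p_coded/p_hat)
                   + (n - coded)*np.log((1 - p_coded)/(1 - p_hat)))
        if rng.random() < np.exp(loglike):
            draws.append(theta)
    print("posterior mean true prevalence:", round(np.mean(draws), 3))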
3D numerical simulations of oblique droplet impact onto a deep liquid pool
NASA Astrophysics Data System (ADS)
Gelderblom, Hanneke; Reijers, Sten A.; Gielen, Marise; Sleutel, Pascal; Lohse, Detlef; Xie, Zhihua; Pain, Christopher C.; Matar, Omar K.
2017-11-01
We study the fluid dynamics of three-dimensional oblique droplet impact, which results in phenomena that include splashing and cavity formation. An adaptive, unstructured mesh modelling framework is employed here, which can modify and adapt unstructured meshes to better represent the underlying physics of droplet dynamics and reduce computational effort without sacrificing accuracy. The numerical framework consists of a mixed control-volume and finite-element formulation and a volume-of-fluid-type method for interface capturing based on a compressive control-volume advection method. The framework also features second-order finite-element methods and a force-balanced algorithm for the surface tension implementation, minimising the spurious velocities often found in simulations involving capillary-driven flows. The numerical results generated using this framework are compared with high-speed images of the interfacial shapes of the deformed droplet and the cavity formed upon impact, yielding good agreement. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).
Waste biomass toward hydrogen fuel supply chain management for electricity: Malaysia perspective
NASA Astrophysics Data System (ADS)
Zakaria, Izatul Husna; Ibrahim, Jafni Azhan; Othman, Abdul Aziz
2016-08-01
Green energy is becoming an important aspect of every country in the world toward energy security, by reducing dependence on fossil fuel imports and enhancing quality of life through a healthy environment. This conceptual paper is an approach toward determining the physical flow characteristics of waste wood biomass from large-scale plantations for producing gas fuel for electricity using the gasification technique. The scope of this study is supply chain management of syngas fuel from wood waste biomass using direct gasification conversion technology. The literature reviewed covers energy security, Malaysia's energy mix, biomass supply chain management, and conversion technology. This paper uses the theoretical framework of a model of transportation (Lumsden, 2006) and the function of the terminal (Hulten, 1997) for research purposes. To incorporate the unique properties of biomass, Biomass Element Life Cycle Analysis (BELCA), a novel technique developed to understand the behaviour of biomass supply, is employed. The theoretical frameworks used to answer the research questions are the Supply Chain Operations Reference (SCOR) framework and the sustainable strategy development in supply chain management framework.
The Case for Case-Mix: A New Construct for Hospital Management
Plomann, Marilyn Peacock; Garzino, Fred R.
1981-01-01
Case-mix is a useful methodology for health care management, planning and control. It provides managers with a powerful tool by providing a framework for relating resource consumption profiles with specific treatment patterns. In the long run, it will assist hospital planners in analyzing the demands which different classes of patients bring to the hospital. Decisions concerning capital financing, facilities planning, new services, and the medical and financial implications of physician activities are more efficiently analyzed within a case-mix framework. In the near term, inventory management, staffing policies and the on-going need for the astute management of cash flow will be positively and decisively affected by the use of case-mix measures. The benefits derived from a case-mix system are not limited to hospitals possessing sophisticated management information systems. The case-mix methodology also provides a useful tool for hospitals with less advanced data processing systems and management practices in applying a variety of management science techniques to their planning and control activities.
Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame
NASA Astrophysics Data System (ADS)
Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank
2017-10-01
This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, additional test data are required to draw general conclusions.
Designing A Mixed Methods Study In Primary Care
Creswell, John W.; Fetters, Michael D.; Ivankova, Nataliya V.
2004-01-01
BACKGROUND Mixed methods or multimethod research holds potential for rigorous, methodologically sound investigations in primary care. The objective of this study was to use criteria from the literature to evaluate 5 mixed methods studies in primary care and to advance 3 models useful for designing such investigations. METHODS We first identified criteria from the social and behavioral sciences to analyze mixed methods studies in primary care research. We then used the criteria to evaluate 5 mixed methods investigations published in primary care research journals. RESULTS Of the 5 studies analyzed, 3 included a rationale for mixing based on the need to develop a quantitative instrument from qualitative data or to converge information to best understand the research topic. Quantitative data collection involved structured interviews, observational checklists, and chart audits that were analyzed using descriptive and inferential statistical procedures. Qualitative data consisted of semistructured interviews and field observations that were analyzed using coding to develop themes and categories. The studies showed diverse forms of priority: equal priority, qualitative priority, and quantitative priority. Data collection involved quantitative and qualitative data gathered both concurrently and sequentially. The integration of the quantitative and qualitative data in these studies occurred between data analysis from one phase and data collection from a subsequent phase, while analyzing the data, and when reporting the results. DISCUSSION We recommend instrument-building, triangulation, and data transformation models for mixed methods designs as useful frameworks to add rigor to investigations in primary care. We also discuss the limitations of our study and the need for future research. PMID:15053277
Chiabai, Aline; Quiroga, Sonia; Martinez-Juarez, Pablo; Higgins, Sahran; Taylor, Tim
2018-09-01
This paper addresses the impact that changes in natural ecosystems can have on health and wellbeing, focusing on the potential co-benefits that green spaces could provide when introduced as climate change adaptation measures. Ignoring such benefits could lead to sub-optimal planning and decision-making. A conceptual framework, building on the ecosystem-enriched Driver, Pressure, State, Exposure, Effect, Action model (eDPSEEA), is presented to aid in clarifying the relational structure between green spaces and human health, taking climate change as the key driver. The study has the double intention of (i) summarising the literature with a special emphasis on the ecosystem and health perspectives, as well as the main theories behind these impacts, and (ii) modelling these findings into a framework that allows for multidisciplinary approaches to the underlying relations between human health and green spaces. The paper shows that while the literature based on the ecosystem perspective presents a well-documented association between climate, health and green spaces, the literature using a health-based perspective presents mixed evidence in some cases. The role of contextual factors and the exposure mechanism are rarely addressed. The proposed framework could serve as a multidisciplinary knowledge platform for multi-perspective analysis and discussion among experts and stakeholders, as well as to support the operationalization of quantitative assessment and modelling exercises. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Gao, Chloe Y.; Tsigaridis, Kostas; Bauer, Susanne E.
2017-01-01
The gas-particle partitioning and chemical aging of semi-volatile organic aerosol are presented in a newly developed box model scheme, where their effect on the growth, composition, and mixing state of particles is examined. The volatility-basis set (VBS) framework is implemented into the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations in multiple mixing-state classes. The new scheme, MATRIX-VBS, has the potential to significantly advance the representation of organic aerosols in Earth system models by improving upon the conventional representation as non-volatile particulate organic matter, often also with an assumed fixed size distribution. We present results from idealized cases representing Beijing, Mexico City, a Finnish forest, and a southeastern US forest, and investigate the evolution of mass concentrations and volatility distributions for organic species across the gas and particle phases, as well as assessing their mixing state among aerosol populations. Emitted semi-volatile primary organic aerosols evaporate almost completely in the intermediate-volatility range, while they remain in the particle phase in the low-volatility range. Their volatility distribution at any point in time depends on the applied emission factors, oxidation by OH radicals, and temperature. We also compare against parallel simulations with the original scheme, which represented only the particulate and non-volatile component of the organic aerosol, examining how differently the condensed-phase organic matter is distributed across the mixing states in the model. The results demonstrate the importance of representing organic aerosol as semi-volatile and of explicitly calculating the partitioning of organic species between the gas and particulate phases.
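The equilibrium split underlying any VBS scheme follows the standard partitioning relation F_i = (1 + C*_i / C_OA)^-1, with the absorbing mass C_OA solved self-consistently. The sketch below is a generic fixed-point solver over illustrative bins, not code from MATRIX-VBS.

    import numpy as np

    def partition(c_tot, c_star, seed_abs=0.1, tol=1e-10):
        # Equilibrium gas/particle split over VBS bins:
        # F_i = (1 + C*_i / C_OA)^-1, with C_OA found by fixed-point iteration.
        c_oa = seed_abs + 0.5 * c_tot.sum()      # initial guess
        for _ in range(200):
            f = 1.0 / (1.0 + c_star / c_oa)
            new = seed_abs + (f * c_tot).sum()
            if abs(new - c_oa) < tol:
                break
            c_oa = new
        return f, c_oa

    c_star = 10.0 ** np.arange(-2, 4)            # bins: 0.01 ... 1000 ug m-3
    c_tot = np.full(6, 1.0)                      # 1 ug m-3 total in each bin
    f, c_oa = partition(c_tot, c_star)
    print(np.round(f, 3), round(c_oa, 3))        # particle fractions and C_OA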
Scientist role models in the classroom: how important is gender matching?
NASA Astrophysics Data System (ADS)
Conner, Laura D. Carsten; Danielson, Jennifer
2016-10-01
Gender-matched role models are often proposed as a mechanism to increase identification with science among girls, with the ultimate aim of broadening participation in science. While there is a great deal of evidence suggesting that role models can be effective, there is mixed support in the literature for the importance of gender matching. We used the Eccles Expectancy Value model as a framework to explore how female science role models impact a suite of factors that might predict future career choice among elementary students. We predicted that impacts of female scientist role models would be more pronounced among girls than among boys, as such role models have the potential to normalise what is often perceived as a gender-deviant role. Using a mixed-methods approach, we found that ideas about scientists, self-concept towards science, and level of science participation changed equally across both genders, contrary to our prediction. Our results suggest that engaging in authentic science and viewing the female scientist as personable were keys to changes among students, rather than gender matching between the role model and student. These results imply that scientists in the schools programmes should focus on preparing the visiting scientists in these areas.
Selected topics in high energy physics: Flavon, neutrino and extra-dimensional models
NASA Astrophysics Data System (ADS)
Dorsner, Ilja
There is already significant evidence, both experimental and theoretical, that the Standard Model of elementary particle physics is just another effective physical theory. Thus, it is crucial (a) to anticipate the experiments in the search for signatures of physics beyond the Standard Model, and (b) to determine whether some theoretically preferred structure can reproduce the low-energy signature of the Standard Model. This work pursues these two directions by investigating various extensions of the Standard Model. One of them is a simple flavon model that accommodates the observed hierarchy of the charged fermion masses and mixings. We show that flavor-changing and CP-violating signatures of this model are equally near the present experimental limits. We find that, for a significant range of parameters, mu-e conversion can be the most sensitive place to look for such signatures. We then propose two variants of an SO(10) model in a five-dimensional framework. The first variant demonstrates that one can embed a four-dimensional flipped SU(5) model into a five-dimensional SO(10) model. This allows one to maintain the advantages of flipped SU(5) while avoiding its well-known drawbacks. The second variant shows that exact unification of the gauge couplings is possible even in the higher-dimensional setting. This unification yields low-energy values of the gauge couplings that are in perfect agreement with experimental values. We show that the corrections to the usual four-dimensional running, due to the Kaluza-Klein towers of states, can be unambiguously and systematically evaluated. We also consider the main types of models of neutrino masses and mixings from the point of view of how naturally they give the large mixing angle MSW solution to the solar neutrino problem. Special attention is given to one particular "lopsided" SU(5) model, which is then analyzed in a completely statistical manner. We suggest that this sort of statistical analysis should be applicable to other models of neutrino mixing.
Harris, M.S.; Gayes, P.T.; Kindinger, J.L.; Flocks, J.G.; Krantz, D.E.; Donovan, P.
2005-01-01
Coastal landscapes evolve over wide-ranging spatial and temporal scales in response to physical and biological processes that interact with a wide range of variables. To develop better predictive models for these dynamic areas, we must understand the influence of these variables on coastal morphologies and ultimately how they influence coastal processes. This study defines the influence of geologic framework variability on a classic mixed-energy coastline, and establishes four categorical scales of spatial and temporal influence on the coastal system. The near-surface geologic framework was delineated using high-resolution seismic profiles, shallow vibracores, detailed geomorphic maps, historical shorelines, aerial photographs, and existing studies, and compared to the long- and short-term development of two coastal compartments near Charleston, South Carolina. Although it is clear that the imprint of a mixed-energy tidal and wave signal (basin-scale) dictates formation of drumstick barriers and that immediate responses to wave climate are dramatic, island size, position, and longer-term dynamics are influenced by a series of inherent, complex near-surface stratigraphic geometries. Major near-surface Tertiary geometries influence inlet placement and drainage development (island-scale) through multiple interglacial cycles and overall channel morphology (local-scale). During the modern marine transgression, the halo of ebb-tidal deltas greatly influences inlet region dynamics, while truncated beach ridges and exposed, differentially erodible Cenozoic deposits in the active system influence historical shoreline dynamics and active shoreface morphologies (block-scale). This study concludes that the mixed-energy imprint of wave and tide theories dominates general coastal morphology, but that underlying stratigraphic influences on the coast provide site-specific, long-standing imprints on coastal evolution.
An operational global-scale ocean thermal analysis system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, R. M.; Pollak, K.D.; Phoebus, P.A.
1990-04-01
The Optimum Thermal Interpolation System (OTIS) is an ocean thermal analysis system designed for operational use at FNOC. It is based on the optimum interpolation data assimilation technique and functions in an analysis-prediction-analysis data assimilation cycle with the TOPS mixed-layer model. OTIS provides a rigorous framework for combining real-time data, climatology, and predictions from numerical ocean prediction models to produce a large-scale synoptic representation of ocean thermal structure. The techniques and assumptions used in OTIS are documented and results of operational tests of global-scale OTIS at FNOC are presented. The tests involved comparisons of OTIS against an existing operational ocean thermal structure model and were conducted during February, March, and April 1988. Qualitative comparison of the two products suggests that OTIS gives a more realistic representation of subsurface anomalies and horizontal gradients, and that it also gives a more accurate analysis of the thermal structure, with improvements largest below the mixed layer.
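The core optimum interpolation update combines a background field xb with observations yo via the gain K = BHᵀ(HBHᵀ + R)⁻¹, giving the analysis xa = xb + K(yo − Hxb). The sketch below applies this to a tiny four-point temperature grid with illustrative covariances; it is not OTIS itself.

    import numpy as np

    xb = np.array([15.0, 15.5, 16.0, 16.5])     # background temperatures [deg C]
    H = np.array([[1., 0., 0., 0.],             # observations at grid points 0 and 2
                  [0., 0., 1., 0.]])
    yo = np.array([15.8, 15.9])

    i = np.arange(4)                            # background error covariance with
    B = 0.5**2 * np.exp(-np.abs(i[:, None] - i[None, :]))  # 1-grid e-folding scale
    R = 0.3**2 * np.eye(2)                      # observation error covariance

    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # optimal gain
    xa = xb + K @ (yo - H @ xb)                     # analysis
    print(np.round(xa, 2))                          # corrections spread to unobserved points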
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
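To make the sequential-decision requirements concrete, the toy below enumerates admissible dosage sequences under two illustrative rules (start at the lowest level; change at most one level per switching time) and scores them with an invented outcome model. In the paper's HMPC these rules are encoded as mixed-integer linear constraints and solved by an optimizer; brute-force enumeration is only a stand-in for small problems.

    from itertools import product

    doses = [0, 1, 2, 3]                 # categorical dosage levels, hypothetical
    horizon = 4                          # decision points (switching times)

    def admissible(seq):
        # sequential-decision rules: start at the lowest level and
        # escalate/de-escalate by at most one level per switching time
        if seq[0] != 0:
            return False
        return all(abs(b - a) <= 1 for a, b in zip(seq, seq[1:]))

    def response(seq):
        # toy outcome standing in for the GWG dynamics: diminishing
        # returns in total dose, small penalty per dosage change
        total = sum(seq)
        switches = sum(b != a for a, b in zip(seq, seq[1:]))
        return total - 0.05 * total**2 - 0.1 * switches

    feasible = [s for s in product(doses, repeat=horizon) if admissible(s)]
    best = max(feasible, key=response)
    print(len(feasible), "feasible sequences; best:", best)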
Axion-assisted production of sterile neutrino dark matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berlin, Asher; Hooper, Dan
2017-04-12
Sterile neutrinos can be generated in the early universe through oscillations with active neutrinos and represent a popular and well-studied candidate for our universe's dark matter. Stringent constraints from X-ray and gamma-ray line searches, however, have excluded the simplest of such models. In this letter, we propose a novel alternative to the standard scenario in which the mixing angle between the sterile and active neutrinos is a dynamical quantity, induced through interactions with a light axion-like field. As the energy density of the axion-like particles is diluted by Hubble expansion, the degree of mixing is reduced at late times, suppressing the decay rate and easily alleviating any tension with X-ray or gamma-ray constraints. We present a simple model which illustrates the phenomenology of this scenario, and also describe a framework in which the QCD axion is responsible for the production of sterile neutrinos in the early universe.
Data mining in soft computing framework: a survey.
Mitra, S; Pal, S K; Mitra, P
2002-01-01
The present article provides a survey of the available literature on data mining using soft computing. A categorization has been provided based on the different soft computing tools and their hybridizations used, the data mining function implemented, and the preference criterion selected by the model. The utility of the different soft computing methodologies is highlighted. Generally fuzzy sets are suitable for handling the issues related to understandability of patterns, incomplete/noisy data, mixed media information and human interaction, and can provide approximate solutions faster. Neural networks are nonparametric, robust, and exhibit good learning and generalization capabilities in data-rich environments. Genetic algorithms provide efficient search algorithms to select a model, from mixed media data, based on some preference criterion/objective function. Rough sets are suitable for handling different types of uncertainty in data. Some challenges to data mining and the application of soft computing methodologies are indicated. An extensive bibliography is also included.
Evolutionary dynamics with fluctuating population sizes and strong mutualism.
Chotibut, Thiparat; Nelson, David R
2015-08-01
Game theory ideas provide a useful framework for studying evolutionary dynamics in a well-mixed environment. This approach, however, typically enforces a strictly fixed overall population size, deemphasizing natural growth processes. We study a competitive Lotka-Volterra model, with number fluctuations, that accounts for natural population growth and encompasses interaction scenarios typical of evolutionary games. We show that, in an appropriate limit, the model describes standard evolutionary games with both genetic drift and overall population size fluctuations. However, there are also regimes where a varying population size can strongly influence the evolutionary dynamics. We focus on the strong mutualism scenario and demonstrate that standard evolutionary game theory fails to describe our simulation results. We then analytically and numerically determine fixation probabilities as well as mean fixation times using matched asymptotic expansions, taking into account the population size degree of freedom. These results elucidate the interplay between population dynamics and evolutionary dynamics in well-mixed systems.
Evolutionary dynamics with fluctuating population sizes and strong mutualism
NASA Astrophysics Data System (ADS)
Chotibut, Thiparat; Nelson, David R.
2015-08-01
Game theory ideas provide a useful framework for studying evolutionary dynamics in a well-mixed environment. This approach, however, typically enforces a strictly fixed overall population size, deemphasizing natural growth processes. We study a competitive Lotka-Volterra model, with number fluctuations, that accounts for natural population growth and encompasses interaction scenarios typical of evolutionary games. We show that, in an appropriate limit, the model describes standard evolutionary games with both genetic drift and overall population size fluctuations. However, there are also regimes where a varying population size can strongly influence the evolutionary dynamics. We focus on the strong mutualism scenario and demonstrate that standard evolutionary game theory fails to describe our simulation results. We then analytically and numerically determine fixation probabilities as well as mean fixation times using matched asymptotic expansions, taking into account the population size degree of freedom. These results elucidate the interplay between population dynamics and evolutionary dynamics in well-mixed systems.
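As a rough illustration of the individual-level stochasticity behind such results, the following is a minimal Gillespie-type simulation of a two-species competitive Lotka-Volterra model with demographic noise (parameters are illustrative; this is a sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
r = np.array([1.0, 1.0])                        # per-capita growth rates
a = np.array([[1.0, 0.5], [0.5, 1.0]]) / 1e3    # competition matrix / carrying capacity
N = np.array([10.0, 10.0])                      # initial population sizes
t, t_end = 0.0, 50.0

while t < t_end and N.sum() > 0:
    birth = r * N                               # species i gains one individual
    death = N * (a @ N)                         # species i loses one (competition)
    rates = np.concatenate([birth, death])
    total = rates.sum()
    t += rng.exponential(1.0 / total)           # waiting time to the next event
    k = rng.choice(rates.size, p=rates / total) # which event fires
    N[k % 2] += 1.0 if k < 2 else -1.0

print(f"t = {t:.1f}, population sizes = {N}")
```

Because the total population size fluctuates rather than being held fixed, runs of this kind exhibit the coupling between overall population size and genetic drift that the paper analyzes.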
Axion-assisted production of sterile neutrino dark matter
NASA Astrophysics Data System (ADS)
Berlin, Asher; Hooper, Dan
2017-04-01
Sterile neutrinos can be generated in the early universe through oscillations with active neutrinos and represent a popular and well-studied candidate for our Universe's dark matter. Stringent constraints from X-ray and gamma-ray line searches, however, have excluded the simplest of such models. In this paper, we propose a novel alternative to the standard scenario in which the mixing angle between the sterile and active neutrinos is a dynamical quantity, induced through interactions with a light axionlike field. As the energy density of the axionlike particles is diluted by Hubble expansion, the degree of mixing is reduced at late times, suppressing the decay rate and easily alleviating any tension with X-ray or gamma-ray constraints. We present a simple model which illustrates the phenomenology of this scenario, and also describe a framework in which the QCD axion is responsible for the production of sterile neutrinos in the early universe.
ERIC Educational Resources Information Center
Hales, Patrick Dean
2016-01-01
Mixed methods research becomes more widely utilized in education research every year. As this pluralist paradigm begins to take hold, it becomes more and more necessary to take a critical eye to studies making use of different mixed methods approaches. An area of education research that has thus far struggled to find a foothold with mixed methodology is…
A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at; Tuffaha, Amjad, E-mail: atufaha@aus.edu
We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called “singular estimate control systems” (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004) which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
NASA Astrophysics Data System (ADS)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen; Ovchinnikov, Mikhail
2011-01-01
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling multispecies processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds on linear correlation coefficients are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are populated here using a "cSigma" parameterization that we introduce based on the aforementioned bounds on correlations. The method has three advantages: (1) the computational expense is tolerable; (2) the correlations are, by construction, guaranteed to be consistent with each other; and (3) the methodology is fairly general and hence may be applicable to other problems. The method is tested noninteractively using simulations of three Arctic mixed-phase cloud cases from two field experiments: the Indirect and Semi-Direct Aerosol Campaign and the Mixed-Phase Arctic Cloud Experiment. Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
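The spherical parameterization underlying the method is easy to sketch: each row of the Cholesky factor is a unit vector built from angles, so the product is guaranteed to be a valid correlation matrix. The construction below is the generic Pinheiro-Bates form; the paper's "cSigma" rule for choosing the angles is not reproduced here:

```python
import numpy as np

def corr_from_angles(theta_rows):
    """Spherical (Pinheiro & Bates 1996) parameterization of a correlation
    matrix. theta_rows[i] holds the i+1 angles (in (0, pi)) for row i+1 of
    the Cholesky factor; row 0 is fixed at (1, 0, ..., 0). Each row has unit
    norm, so L @ L.T has a unit diagonal and is positive semi-definite."""
    n = len(theta_rows) + 1
    L = np.zeros((n, n))
    L[0, 0] = 1.0
    for i, angles in enumerate(theta_rows, start=1):
        s = 1.0
        for j, ang in enumerate(angles):
            L[i, j] = np.cos(ang) * s
            s *= np.sin(ang)
        L[i, i] = s
    return L @ L.T

# Example: a 3x3 hydrometeor correlation matrix (angles are illustrative)
R = corr_from_angles([np.array([1.0]), np.array([0.8, 1.2])])
print(np.round(R, 3))
print("positive semi-definite:", np.linalg.eigvalsh(R).min() >= 0)
```

Any choice of angles yields mutually consistent correlations by construction, which is the second advantage the paper lists.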
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamm, James R; Shashkov, Mikhail J
2009-01-01
Despite decades of development, Lagrangian hydrodynamics of strength-free materials presents numerous open issues, even in one dimension. We focus on the problem of closing a system of equations for a two-material cell under the assumption of a single velocity model. There are several existing models and approaches, each possessing different levels of fidelity to the underlying physics and each exhibiting unique features in the computed solutions. We consider the case in which the change in heat in the constituent materials in the mixed cell is assumed equal. An instantaneous pressure equilibration model for a mixed cell can be cast as four equations in four unknowns, comprised of the updated values of the specific internal energy and the specific volume for each of the two materials in the mixed cell. The unique contribution of our approach is a physics-inspired, geometry-based model in which the updated values of the sub-cell, relaxing-toward-equilibrium constituent pressures are related to a local Riemann problem through an optimization principle. This approach couples the modeling problem of assigning sub-cell pressures to the physics associated with the local, dynamic evolution. We package our approach in the framework of a standard predictor-corrector time integration scheme. We evaluate our model using idealized, two-material problems using either ideal-gas or stiffened-gas equations of state and compare these results to those computed with the method of Tipton and with corresponding pure-material calculations.
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are therefore not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed-effect models, such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models), for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command-line interface (using scripts). On the other hand, most practical researchers rely heavily on menu-based or Graphical User Interfaces (GUIs). We develop, using the Shiny framework, a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to run and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM, and HGLM give very similar results.
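The marginal-versus-subject-specific distinction is easy to demonstrate outside the Web-GUI. Here is a hedged sketch in Python's statsmodels (the paper itself wraps R packages) fitting a GEE and a linear mixed model to the same simulated longitudinal data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_obs = 50, 6
subj = np.repeat(np.arange(n_subj), n_obs)
time = np.tile(np.arange(n_obs), n_subj)
u = rng.normal(0.0, 1.0, n_subj)                # random intercepts -> correlated responses
y = 2.0 + 0.5 * time + u[subj] + rng.normal(0.0, 0.5, subj.size)
df = pd.DataFrame({"y": y, "time": time, "subj": subj})

# Marginal model: GEE with an exchangeable working correlation
gee = smf.gee("y ~ time", groups="subj", data=df,
              cov_struct=sm.cov_struct.Exchangeable()).fit()
# Subject-specific model: random-intercept linear mixed model
lmm = smf.mixedlm("y ~ time", data=df, groups="subj").fit()

print(gee.params, lmm.params, sep="\n")         # slopes should be very similar
```

For a Gaussian response with a random intercept, the two approaches estimate essentially the same fixed effects, which matches the paper's observation that GEE, GLMM, and HGLM give very similar results.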
Intergenerational Practice: Contributing to a Conceptual Framework
ERIC Educational Resources Information Center
Vieira, Sacha; Sousa, Liliana
2016-01-01
The ageing of the European population is creating a new demographic mix, increasing the relevance of intergenerational practice (IGP). To date, however, this field lacks an appropriate conceptual framework. This study aims to contribute to such a framework through an integrative review of peer-reviewed papers reporting on IGPs. Fifteen papers were…
Breaking from binaries - using a sequential mixed methods design.
Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan
2014-03-01
To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.
Bayesian estimation of multicomponent relaxation parameters in magnetic resonance fingerprinting.
McGivney, Debra; Deshmane, Anagha; Jiang, Yun; Ma, Dan; Badve, Chaitra; Sloan, Andrew; Gulani, Vikas; Griswold, Mark
2018-07-01
To estimate multiple components within a single voxel in magnetic resonance fingerprinting when the number and types of tissues comprising the voxel are not known a priori. Multiple tissue components within a single voxel are potentially separable with magnetic resonance fingerprinting as a result of differences in signal evolutions of each component. The Bayesian framework for inverse problems provides a natural and flexible setting for solving this problem when the tissue composition per voxel is unknown. Assuming that only a few entries from the dictionary contribute to a mixed signal, sparsity-promoting priors can be placed upon the solution. An iterative algorithm is applied to compute the maximum a posteriori estimator of the posterior probability density to determine the magnetic resonance fingerprinting dictionary entries that contribute most significantly to mixed or pure voxels. Simulation results show that the algorithm is robust in finding the component tissues of mixed voxels. Preliminary in vivo data confirm this result, and show good agreement in voxels containing pure tissue. The Bayesian framework and algorithm shown provide accurate solutions for the partial-volume problem in magnetic resonance fingerprinting. The flexibility of the method will allow further study into different priors and hyperpriors that can be applied in the model. Magn Reson Med 80:159-170, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
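As a toy illustration of the sparsity-promoting MAP idea, under an assumed l1 (Laplace) prior the estimator reduces to an l1-penalized least-squares problem over nonnegative dictionary weights, solvable by projected iterative soft-thresholding. This is a generic sketch, not the authors' algorithm or dictionary:

```python
import numpy as np

def sparse_map_weights(D, s, lam=0.1, n_iter=500):
    """D: (T, K) dictionary of signal evolutions; s: (T,) mixed voxel signal.
    Returns sparse, nonnegative mixture weights via projected ISTA."""
    w = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1/L, L = Lipschitz constant
    for _ in range(n_iter):
        grad = D.T @ (D @ w - s)                    # gradient of 0.5*||Dw - s||^2
        w = np.maximum(w - step * (grad + lam), 0)  # soft-threshold + project to w >= 0
    return w

rng = np.random.default_rng(2)
D = rng.normal(size=(200, 50))
D /= np.linalg.norm(D, axis=0)                      # unit-norm dictionary columns
w_true = np.zeros(50); w_true[[3, 17]] = [0.7, 0.3] # a two-tissue mixed voxel
s = D @ w_true + 0.01 * rng.normal(size=200)
w = sparse_map_weights(D, s, lam=0.05)
print(np.nonzero(w > 0.05)[0])                      # should recover components 3 and 17
```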
Zero-field-cooled/field-cooled magnetization study of Dendrimer model
NASA Astrophysics Data System (ADS)
Arejdal, M.; Bahmad, L.; Benyoussef, A.
2017-01-01
Motivated by a dendrimer model with mixed spins σ=3 and S=7/2, we investigated a magnetic nanoparticle system in this study. We analyzed and discussed the ground-state phase diagrams and the stable phases. We then elaborated and explained the magnetic properties of the system using Monte Carlo simulations (MCS) in the framework of the Ising model. In this way, we determined the blocking temperature, deduced by studying the partial and total magnetization and susceptibility as functions of the temperature, and we established the effects of both the exchange coupling interaction and the crystal field on the hysteresis loop.
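A minimal Metropolis sketch for a mixed-spin (σ=3, S=7/2) Ising system with a crystal field, on an assumed one-dimensional ring of alternating sublattices; the actual dendrimer geometry and parameter values are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma_vals = np.arange(-3.0, 4.0)        # sigma = 3 states: -3, ..., +3
S_vals = np.arange(-3.5, 4.0)            # S = 7/2 states: -7/2, ..., +7/2
J, D, T = 1.0, 0.1, 2.0                  # exchange coupling, crystal field, temperature
N = 64                                   # ring of alternating sigma / S sites
spins = np.where(np.arange(N) % 2 == 0,
                 rng.choice(sigma_vals, N), rng.choice(S_vals, N))

def local_energy(i, s):
    nb = spins[(i - 1) % N] + spins[(i + 1) % N]
    return -J * s * nb - D * s * s       # nearest-neighbor coupling + crystal field

for _ in range(200_000):                 # Metropolis single-spin updates
    i = rng.integers(N)
    new = rng.choice(sigma_vals if i % 2 == 0 else S_vals)
    dE = local_energy(i, new) - local_energy(i, spins[i])
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i] = new

print("sublattice magnetizations:", spins[0::2].mean(), spins[1::2].mean())
```

Sweeping T and recording the partial magnetizations in this way is how quantities like the blocking temperature are read off in such simulations.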
General squark flavour mixing: constraints, phenomenology and benchmarks
De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...
2015-11-19
Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
Microscopic pressure-cooker model for studying molecules in confinement
NASA Astrophysics Data System (ADS)
Santamaria, Ruben; Adamowicz, Ludwik; Rosas-Acevedo, Hortensia
2015-04-01
A model for a system of a finite number of molecules in confinement is presented and expressions for determining the temperature, pressure, and volume of the system are derived. The present model is a generalisation of the Zwanzig-Langevin model because it includes pressure effects in the system. It also has general validity, preserves the ergodic hypothesis, and provides a formal framework for previous studies of hydrogen clusters in confinement. The application of the model is illustrated by an investigation of a set of prebiotic compounds exposed to varying pressure and temperature. The simulations performed within the model involve the use of a combination of molecular dynamics and density functional theory methods implemented on a computer system with a mixed CPU-GPU architecture.
A Framework for Distributed Mixed Language Scientific Applications
NASA Astrophysics Data System (ADS)
Quarrie, D. R.
The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Surdam, R.C.; MacGowan, D.B.
At temperatures less than 80°C, the diagenetic reactions producing carbonate cements in sandstones can be explained nicely by the model proposed in 1986 by C.D. Curtis and M.L. Coleman. Briefly, the distribution of early carbonate cements is controlled by dissolved sulfate concentration and is a function of the processes which affect sulfate concentration (i.e., depositional water composition, microbial sulfate reduction, and water mixing). In order to use this model in a predictive sense, a knowledge of the original depositional environment's hydrology and hydrochemistry is necessary. Predictive models for sandstone diagenesis in the 80° to 130°C thermal interval can be developed based on carboxylic acid/CO2 distributions and ratios. The model assumes that over this thermal interval the alkalinity in the reservoir facies is dominated by carboxylic acids and that a significant portion of the CO2 present is the product of decarboxylation of the acids (assuming there has been no significant mixing of water bodies). Furthermore, it is assumed that the stability of carbonates is a function of the carboxylic acid/CO2 ratio, and the stability of framework grains is a function of the distribution and concentration of carboxylic acids. At temperatures greater than 130°C, diagenetic reactions controlling the distribution of cements and the stability of framework grains in sandstones generally can be explained by thermocatalytic sulfate reduction. The determinative aspects of this process are the type of organics present in the system, the sulfate/organic ratio, and the presence or absence of iron. In addition to this information, if a time-temperature profile and kinetics for the redox reaction of interest are available, the process and resultant mineral reactions can be modeled.
NASA Astrophysics Data System (ADS)
Rohr, Tyler; Long, Matthew C.; Kavanaugh, Maria T.; Lindsay, Keith; Doney, Scott C.
2017-05-01
A coupled global numerical simulation (conducted with the Community Earth System Model) is used in conjunction with satellite remote sensing observations to examine the role of top-down (grazing pressure) and bottom-up (light, nutrients) controls on marine phytoplankton bloom dynamics in the Southern Ocean. Phytoplankton seasonal phenology is evaluated in the context of the recently proposed "disturbance-recovery" hypothesis relative to more traditional, exclusively "bottom-up" frameworks. All blooms occur when phytoplankton division rates exceed loss rates to permit sustained net population growth; however, the nature of this decoupling period varies regionally in the Community Earth System Model. Regional case studies illustrate how unique pathways allow blooms to emerge despite very poor division rates or very strong grazing rates. In the Subantarctic southeast Pacific, small spring blooms initiate early, co-occurring with deep mixing and low division rates, consistent with the disturbance-recovery hypothesis. Similar systematics are present in the Subantarctic southwest Atlantic during the spring but are eclipsed by a subsequent, larger summer bloom that is coincident with shallow mixing and the annual maximum in division rates, consistent with a bottom-up, light-limited framework. In the model simulation, increased iron stress prevents a similar summer bloom in the southeast Pacific. In the simulated Antarctic zone (70°S-65°S), seasonal sea ice acts as a dominant phytoplankton-zooplankton decoupling agent, triggering a delayed but substantial bloom as ice recedes. Satellite ocean color remote sensing and ocean physical reanalysis products do not precisely match model-predicted phenology, but observed patterns do indicate regional variability in mechanism across the Atlantic and Pacific.
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
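For reference, the effective-sample-size diagnostic that the paper weighs against running time reduces, for importance weights w_i, to (sum of w)^2 / (sum of w^2); a minimal sketch:

```python
import numpy as np

def effective_sample_size(log_w):
    """ESS of an importance sampling scheme from raw log-weights."""
    w = np.exp(log_w - log_w.max())     # stabilize before normalizing
    return w.sum() ** 2 / (w ** 2).sum()

# Equal weights give the maximal ESS (here 3.0); skewed weights give less.
print(effective_sample_size(np.log(np.array([0.1, 0.1, 0.1]))))
print(effective_sample_size(np.log(np.array([0.9, 0.05, 0.05]))))
```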
Transient Volcano Deformation Event Detection over Variable Spatial Scales in Alaska
NASA Astrophysics Data System (ADS)
Li, J. D.; Rude, C. M.; Gowanlock, M.; Herring, T.; Pankratius, V.
2016-12-01
Transient deformation events driven by volcanic activity can be monitored using increasingly dense networks of continuous Global Positioning System (GPS) ground stations. The wide spatial extent of GPS networks, the large number of GPS stations, and the spatially and temporally varying scale of deformation events result in the mixing of signals from multiple sources. Typical analysis then necessitates manual identification of times and regions of volcanic activity for further study and the careful tuning of algorithmic parameters to extract possible transient events. Here we present a computer-aided discovery system that facilitates the discovery of potential transient deformation events at volcanoes by providing a framework for selecting varying spatial regions of interest and for tuning the analysis parameters. This site specification step in the framework reduces the spatial mixing of signals from different volcanic sources before applying filters to remove interfering signals originating from other geophysical processes. We analyze GPS data recorded by the Plate Boundary Observatory network and volcanic activity logs from the Alaska Volcano Observatory to search for and characterize transient inflation events in Alaska. We find 3 transient inflation events between 2008 and 2015 at the Akutan, Westdahl, and Shishaldin volcanoes in the Aleutian Islands. The inflation event detected in the first half of 2008 at Akutan is validated other studies, while the inflation events observed in early 2011 at Westdahl and in early 2013 at Shishaldin are previously unreported. Our analysis framework also incorporates modelling of the transient inflation events and enables a comparison of different magma chamber inversion models. Here, we also estimate the magma sources that best describe the deformation observed by the GPS stations at Akutan, Westdahl, and Shishaldin. We acknowledge support from NASA AIST-NNX15AG84G (PI: V. Pankratius).
Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L
2015-02-01
Uncertainties exist in water resources systems, while traditional two-stage stochastic programming is risk-neutral and compares the random variables (e.g., total benefit) to identify the best decisions. To deal with risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid methodology of interval-parameter programming, a conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans, and analyze the trade-offs between system stability and economy.
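For background, the conditional value-at-risk measure used in such models is conventionally defined through the Rockafellar-Uryasev minimization form (standard notation, not the paper's):

```latex
\mathrm{CVaR}_{\alpha}(L) \;=\; \min_{\eta \in \mathbb{R}} \;
  \eta + \frac{1}{1-\alpha}\,\mathbb{E}\big[(L - \eta)_{+}\big]
```

In a scenario-based two-stage program the expectation becomes a probability-weighted sum of max(L_s - eta, 0) terms over scenarios s, which linearizes with one nonnegative auxiliary variable per scenario; this is what lets the CVaR term sit inside a linear optimization framework.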
Mixed-Income Schools and Housing: Advancing the Neoliberal Urban Agenda
ERIC Educational Resources Information Center
Lipman, Pauline
2008-01-01
This article uses a social justice framework to problematize national and local policies in housing and education which propose to reduce poverty and improve educational performance of low-income students through mixed-income strategies. Drawing on research on Chicago, the article argues mixed-income strategies are part of the neoliberal…
Estimating the Diets of Animals Using Stable Isotopes and a Comprehensive Bayesian Mixing Model
Hopkins, John B.; Ferguson, Jake M.
2012-01-01
Using stable isotope mixing models (SIMMs) as a tool to investigate the foraging ecology of animals is gaining popularity among researchers. As a result, statistical methods are rapidly evolving and numerous models have been produced to estimate the diets of animals—each with their benefits and their limitations. Deciding which SIMM to use is contingent on factors such as the consumer of interest, its food sources, sample size, the familiarity a user has with a particular framework for statistical analysis, or the level of inference the researcher desires to make (e.g., population- or individual-level). In this paper, we provide a review of commonly used SIMM models and describe a comprehensive SIMM that includes all features commonly used in SIMM analysis and two new features. We used data collected in Yosemite National Park to demonstrate IsotopeR's ability to estimate dietary parameters. We then examined the importance of each feature in the model and compared our results to inferences from commonly used SIMMs. IsotopeR's user interface (in R) will provide researchers a user-friendly tool for SIMM analysis. The model is also applicable for use in paleontology, archaeology, and forensic studies as well as estimating pollution inputs. PMID:22235246
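For orientation, the simplest special case that SIMMs such as IsotopeR generalize (with uncertainty, discrimination factors, and hierarchical structure) is the two-source, single-isotope linear mixing model; a minimal sketch with illustrative delta values:

```python
def two_source_fraction(d_mix, d_a, d_b):
    """Fraction of source A in the diet, from one isotope's delta values:
    d_mix = f * d_a + (1 - f) * d_b  =>  f = (d_mix - d_b) / (d_a - d_b)."""
    return (d_mix - d_b) / (d_a - d_b)

# Illustrative carbon delta values (permil); prints ~0.46 from source A.
print(two_source_fraction(d_mix=-18.0, d_a=-25.0, d_b=-12.0))
```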
Cook, James P; Mahajan, Anubha; Morris, Andrew P
2017-02-01
Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
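The two recommended schemes are straightforward to state in code; a minimal sketch using the standard effective-sample-size and inverse-variance formulas (inputs are illustrative, and conversion of effects onto the log-odds scale is assumed to have happened upstream):

```python
import numpy as np

def n_eff(n_cases, n_controls):
    """Effective sample size of a case-control study."""
    return 4.0 / (1.0 / np.asarray(n_cases) + 1.0 / np.asarray(n_controls))

def meta_z(z, n_cases, n_controls):
    """Scheme (i): effective-sample-size weighting of Z-scores."""
    w = np.sqrt(n_eff(n_cases, n_controls))
    return (w * np.asarray(z)).sum() / np.sqrt((w ** 2).sum())

def meta_ivw(beta, se):
    """Scheme (ii): inverse-variance weighting of allelic effect sizes,
    with betas already converted onto the log-odds scale."""
    w = 1.0 / np.asarray(se) ** 2
    return (w * np.asarray(beta)).sum() / w.sum(), np.sqrt(1.0 / w.sum())

# Two studies with strong case-control imbalance in the second:
print(meta_z([2.1, 1.4], n_cases=[900, 250], n_controls=[1100, 4000]))
```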
Gas adsorption and gas mixture separations using mixed-ligand MOF material
Hupp, Joseph T [Northfield, IL; Mulfort, Karen L [Chicago, IL; Snurr, Randall Q [Evanston, IL; Bae, Youn-Sang [Evanston, IL
2011-01-04
A method of separating a mixture of carbon dioxide and hydrocarbon gas using a mixed-ligand, metal-organic framework (MOF) material having metal ions coordinated to carboxylate ligands and pyridyl ligands.
NASA Technical Reports Server (NTRS)
Deutsch, A.; Buhl, D.; Brockmeyer, P.; Lakomy, R.; Flucks, M.
1992-01-01
Within the framework of the Sudbury project a considerable number of Sr-Nd isotope analyses were carried out on petrographically well-defined samples of different breccia units. Together with isotope data from the literature these data are reviewed under the aspect of a self-consistent impact model. The crucial point of this model is that the Sudbury Igneous Complex (SIC) is interpreted as a differentiated impact melt sheet without any need for an endogenic 'magmatic' component such as 'impact-triggered' magmatism or 'partial' impact melting of the crust and mixing with a mantle-derived magma.
2011-08-01
Each student completed an SCCM standardized and validated pretest and posttest, a survey of 10 five-point Likert scale questions on managing... knowledge improved from a pretest score of 60% to a posttest score of 80%. Pediatric residents reported feelings of preparation increased by an... This research uses a mixed-methods framework (qualitative and quantitative) to demonstrate the importance of exploring alternative training models
NASA Astrophysics Data System (ADS)
Kuga, Kazuki; Tanimoto, Jun
2018-02-01
We consider two imperfect ways to protect against an infectious disease such as influenza, namely vaccination giving only partial immunity and a defense against contagion such as wearing a mask. We build up a new analytic framework considering those two cases instead of perfect vaccination, conventionally assumed as a premise, with the assumption of an infinite and well-mixed population. Our framework also considers three different strategy-updating rules based on evolutionary game theory: conventional pairwise comparison with one randomly selected agent, another concept of pairwise comparison referring to a social average, and direct alternative selection not depending on the usual copying concept. We successfully obtain a phase diagram in which vaccination coverage at equilibrium can be compared when assuming the model of either imperfect vaccination or a defense against contagion. The obtained phase diagram reveals that a defense against contagion is marginally inferior to an imperfect vaccination as long as the same coefficient value is used. Highlights - We build a new analytical framework for a vaccination game combined with the susceptible-infected-recovered (SIR) model. - Our model can evaluate imperfect provisions such as vaccination giving only partial immunity and a defense against contagion. - We obtain a phase diagram with which to compare the quantitative effects of partial vaccination and a defense against contagion.
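A minimal sketch of the epidemiological core only (not the evolutionary-game updating rules): SIR dynamics in which a fraction x of the population holds an imperfect provision of efficacy e, so protected susceptibles are infected at a rate reduced by the factor 1 - e. All values below are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.8333, 1.0 / 3.0   # transmission and recovery rates (R0 = 2.5)
e, x = 0.6, 0.5                   # provision efficacy, fraction protected

def sir(t, y):
    s_u, s_v, i, r = y            # unprotected/protected susceptibles, infected, recovered
    new_u = beta * s_u * i
    new_v = (1.0 - e) * beta * s_v * i   # imperfect protection scales the force of infection
    return [-new_u, -new_v, new_u + new_v - gamma * i, gamma * i]

sol = solve_ivp(sir, [0.0, 200.0], [1.0 - x - 1e-4, x, 1e-4, 0.0], rtol=1e-8)
print("final epidemic size:", sol.y[3, -1])
```

Embedding runs like this inside a strategy-updating loop (or, as in the paper, solving the analogous equations analytically for an infinite well-mixed population) yields the equilibrium coverage levels that the phase diagram compares.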
Leontidis, Georgios
2017-11-01
The human retina is a diverse and important tissue, vastly studied for various retinal and other diseases. Diabetic retinopathy (DR), a leading cause of blindness, is one of them. This work proposes a novel and complete framework for the accurate and robust extraction and analysis of a series of retinal vascular geometric features. It focuses on studying the registered bifurcations in successive years of progression from diabetes (no DR) to DR, in order to identify the vascular alterations. Retinal fundus images are utilised, and multiple experimental designs are employed. The framework includes various steps, such as image registration and segmentation, extraction of features, statistical analysis and classification models. Linear mixed models are utilised for making the statistical inferences, alongside elastic-net logistic regression, the Boruta algorithm, and regularised random forests for the feature selection and classification phases, in order to evaluate the discriminative potential of the investigated features and also build classification models. A number of geometric features, such as the central retinal artery and vein equivalents, are found to differ significantly across the experiments and also have good discriminative potential. The classification systems yield promising results with area under the curve values ranging from 0.821 to 0.968, across the four different investigated combinations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Aerosol-cloud interactions in Arctic mixed-phase stratocumulus
NASA Astrophysics Data System (ADS)
Solomon, A.
2017-12-01
Reliable climate projections require realistic simulations of Arctic cloud feedbacks. Of particular importance is accurately simulating Arctic mixed-phase stratocumuli (AMPS), which are ubiquitous and play an important role in regional climate due to their impact on the surface energy budget and atmospheric boundary layer structure through cloud-driven turbulence, radiative forcing, and precipitation. AMPS are challenging to model due to uncertainties in ice microphysical processes that determine phase partitioning between ice and radiatively important cloud liquid water. Since temperatures in AMPS are too warm for homogeneous ice nucleation, ice must form through heterogeneous nucleation. In this presentation we discuss a relatively unexplored source of ice production: the recycling of ice nuclei (IN) in regions of ice subsaturation. AMPS frequently have ice-subsaturated air near the cloud-driven mixed-layer base, where falling ice crystals can sublimate, leaving behind IN. This study provides an idealized framework to understand feedbacks between dynamics and microphysics that maintain phase partitioning in AMPS. In addition, the results of this study provide insight into the mechanisms and feedbacks that may maintain cloud ice in AMPS even when entrainment of IN at the mixed-layer boundaries is weak.
A Variable Turbulent Schmidt Number Formulation for Scramjet Application
NASA Technical Reports Server (NTRS)
Xiao, X.; Edwards, J. R.; Hassan, H. A.; Cutler, A. D.
2004-01-01
In high speed engines, thorough turbulent mixing of fuel and air is required to obtain high performance and high efficiency. Thus, the ability to predict turbulent mixing is crucial in obtaining accurate numerical simulation of an engine and its performance. The current state of the art in CFD simulation is to assume both the turbulent Prandtl and Schmidt numbers to be constant. However, since the mixing of fuel and air is inversely proportional to the Schmidt number, a value of 0.45 for the Schmidt number will produce twice as much diffusion as a value of 0.9. Because of this, current CFD tools and models have not been able to provide the guidance required for the efficient design of a scramjet engine. The goal of this investigation is to develop the framework needed to calculate the turbulent Prandtl and Schmidt numbers as part of the solution. This requires four additional equations: two for the temperature variance and its dissipation rate and two for the concentration variance and its dissipation rate. In the current investigation, emphasis is placed on studying mixing without reactions. For such flows, a variable Prandtl number does not play a major role in determining the flow; this, however, will have to be addressed when combustion is present. The approach used is similar to that used to develop the k-zeta model: relevant equations are derived from the exact Navier-Stokes equations and each individual correlation is modeled. This ensures that relevant physics is incorporated into the model equations. This task has been accomplished. The final set of equations has no wall or damping functions. Moreover, the equations are tensorially consistent and Galilean invariant. The derivation of the model equations is rather lengthy and thus will not be incorporated into this abstract, but will be included in the final paper. As a preliminary to formulating the proposed model, the original k-zeta model with constant turbulent Prandtl and Schmidt numbers is used to model the supersonic coaxial jet mixing experiments involving He, O2, and air.
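The factor-of-two statement follows directly from the standard gradient-diffusion closure, in which the turbulent mass diffusivity is inversely proportional to the turbulent Schmidt number (generic notation, not the paper's):

```latex
D_t \;=\; \frac{\mu_t}{\rho\,\mathrm{Sc}_t}
\qquad\Longrightarrow\qquad
\frac{D_t\big|_{\mathrm{Sc}_t = 0.45}}{D_t\big|_{\mathrm{Sc}_t = 0.9}} \;=\; 2 .
```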
Linear mixed-effects modeling approach to FMRI group analysis
Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.
2013-01-01
Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. PMID:23376789
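As a small illustration of the variance-component route to ICC mentioned above, here is a hedged sketch using a random-intercept model in Python's statsmodels (the crossed-random-effects case described in the abstract is more general than this simple nested example):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_subj, n_rep = 40, 4
subj = np.repeat(np.arange(n_subj), n_rep)
effect = rng.normal(0.0, 1.0, n_subj)            # between-subject SD = 1.0
y = 0.3 + effect[subj] + rng.normal(0.0, 0.7, subj.size)  # within-subject SD = 0.7
df = pd.DataFrame({"y": y, "subj": subj})

fit = smf.mixedlm("y ~ 1", data=df, groups="subj").fit()
var_between = float(fit.cov_re.iloc[0, 0])       # random-intercept variance
var_within = fit.scale                           # residual variance
print("ICC =", var_between / (var_between + var_within))  # expect ~1/(1+0.49) ~ 0.67
```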
Development of WRF-CO2 4DVAR Data Assimilation System
NASA Astrophysics Data System (ADS)
Zheng, T.; French, N. H. F.
2016-12-01
Four dimensional variational (4DVar) assimilation systems have been widely used for CO2 inverse modeling at the global scale. At the regional scale, however, 4DVar assimilation systems have been lacking. At present, most regional CO2 inverse models use Lagrangian particle backward trajectory tools to compute the influence function in an analytical/synthesis framework. To provide a 4DVar-based alternative, we developed WRF-CO2 4DVAR based on Weather Research and Forecasting (WRF), its chemistry extension (WRF-Chem), and its data assimilation system (WRFDA/WRFPLUS). Different from WRFDA, WRF-CO2 4DVAR does not optimize the meteorology initial condition; instead, it solves for optimized CO2 surface fluxes (sources/sinks) constrained by atmospheric CO2 observations. Based on WRFPLUS, we developed tangent linear and adjoint code for CO2 emission, advection, vertical mixing in the boundary layer, and convective transport. Furthermore, we implemented an incremental algorithm to solve for optimized CO2 emission scaling factors by iteratively minimizing the cost function in a Bayesian framework. The model sensitivity (of atmospheric CO2 with respect to emission scaling factor) calculated by the tangent linear and adjoint models agrees well with that calculated by finite difference, indicating the validity of the newly developed code. The effectiveness of WRF-CO2 4DVar for inverse modeling is tested using forward-model generated pseudo-observation data in two experiments: first-guess CO2 fluxes have a 50% overestimation in the first case and a 50% underestimation in the second. In both cases, WRF-CO2 4DVar reduces the cost function to less than 10^-4 of its initial value in fewer than 20 iterations and successfully recovers the true values of the emission scaling factors. We expect future applications of WRF-CO2 4DVar with satellite observations will provide insights for CO2 regional inverse modeling, including the impacts of model transport error in vertical mixing.
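For orientation, the cost function such an incremental 4DVar system minimizes has the standard Bayesian form (generic notation, with c the vector of emission scaling factors, c_b its prior, B and R_k the prior and observation error covariances, and H_k the observation operator, here the transport model acting on the fluxes; this is textbook notation, not copied from the paper):

```latex
J(\mathbf{c}) \;=\; \tfrac{1}{2}\,(\mathbf{c}-\mathbf{c}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{c}-\mathbf{c}_b)
\;+\; \tfrac{1}{2}\sum_{k}\bigl(H_k(\mathbf{c})-\mathbf{y}_k\bigr)^{\mathsf{T}}\mathbf{R}_k^{-1}\bigl(H_k(\mathbf{c})-\mathbf{y}_k\bigr)
```

The tangent linear model propagates perturbations forward to evaluate H_k, while the adjoint propagates observation-space mismatches backward to supply the gradient of J used by the iterative minimizer.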
Linear mixed-effects modeling approach to FMRI group analysis.
Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W
2013-06-01
Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or unfeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. Published by Elsevier Inc.
Contemplating case mix: A primer on case mix classification and management.
Costa, Andrew P; Poss, Jeffery W; McKillop, Ian
2015-01-01
Case mix classifications are the frameworks that underlie many healthcare funding schemes, including the so-called activity-based funding. Now more than ever, Canadian healthcare administrators are evaluating case mix-based funding and deciphering how they will influence their organization. Case mix is a topic fraught with technical jargon and largely relegated to government agencies or private industries. This article provides an abridged review of case mix classification as well as its implications for management in healthcare. © 2015 The Canadian College of Health Leaders.
Models to understand the population-level impact of mixed strain M. tuberculosis infections.
Sergeev, Rinat; Colijn, Caroline; Cohen, Ted
2011-07-07
Over the past decade, numerous studies have identified tuberculosis patients in whom more than one distinct strain of Mycobacterium tuberculosis is present. While it has been shown that these mixed strain infections can reduce the probability of treatment success for individuals simultaneously harboring both drug-sensitive and drug-resistant strains, it is not yet known if and how this phenomenon impacts the long-term dynamics of tuberculosis within communities. Strain-specific differences in immunogenicity and associations with drug resistance suggest that a better understanding of how strains compete within hosts will be necessary to project the effects of mixed strain infections on the future burden of drug-sensitive and drug-resistant tuberculosis. In this paper, we develop a modeling framework that allows us to investigate mechanisms of strain competition within hosts and to assess the long-term effects of such competition on the ecology of strains in a population. These models permit us to systematically evaluate the importance of unknown parameters and to suggest priority areas for future experimental research. Despite the current scarcity of data to inform the values of several model parameters, we are able to draw important qualitative conclusions from this work. We find that mixed strain infections may promote the coexistence of drug-sensitive and drug-resistant strains in two ways. First, mixed strain infections allow a strain with a lower basic reproductive number to persist in a population where it would otherwise be outcompeted, provided it has competitive advantages within a co-infected host. Second, some individuals progressing to phenotypically drug-sensitive tuberculosis from a state of mixed drug-sensitive and drug-resistant infection may retain small subpopulations of drug-resistant bacteria that can flourish once the host is treated with antibiotics. We propose that these types of mixed infections, by increasing the ability of low-fitness drug-resistant strains to persist, may provide opportunities for compensatory mutations to accumulate and for relatively fit, highly drug-resistant strains of M. tuberculosis to emerge. Published by Elsevier Ltd.
Use of the Transformative Framework in Mixed Methods Studies
ERIC Educational Resources Information Center
Sweetman, David; Badiee, Manijeh; Creswell, John W.
2010-01-01
A concern exists that mixed methods studies do not contain advocacy stances. Preliminary evidence suggests that this is not the case, but to address this issue in more depth the authors examined 13 mixed methods studies that contained an advocacy, transformative lens. Such a lens consisted of incorporating intent to advocate for an improvement in…
ERIC Educational Resources Information Center
O'Halloran, Kay L.; Tan, Sabine; Pham, Duc-Son; Bateman, John; Vande Moere, Andrew
2018-01-01
This article demonstrates how a digital environment offers new opportunities for transforming qualitative data into quantitative data in order to use data mining and information visualization for mixed methods research. The digital approach to mixed methods research is illustrated by a framework which combines qualitative methods of multimodal…
Parameterizing correlations between hydrometeor species in mixed-phase Arctic clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, Vincent E.; Nielsen, Brandon J.; Fan, Jiwen
2011-08-16
Mixed-phase Arctic clouds, like other clouds, contain small-scale variability in hydrometeor fields, such as cloud water or snow mixing ratio. This variability may be worth parameterizing in coarse-resolution numerical models. In particular, for modeling processes such as accretion and aggregation, it would be useful to parameterize subgrid correlations among hydrometeor species. However, one difficulty is that there exist many hydrometeor species and many microphysical processes, leading to complexity and computational expense. Existing lower and upper bounds (inequalities) on linear correlation coefficients provide useful guidance, but these bounds are too loose to serve directly as a method to predict subgrid correlations. Therefore, this paper proposes an alternative method that is based on a blend of theory and empiricism. The method begins with the spherical parameterization framework of Pinheiro and Bates (1996), which expresses the correlation matrix in terms of its Cholesky factorization. The values of the elements of the Cholesky matrix are parameterized here using a cosine row-wise formula that is inspired by the aforementioned bounds on correlations. The method has three advantages: 1) the computational expense is tolerable; 2) the correlations are, by construction, guaranteed to be consistent with each other; and 3) the methodology is fairly general and hence may be applicable to other problems. The method is tested non-interactively using simulations of three Arctic mixed-phase cloud cases from two different field experiments: the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE). Benchmark simulations are performed using a large-eddy simulation (LES) model that includes a bin microphysical scheme. The correlations estimated by the new method satisfactorily approximate the correlations produced by the LES.
Testing cloud microphysics parameterizations in NCAR CAM5 with ISDAC and M-PACE observations
NASA Astrophysics Data System (ADS)
Liu, Xiaohong; Xie, Shaocheng; Boyle, James; Klein, Stephen A.; Shi, Xiangjun; Wang, Zhien; Lin, Wuyin; Ghan, Steven J.; Earle, Michael; Liu, Peter S. K.; Zelenyuk, Alla
2011-01-01
Arctic clouds simulated by the National Center for Atmospheric Research (NCAR) Community Atmospheric Model version 5 (CAM5) are evaluated with observations from the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Indirect and Semi-Direct Aerosol Campaign (ISDAC) and Mixed-Phase Arctic Cloud Experiment (M-PACE), which were conducted at its North Slope of Alaska site in April 2008 and October 2004, respectively. Model forecasts for the Arctic spring and fall seasons performed under the Cloud-Associated Parameterizations Testbed framework generally reproduce the spatial distributions of cloud fraction for single-layer boundary-layer mixed-phase stratocumulus and multilayer or deep frontal clouds. However, for low-level stratocumulus, the model significantly underestimates the observed cloud liquid water content in both seasons. As a result, CAM5 significantly underestimates the surface downward longwave radiative fluxes by 20-40 W m^-2. Introducing a new ice nucleation parameterization slightly improves the model performance for low-level mixed-phase clouds by increasing cloud liquid water content through the reduction of the conversion rate from cloud liquid to ice by the Wegener-Bergeron-Findeisen process. The CAM5 single-column model testing shows that changing the instantaneous freezing temperature of rain to form snow from -5°C to -40°C causes a large increase in modeled cloud liquid water content through the slowing down of cloud liquid and rain-related processes (e.g., autoconversion of cloud liquid to rain). The underestimation of aerosol concentrations in CAM5 in the Arctic also plays an important role in the low bias of cloud liquid water in the single-layer mixed-phase clouds. In addition, numerical issues related to the coupling of model physics and time stepping in CAM5 are responsible for the model biases and will be explored in future studies.
NASA Astrophysics Data System (ADS)
Rice, J.; Halter, T.; Hejazi, M. I.; Jensen, E.; Liu, L.; Olson, J.; Patel, P.; Vernon, C. R.; Voisin, N.; Zuljevic, N.
2014-12-01
Integrated assessment models project the future electricity generation mix under different policy, technology, and socioeconomic scenarios, but they do not directly address site-specific factors such as interconnection costs, population density, land use restrictions, air quality, NIMBY concerns, or water availability that might affect the feasibility of achieving the technology mix. Moreover, since these factors can change over time due to climate, policy, socioeconomics, and so on, it is important to examine the dynamic feasibility of integrated assessment scenarios "on the ground." This paper explores insights from coupling an integrated assessment model (GCAM-USA) with a geospatial power plant siting model (the Capacity Expansion Regional Feasibility model, CERF) within a larger multi-model framework that includes regional climate, hydrologic, and water management modeling. GCAM-USA is a dynamic-recursive market equilibrium model simulating the impact of carbon policies on global and national markets for energy commodities and other goods; one of its outputs is the electricity generation mix and expansion at the state-level. It also simulates water demands from all sectors that are downscaled as input to the water management modeling. CERF simulates siting decisions by dynamically representing suitable areas for different generation technologies with geospatial analyses (informed by technology-specific siting criteria, such as required mean streamflow per the Clean Water Act), and then choosing siting locations to minimize interconnection costs (to electric transmission and gas pipelines). CERF results are compared across three scenarios simulated by GCAM-USA: 1) a non-mitigation scenario (RCP8.5) in which conventional fossil-fueled technologies prevail, 2) a mitigation scenario (RCP4.5) in which the carbon price causes a shift toward nuclear, carbon capture and sequestration (CCS), and renewables, and 3) a repeat of scenario (2) in which CCS technologies are made unavailable—resulting in a large increase in the nuclear fraction of the mix.
A Monte-Carlo Analysis of Organic Aerosol Volatility with Aerosol Microphysics
NASA Astrophysics Data System (ADS)
Gao, C. Y.; Tsigaridis, K.; Bauer, S. E.
2016-12-01
A newly developed box model scheme, MATRIX-VBS, includes the volatility-basis set (VBS) framework in the aerosol microphysical scheme MATRIX (Multiconfiguration Aerosol TRacker of mIXing state), which resolves aerosol mass and number concentrations and aerosol mixing state. The new scheme advances the representation of organic aerosols in Earth system models by improving on the traditional and simplistic treatment of organic aerosols as non-volatile and with a fixed size distribution. Further development includes adding the condensation of organics on coarse-mode aerosols (dust and sea salt), thus making all organics in the system semi-volatile. To test and simplify the model, a Monte-Carlo analysis is performed to pinpoint which processes affect organics the most, and under which chemical and meteorological conditions. Since the model's parameterizations can capture a very wide range of conditions, from very clean to very polluted, all possible scenarios on Earth across the whole parameter space, including temperature, location, emissions and oxidant levels, are examined. The Monte-Carlo simulations provide quantitative information on the sensitivity of the newly developed model and help us understand how organics affect the size distribution, mixing state and volatility distribution across varying meteorological conditions and pollution levels. In addition, these simulations show which parameters play a critical role in the aerosol distribution and evolution in the atmosphere and which do not, which will facilitate the simplification of the box model, an important step towards its implementation in the global model.
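The Monte-Carlo sensitivity idea can be sketched in a few lines: sample the input space (temperature, emissions, oxidants, ...) and rank inputs by how strongly they correlate with an output of interest. The response function below is an invented stand-in; in the study it would be a MATRIX-VBS box-model run.

```python
# Toy global sensitivity screen via random sampling and rank correlation.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
T = rng.uniform(230.0, 310.0, N)      # temperature (K)
emis = 10 ** rng.uniform(-2, 2, N)    # organic emissions (arbitrary units)
oxid = 10 ** rng.uniform(-2, 2, N)    # oxidant level (arbitrary units)

# Hypothetical output: particle-phase organic fraction (stand-in response).
out = emis * oxid / (emis * oxid + np.exp((T - 270.0) / 15.0))

for name, x in [("T", T), ("emissions", emis), ("oxidants", oxid)]:
    # Spearman-style rank correlation as a cheap global sensitivity measure
    rx = np.argsort(np.argsort(x))
    ro = np.argsort(np.argsort(out))
    r = np.corrcoef(rx, ro)[0, 1]
    print(f"{name:10s} rank correlation with output: {r:+.2f}")
```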
ReEDS-Mexico: A Capacity Expansion Model of the Mexican Power System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Jonathan L; Cole, Wesley J; Spyrou, Evangelia
This report documents the ReEDS-Mexico capacity expansion model, which is an extension of the ReEDS model to the Mexican power system. In recent years Mexico's power sector has undergone considerable reform that has significant potential to impact the future electricity mix (Alpizar-Castro and Rodríguez-Monroy 2016). Day-ahead and real-time trading in Mexico's power markets opened in early 2016. In addition to this reform, Mexico is striving to ensure that 35% of its electricity is generated from clean energy sources by 2024, 40% by 2035, and 50% by 2050 (Presidencia de la República 2016). These rapid changes in both the market and the generation mix create a need for robust tools that can help electricity sector stakeholders make informed decisions. The purpose of this report is to document the extension of the National Renewable Energy Laboratory's (NREL's) Regional Energy Deployment System (ReEDS) model (Eurek et al. 2016) to cover the Mexican power system. This extension, which we refer to throughout this report as ReEDS-Mexico, provides a model of the Mexican power sector using a system-wide, least-cost optimization framework.
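To make the "least-cost optimization framework" concrete, here is a deliberately tiny capacity-expansion linear program in the spirit of ReEDS-type models. The technologies, costs, capacity credits, and the clean-share constraint are all invented and far simpler than the actual model.

```python
# Toy least-cost capacity expansion: meet peak demand and a clean-energy
# share at minimum annualized cost. All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

techs = ["gas", "wind", "solar", "nuclear"]
cost = np.array([60.0, 80.0, 70.0, 110.0])   # $/kW-yr, hypothetical
cf = np.array([0.90, 0.35, 0.25, 0.90])      # capacity credit toward peak
clean = np.array([0.0, 1.0, 1.0, 1.0])       # counts toward clean target?

peak_demand = 50.0    # GW of firm capacity required
clean_share = 0.5     # fraction of firm capacity that must be clean

# minimize cost @ x  s.t.  cf @ x >= peak  and  (clean*cf) @ x >= share*(cf @ x)
A_ub = np.array([
    -cf,                           # firm capacity meets peak demand
    -(clean - clean_share) * cf,   # clean share of firm capacity
])
b_ub = np.array([-peak_demand, 0.0])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
for t, x in zip(techs, res.x):
    print(f"{t:8s} {x:6.1f} GW")
```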
Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios
2018-05-02
Currently, the design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with the use of empirical, unstructured growth kinetics models. Although elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network-growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional kinetics expression patterns of the promoters. The GRN model informed the formulation of the growth kinetics model, replacing the empirical and unstructured Monod kinetics. The GRN-GK framework's predictive capability, and its potential as a systematic optimal bioprocess design tool, were demonstrated by effectively predicting bioprocess performance in agreement with experimental values, whereas four commonly used models deviated significantly from the experimental values. Significantly, a fed-batch biodegradation process was designed and optimised through model-based control of TOL Pr promoter expression, resulting in 61% and 60% enhancements in pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach for model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.
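A minimal sketch of the hybrid GRN-GK idea under stated assumptions: a promoter-activity state driven by a Hill-type response to the substrate replaces the Monod term in otherwise standard growth-kinetics ODEs. The equations and every parameter below are illustrative inventions, not the paper's model of the TOL pathway.

```python
# Toy GRN-informed growth kinetics: expression E of a Pr-like promoter sets
# the growth rate instead of a Monod term. All values are hypothetical.
import numpy as np
from scipy.integrate import solve_ivp

def grn_gk(t, y, k_on=2.0, k_off=0.5, K=0.3, n=2.0, mu_max=0.6, Y=0.5):
    X, S, E = y                        # biomass, substrate, promoter activity
    S_pos = max(S, 0.0)
    hill = S_pos**n / (K**n + S_pos**n)  # promoter induction by substrate
    dE = k_on * hill - k_off * E         # transcriptional kinetics
    mu = mu_max * E                      # growth rate set by expression
    return [mu * X, -mu * X / Y, dE]

sol = solve_ivp(grn_gk, (0.0, 24.0), [0.05, 5.0, 0.0])
print(f"final biomass: {sol.y[0, -1]:.2f} g/L, "
      f"residual substrate: {sol.y[1, -1]:.3f} g/L")
```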
Modeling sustainability in renewable energy supply chain systems
NASA Astrophysics Data System (ADS)
Xie, Fei
This dissertation aims at modeling the sustainability of renewable fuel supply chain systems against emerging challenges. In particular, it focuses on biofuel supply chain system design and develops advanced modeling frameworks, with corresponding solution methods, to tackle challenges in sustaining biofuel supply chain systems. These challenges include: (1) integrating "environmental thinking" into long-term biofuel supply chain planning; (2) adopting multimodal transportation to mitigate seasonality in biofuel supply chain operations; (3) providing strategies for hedging against uncertainty in conversion technology; and (4) developing methodologies for long-term sequential planning of the biofuel supply chain under uncertainties. All models are mixed integer programs, which also involve multi-objective programming and two-stage/multistage stochastic programming methods. In particular, for long-term sequential planning under uncertainties, to reduce the computational challenges due to the exponential expansion of the scenario tree, I developed an efficient ND-Max method that outperforms CPLEX and the Nested Decomposition method. Through result analysis of four independent studies, it is found that the proposed modeling frameworks can effectively improve economic performance, enhance environmental benefits and reduce risks due to system uncertainties for biofuel supply chain systems.
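As an illustration of the two-stage stochastic mixed-integer structure mentioned above, here is a toy deterministic-equivalent model: stage one decides whether and how large to build a refinery; stage two covers unmet demand at a penalty under three scenarios. The numbers are invented and the model is vastly smaller than the dissertation's.

```python
# Toy two-stage stochastic MIP via scipy.optimize.milp (SciPy >= 1.9).
# variables: [z, x, u1, u2, u3] = build flag, capacity, unmet demand/scenario
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

p = np.array([0.3, 0.4, 0.3])        # scenario probabilities
d = np.array([80.0, 100.0, 130.0])   # scenario demands (kt/yr)
fixed, var_cost, penalty, M = 500.0, 5.0, 30.0, 200.0

c = np.concatenate([[fixed, var_cost], penalty * p])  # expected total cost

A = np.zeros((4, 5))
A[0, :2] = [-M, 1.0]                 # x <= M z (no capacity unless built)
for s in range(3):                   # x + u_s >= d_s (recourse covers demand)
    A[1 + s, 1] = 1.0
    A[1 + s, 2 + s] = 1.0
lc = LinearConstraint(A,
                      lb=np.r_[-np.inf, d],
                      ub=np.r_[0.0, np.inf, np.inf, np.inf])

res = milp(c, constraints=lc,
           integrality=np.array([1, 0, 0, 0, 0]),   # z is binary
           bounds=Bounds(0, [1, np.inf, np.inf, np.inf, np.inf]))
z, x = res.x[0], res.x[1]
print(f"build={z:.0f}, capacity={x:.1f} kt/yr, expected cost={res.fun:.1f}")
```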
NASA Astrophysics Data System (ADS)
Vallis, Geoffrey K.; Colyer, Greg; Geen, Ruth; Gerber, Edwin; Jucker, Martin; Maher, Penelope; Paterson, Alexander; Pietschnig, Marianne; Penn, James; Thomson, Stephen I.
2018-03-01
Isca is a framework for the idealized modelling of the global circulation of planetary atmospheres at varying levels of complexity and realism. The framework is an outgrowth of models from the Geophysical Fluid Dynamics Laboratory in Princeton, USA, designed for Earth's atmosphere, but it may readily be extended into other planetary regimes. Various forcing and radiation options are available, from dry, time invariant, Newtonian thermal relaxation to moist dynamics with radiative transfer. Options are available in the dry thermal relaxation scheme to account for the effects of obliquity and eccentricity (and so seasonality), different atmospheric optical depths and a surface mixed layer. An idealized grey radiation scheme, a two-band scheme, and a multiband scheme are also available, all with simple moist effects and astronomically based solar forcing. At the complex end of the spectrum the framework provides a direct connection to comprehensive atmospheric general circulation models. For Earth modelling, options include an aquaplanet and configurable continental outlines and topography. Continents may be defined by changing albedo, heat capacity, and evaporative parameters and/or by using a simple bucket hydrology model. Oceanic Q fluxes may be added to reproduce specified sea surface temperatures, with arbitrary continental distributions. Planetary atmospheres may be configured by changing planetary size and mass, solar forcing, atmospheric mass, radiation, and other parameters. Examples are given of various Earth configurations as well as a giant planet simulation, a slowly rotating terrestrial planet simulation, and tidally locked and other orbitally resonant exoplanet simulations. The underlying model is written in Fortran and may largely be configured with Python scripts. Python scripts are also used to run the model on different architectures, to archive the output, and for diagnostics, graphics, and post-processing. All of these features are publicly available in a Git-based repository.
SIS and SIR epidemic models under virtual dispersal
Bichara, Derdei; Kang, Yun; Castillo-Chavez, Carlos; Horan, Richard; Perrings, Charles
2015-01-01
We develop a multi-group epidemic framework via virtual dispersal where the risk of infection is a function of residence time and local environmental risk. This novel approach eliminates the need to define and measure the contact rates used in traditional multi-group epidemic models with heterogeneous mixing. We apply this approach to a general n-patch SIS model whose basic reproduction number R0 is computed as a function of a patch residence-times matrix ℙ. Our analysis implies that the resulting n-patch SIS model has robust dynamics when patches are strongly connected: there is a unique globally stable endemic equilibrium when R0 > 1, while the disease-free equilibrium is globally stable when R0 ≤ 1. Our further analysis indicates that the dispersal behavior described by the residence-times matrix ℙ has profound effects on the disease dynamics at the single-patch level, with the consequence that suitable dispersal behavior, together with the local environmental risk, can either promote or eliminate endemicity in particular patches. Our work highlights the impact of the residence-times matrix when the patches are not strongly connected. Our framework can be generalized to other endemic and disease-outbreak models. As an illustration, we apply our framework to a two-patch SIR single-outbreak epidemic model where the process of disease invasion is connected to the final epidemic size relationship. We also explore the impact of prevalence-driven decisions using a phenomenological modeling approach in order to contrast the role of constant versus state-dependent ℙ on disease dynamics. PMID:26489419
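The next-generation-matrix computation of R0 from a residence-times matrix can be sketched as follows, for a generic two-patch SIS formulation in which patch-level risk is weighted by time spent in each patch. The parameter values are invented and the formulation is a standard one rather than the paper's exact system.

```python
# R0 for an n-patch SIS model with residence-times matrix P, where
# P[i, j] = fraction of time residents of patch i spend in patch j.
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])          # residence times (rows sum to 1)
N = np.array([1000.0, 2000.0])      # patch population sizes
beta = np.array([0.3, 0.8])         # patch-specific environmental risk
gamma = np.array([0.1, 0.1])        # recovery rates

# Effective population present in each patch at the disease-free equilibrium
N_eff = P.T @ N

# New-infections matrix F and transitions matrix V
F = np.zeros((2, 2))
for i in range(2):
    for k in range(2):
        F[i, k] = N[i] * np.sum(beta * P[i] * P[k] / N_eff)
V = np.diag(gamma)

R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
print(f"R0 = {R0:.2f}")   # endemic equilibrium expected iff R0 > 1
```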
Analytic Closed-Form Solution of a Mixed Layer Model for Stratocumulus Clouds
NASA Astrophysics Data System (ADS)
Akyurek, Bengu Ozge
Stratocumulus clouds play an important role in climate cooling and are hard to predict using global climate and weather forecast models. Thus, previous studies in the literature use observations and numerical simulation tools, such as large-eddy simulation (LES), to solve the governing equations for the evolution of stratocumulus clouds. In contrast to the previous works, this work provides an analytic closed-form solution to the cloud thickness evolution of stratocumulus clouds in a mixed-layer model framework. With a focus on application over coastal lands, the diurnal cycle of cloud thickness and whether or not clouds dissipate are of particular interest. An analytic solution enables the sensitivity analysis of implicitly interdependent variables and extrema analysis of cloud variables that are hard to achieve using numerical solutions. In this work, the sensitivity of inversion height, cloud-base height, and cloud thickness with respect to initial and boundary conditions, such as Bowen ratio, subsidence, surface temperature, and initial inversion height, are studied. A critical initial cloud thickness value that can be dissipated pre- and post-sunrise is provided. Furthermore, an extrema analysis is provided to obtain the minima and maxima of the inversion height and cloud thickness within 24 h. The proposed solution is validated against LES results under the same initial and boundary conditions. Then, the proposed analytic framework is extended to incorporate multiple vertical columns that are coupled by advection through wind flow. This enables a bridge between the micro-scale and the mesoscale relations. The effect of advection on cloud evolution is studied and a sensitivity analysis is provided.
Magnitude and sources of bias in the detection of mixed strain M. tuberculosis infection.
Plazzotta, Giacomo; Cohen, Ted; Colijn, Caroline
2015-03-07
High resolution tests for genetic variation reveal that individuals may simultaneously host more than one distinct strain of Mycobacterium tuberculosis. Previous studies find that this phenomenon, which we will refer to as "mixed infection", may affect the outcomes of treatment for infected individuals and may influence the impact of population-level interventions against tuberculosis. In areas where the incidence of TB is high, mixed infections have been found in nearly 20% of patients; these studies may underestimate the actual prevalence of mixed infection given that tests may not be sufficiently sensitive for detecting minority strains. Specific reasons for failing to detect mixed infections include low initial numbers of minority-strain cells in sputum, stochastic growth in culture and the physical division of initial samples into parts (typically only one of which is genotyped). In this paper, we develop a mathematical framework that models the study designs aimed at detecting mixed infections. Using both a deterministic and a stochastic approach, we obtain posterior estimates of the prevalence of mixed infection. We find that the posterior estimate of the prevalence of mixed infection may be substantially higher than the fraction of cases in which it is detected. We characterize this bias in terms of the sensitivity of the genotyping method and the relative growth rates and initial population sizes of the different strains collected in sputum. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
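The core of the bias argument can be sketched with a one-parameter Bayesian toy model: if mixed infections are detected with sensitivity s < 1, the observed fraction underestimates the true prevalence. Below, detections are assumed Binomial(n, theta*s) with a uniform prior on theta; the counts and sensitivity are hypothetical, and the paper's actual model is richer (growth stochasticity, sample splitting).

```python
# Grid posterior for true mixed-infection prevalence theta given imperfect
# detection sensitivity s. All numbers are invented for illustration.
import numpy as np
from scipy.stats import binom

n, detected, s = 200, 36, 0.6         # patients, observed mixed, sensitivity
theta = np.linspace(0.0, 1.0, 1001)
post = binom.pmf(detected, n, theta * s)   # likelihood x flat prior
post /= post.sum()                         # normalize on the grid

mean = (theta * post).sum()
print(f"observed fraction:          {detected / n:.2f}")
print(f"posterior mean prevalence:  {mean:.2f}")   # ~0.30, well above 0.18
```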
Almquist, Joachim; Bendrioua, Loubna; Adiels, Caroline Beck; Goksör, Mattias; Hohmann, Stefan; Jirstrand, Mats
2015-01-01
The last decade has seen a rapid development of experimental techniques that allow data collection from individual cells. These techniques have enabled the discovery and characterization of variability within a population of genetically identical cells. Nonlinear mixed effects (NLME) modeling is an established framework for studying variability between individuals in a population, frequently used in pharmacokinetics and pharmacodynamics, but its potential for studies of cell-to-cell variability in molecular cell biology is yet to be exploited. Here we take advantage of this novel application of NLME modeling to study cell-to-cell variability in the dynamic behavior of the yeast transcription repressor Mig1. In particular, we investigate a recently discovered phenomenon where Mig1, during a short and transient period, exits the nucleus when cells experience a shift from high to intermediate levels of extracellular glucose. A phenomenological model based on ordinary differential equations describing the transient dynamics of nuclear Mig1 is introduced, and according to the NLME methodology the parameters of this model are in turn modeled by a multivariate probability distribution. Using time-lapse microscopy data from nearly 200 cells, we estimate this parameter distribution by maximizing the population likelihood. Based on the estimated distribution, parameter values for individual cells are furthermore characterized and the resulting Mig1 dynamics are compared to the single-cell time-series data. The proposed NLME framework is also compared to the intuitive but limited standard two-stage (STS) approach. We demonstrate that the latter may overestimate variabilities by up to almost fivefold. Finally, Monte Carlo simulations of the inferred population model are used to predict the distribution of key characteristics of the Mig1 transient response. We find that with decreasing levels of post-shift glucose, the transient response of Mig1 tends to be faster, more extended, and displays an increased cell-to-cell variability. PMID:25893847
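Why the STS approach overestimates variability can be shown with a toy experiment: when each cell's parameters are fit independently, measurement noise inflates the apparent spread of the estimates, whereas an NLME population likelihood shrinks it back. Everything below (model, noise level, cell count) is invented; the actual study fits an ODE model of nuclear Mig1.

```python
# Toy STS illustration: per-cell fits absorb measurement noise into the
# estimated inter-cell spread of a rate parameter k.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 20)
true_k = rng.normal(1.0, 0.1, 200)        # true inter-cell sd = 0.1

est = []
for k in true_k:
    y = np.exp(-k * t) + rng.normal(0, 0.1, t.size)   # noisy trace
    (k_hat,), _ = curve_fit(lambda tt, kk: np.exp(-kk * tt), t, y, p0=[1.0])
    est.append(k_hat)

print(f"true sd of k: 0.100, STS sd estimate: {np.std(est):.3f}")
# The STS spread exceeds the true spread; NLME-style shrinkage corrects this.
```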
Fractional noise destroys or induces a stochastic bifurcation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qigui, E-mail: qgyang@scut.edu.cn; Zeng, Caibin, E-mail: zeng.cb@mail.scut.edu.cn; School of Automation Science and Engineering, South China University of Technology, Guangzhou 510640
2013-12-15
Little seems to be known about the stochastic bifurcation phenomena of non-Markovian systems. Our intention in this paper is to understand such complex dynamics through a simple system, namely, the Black-Scholes model driven by a mixed fractional Brownian motion. The most interesting finding is that the multiplicative fractional noise not only destroys but also induces a stochastic bifurcation under suitable conditions. This opens a possible way to explore the theory of stochastic bifurcation in the non-Markovian framework.
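For readers unfamiliar with the driving noise, here is a sketch of simulating a mixed fractional Brownian motion and a resulting Black-Scholes-type price path. The fBm is generated by Cholesky factorization of its covariance; all parameters (H, a, b, mu, sigma) are illustrative, and the drift treatment is a plain geometric construction, not a calibrated arbitrage-free model.

```python
# Mixed fractional Brownian motion a*W_t + b*B^H_t driving a toy price path.
import numpy as np

rng = np.random.default_rng(3)
n, T, H = 400, 1.0, 0.75
t = np.linspace(T / n, T, n)

# fBm covariance: C(s, u) = 0.5 * (s^2H + u^2H - |u - s|^2H)
C = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
           - np.abs(t[:, None] - t[None, :])**(2 * H))
C += 1e-10 * np.eye(n)                    # jitter for numerical stability
fbm = np.linalg.cholesky(C) @ rng.standard_normal(n)

bm = np.cumsum(rng.standard_normal(n)) * np.sqrt(T / n)  # ordinary BM

mu, sigma, a, b, S0 = 0.05, 0.2, 1.0, 0.5, 100.0
mix = a * bm + b * fbm                    # the mixed driving noise
S = S0 * np.exp(mu * t + sigma * mix)
print(f"S_T = {S[-1]:.2f}")
```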
2015-09-30
Meneveau, C., and L. Shen (2014), Large-eddy simulation of offshore wind farm, Physics of Fluids, 26, 025101. Zhang, Z., Fringer, O.B., and S.R... being centimeter scale, surface mixed-layer processes arising from the combined actions of tides, winds and mesoscale currents. Issues related to... the internal wave field and how it impacts the surface waves. APPROACH: We are focusing on the problem of modification of the wind-wave field
KMgene: a unified R package for gene-based association analysis for complex traits.
Yan, Qi; Fang, Zhou; Chen, Wei; Stegle, Oliver
2018-02-09
In this report, we introduce an R package KMgene for performing gene-based association tests for familial, multivariate or longitudinal traits using kernel machine (KM) regression under a generalized linear mixed model (GLMM) framework. Extensive simulations were performed to evaluate the validity of the approaches implemented in KMgene. http://cran.r-project.org/web/packages/KMgene. qi.yan@chp.edu or wei.chen@chp.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press.
Penas, David R; Henriques, David; González, Patricia; Doallo, Ramón; Saez-Rodriguez, Julio; Banga, Julio R
2017-01-01
We consider a general class of global optimization problems dealing with nonlinear dynamic models. Although this class is relevant to many areas of science and engineering, here we are interested in applying this framework to the reverse-engineering problem in computational systems biology, which yields very large mixed-integer dynamic optimization (MIDO) problems. In particular, we consider the framework of logic-based ordinary differential equations (ODEs). We present saCeSS2, a parallel method for the solution of this class of problems, based on a parallel cooperative scatter search metaheuristic with new mechanisms of self-adaptation and specific extensions to handle large mixed-integer problems. We have paid special attention to the avoidance of convergence stagnation using adaptive cooperation strategies tailored to this class of problems. We illustrate its performance with a set of three very challenging case studies from the domain of dynamic modelling of cell signaling. The simplest case study considers a synthetic signaling pathway and has 84 continuous and 34 binary decision variables. A second case study considers the dynamic modeling of signaling in liver cancer using high-throughput data, and has 135 continuous and 109 binary decision variables. The third case study is an extremely difficult problem related to breast cancer, involving 690 continuous and 138 binary decision variables. We report computational results obtained on different infrastructures, including a local cluster, a large supercomputer and a public cloud platform. Interestingly, the results show how the cooperation of individual parallel searches modifies the systemic properties of the sequential algorithm, achieving superlinear speedups compared to an individual search (e.g. speedups of 15 with 10 cores) and significantly improving performance (by more than 60%) with respect to a non-cooperative parallel scheme. The scalability of the method is also good (tests were performed using up to 300 cores). These results demonstrate that saCeSS2 can be used to successfully reverse engineer large dynamic models of complex biological pathways. Further, these results open up new possibilities for other MIDO-based large-scale applications in the life sciences, such as metabolic engineering, synthetic biology and drug scheduling.
Nere, Nandkishor K; Allen, Kimberley C; Marek, James C; Bordawekar, Shailendra V
2012-10-01
Drying an early-stage active pharmaceutical ingredient candidate required excessively long cycle times in a pilot-plant agitated filter dryer. The key to faster drying is to ensure sufficient heat transfer and to minimize mass-transfer limitations. Designing the right mixing protocol is of utmost importance for achieving efficient heat transfer. To this end, a composite model was developed for the removal of bound solvent that incorporates models for heat transfer and desolvation kinetics. The proposed heat transfer model differs from previously reported models in two respects: it accounts for the effects of a gas gap between the vessel wall and the solids on the overall heat transfer coefficient, and for the effect of headspace pressure on the mean free path of the inert gas and thereby on the heat transfer between the vessel wall and the first layer of solids. A computational methodology was developed incorporating the effects of mixing and headspace pressure to simulate the drying profile using a modified model framework within the Dynochem software. A dryer operational protocol was designed based on the desolvation kinetics, thermal stability studies of the wet and dry cake, and the understanding gained through model simulations, resulting in a multifold reduction in drying time. Copyright © 2012 Wiley-Liss, Inc.
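The series-resistance idea behind the gas-gap and headspace-pressure effects can be sketched as follows: the effective gas conductivity across the gap falls at low pressure because the mean free path becomes comparable to the gap width. Property values are generic (nitrogen-like), and the resistance network is a simplified stand-in for the paper's composite model.

```python
# Toy wall-to-bed heat transfer with a rarefaction-sensitive gas gap.
import numpy as np

def mean_free_path(P, T=300.0, d=3.7e-10):
    """Mean free path (m) of gas molecules at pressure P (Pa)."""
    kB = 1.380649e-23
    return kB * T / (np.sqrt(2.0) * np.pi * d**2 * P)

def U_overall(P, gap=1e-4, k_gas=0.026, h_wall=500.0, h_bed=200.0):
    """Overall wall-to-solids coefficient (W/m^2/K) with a gas gap."""
    Kn = mean_free_path(P) / gap            # Knudsen number of the gap
    k_eff = k_gas / (1.0 + 2.0 * Kn)        # rarefaction-reduced conductivity
    R = 1.0 / h_wall + gap / k_eff + 1.0 / h_bed   # resistances in series
    return 1.0 / R

for P in (100.0, 1e3, 1e4, 1e5):            # 1 mbar .. 1 bar headspace
    print(f"P = {P:8.0f} Pa -> U = {U_overall(P):6.1f} W/m2/K")
```

Running the loop shows U dropping as the headspace pressure falls, which is the qualitative effect the composite model captures.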
Primal-mixed formulations for reaction-diffusion systems on deforming domains
NASA Astrophysics Data System (ADS)
Ruiz-Baier, Ricardo
2015-10-01
We propose a finite element formulation for a coupled elasticity-reaction-diffusion system written in a fully Lagrangian form and governing the spatio-temporal interaction of species inside an elastic or hyperelastic body. A primal weak formulation is the baseline model for the reaction-diffusion system written in the deformed domain, and a finite element method with piecewise linear approximations is employed for its spatial discretization. On the other hand, the strain is introduced as a mixed variable in the equations of elastodynamics, which in turn acts as the coupling field needed to update the diffusion tensor of the modified reaction-diffusion system written in the deformed domain. The discrete mechanical problem yields a mixed finite element scheme based on row-wise Raviart-Thomas elements for stresses, Brezzi-Douglas-Marini elements for displacements, and piecewise constant pressure approximations. The application of the present framework to the study of several coupled biological systems on deforming geometries in two and three spatial dimensions is discussed, and some illustrative examples are provided and extensively analyzed.
Agrawal, Prateek; Frugiuele, Claudia
2014-01-01
We study the phenomenology of a light stop NLSP in the presence of large mixing with either the first or the second generation. R-symmetric models provide a prime setting for this scenario, but our discussion also applies to the MSSM when a significant amount of mixing can be accommodated. In our framework the dominant stop decay is through the flavor violating mode into a light jet and the LSP in an extended region of parameter space. There are currently no limits from ATLAS and CMS in this region. We emulate shape-based hadronic SUSY searches for this topology, and find that they have potential sensitivity. If the extension of these analyses to this region is robust, we find that these searches can set strong exclusion limits on light stops. If not, then the flavor violating decay mode is challenging and may represent a blind spot in stop searches even at 13 TeV. Thus, an experimental investigation of this scenario is well motivated.
Wildlife adaptations and management in eastside interior forests with mixed severity fire regimes.
John F. Lehmkuhl
2004-01-01
Little is known about the effects of mixed severity fire on wildlife, but a population viability analysis framework that considers habitat quantity and quality, species life history, and species population structure can be used to analyze management options. Landscape-scale habitat patterns under a mixed severity fire regime are a mosaic of compositional and structural...
Martinez, Jorge L; Raiber, Matthias; Cendón, Dioni I
2017-01-01
The influence of mountain front recharge on the water balance of alluvial valley aquifers located in upland catchments of the Condamine River basin in Queensland, Australia, is investigated through the development of an integrated hydrogeological framework. A combination of three-dimensional (3D) geological modelling, hydraulic gradient maps, multivariate statistical analyses and hydrochemical mixing calculations is proposed for the identification of hydrochemical end-members and quantification of the relative contributions of each end-member to alluvial aquifer recharge. The recognised end-members correspond to diffuse recharge and lateral groundwater inflows from three hydrostratigraphic units directly connected to the alluvial aquifer. This approach allows mapping zones of potential inter-aquifer connectivity and areas of groundwater mixing between underlying units and the alluvium. Mixing calculations using samples collected under baseflow conditions reveal that lateral contribution from a regional volcanic aquifer system represents the majority (41%) of inflows to the alluvial aquifer. Diffuse recharge contribution (35%) and inflow from two sedimentary bedrock hydrostratigraphic units (collectively 24%) comprise the remainder of major recharge sources. A detailed geochemical assessment of alluvial groundwater evolution along a selected flowpath of a representative subcatchment of the Condamine River basin confirms mixing as a key process responsible for observed spatial variations in hydrochemistry. Dissolution of basalt-related minerals and dolomite, CO2 uptake, ion-exchange, precipitation of clay minerals, and evapotranspiration further contribute to the hydrochemical evolution of groundwater in the upland alluvial aquifer. This study highlights the benefits of undertaking an integrated approach that combines multiple independent lines of evidence. The proposed methods can be applied to investigate processes associated with inter-aquifer mixing, including groundwater contamination resulting from depressurisation of underlying geological units hydraulically connected to the shallower water reservoirs. Copyright © 2016 Elsevier B.V. All rights reserved.
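An end-member mixing calculation of the kind described above reduces to a small constrained least-squares problem: find non-negative source fractions that best reproduce a sample's tracer chemistry and sum to one. The tracer concentrations below are invented, not the Condamine data.

```python
# Toy end-member mixing: non-negative fractions of four recharge sources.
import numpy as np
from scipy.optimize import nnls

# rows: tracers (e.g., Cl, Na, Mg, HCO3); columns: end-members
A = np.array([[ 20.0,  60.0, 150.0,  90.0],
              [ 15.0,  55.0, 120.0,  70.0],
              [  5.0,  40.0,  20.0,  10.0],
              [100.0, 350.0, 250.0, 200.0]])
sample = np.array([45.0, 40.0, 25.0, 240.0])

# enforce sum(f) = 1 softly with a heavily weighted extra equation
w = 1e3
A_aug = np.vstack([A, w * np.ones(4)])
b_aug = np.append(sample, w)
f, _ = nnls(A_aug, b_aug)
for name, fi in zip(["diffuse", "volcanic", "bedrock-1", "bedrock-2"], f):
    print(f"{name:10s} {fi:5.2f}")
```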
Smart licensing and environmental flows: Modeling framework and sensitivity testing
NASA Astrophysics Data System (ADS)
Wilby, R. L.; Fenn, C. R.; Wood, P. J.; Timlett, R.; Lequesne, T.
2011-12-01
Adapting to climate change is just one among many challenges facing river managers. The response will involve balancing the long-term water demands of society with the changing needs of the environment in sustainable and cost-effective ways. This paper describes a modeling framework for evaluating the sensitivity of low river flows to different configurations of abstraction licensing under both historical climate variability and expected climate change. A rainfall-runoff model is used to quantify trade-offs among environmental flow (e-flow) requirements, potential surface and groundwater abstraction volumes, and the frequency of harmful low-flow conditions. Using the River Itchen in southern England as a case study it is shown that the abstraction volume is more sensitive to uncertainty in the regional climate change projection than to the e-flow target. It is also found that "smarter" licensing arrangements (involving a mix of hands-off flows and "rising block" abstraction rules) could achieve e-flow targets more frequently than conventional seasonal abstraction limits, with only modest reductions in average annual yield, even under a hotter, drier climate change scenario.
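A "smart" licence of the kind compared in the paper can be expressed as a simple rule: no abstraction below a hands-off flow, then a rising share of the flow in each successive block, capped at the licensed maximum. All thresholds and shares below are illustrative, not the Itchen licence conditions.

```python
# Toy hands-off-flow plus rising-block abstraction rule.
def allowed_abstraction(q, hof=1.0,
                        blocks=((2.0, 0.1), (4.0, 0.2), (8.0, 0.3)),
                        licence_cap=1.5):
    """Permitted abstraction (m^3/s) given river flow q (m^3/s)."""
    take, lower = 0.0, hof
    for upper, share in blocks:
        if q <= lower:
            break
        take += share * (min(q, upper) - lower)  # share of flow in this block
        lower = upper
    return min(take, licence_cap)

for q in (0.8, 2.5, 5.0, 12.0):
    print(f"flow {q:4.1f} -> abstract {allowed_abstraction(q):4.2f}")
```

The rule abstracts nothing in drought (flow below the hands-off threshold) while taking proportionally more in high flows, which is why it can meet e-flow targets more often than a flat seasonal limit.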
Mixed methods for telehealth research.
Caffery, Liam J; Martin-Khan, Melinda; Wade, Victoria
2017-10-01
Mixed methods research is important to health services research because the integrated qualitative and quantitative investigation can give a more comprehensive understanding of complex interventions such as telehealth than can a single-method study. Further, mixed methods research is applicable to translational research and program evaluation. Study designs relevant to telehealth research are described and supported by examples. Quality assessment tools, frameworks to assist in the reporting and review of mixed methods research, and related methodologies are also discussed.
Axion-assisted production of sterile neutrino dark matter
Berlin, Asher; Hooper, Dan
2017-04-12
Sterile neutrinos can be generated in the early universe through oscillations with active neutrinos and represent a popular and well-studied candidate for our Universe's dark matter. Stringent constraints from X-ray and gamma-ray line searches, however, have excluded the simplest of such models. Here we propose a novel alternative to the standard scenario in which the mixing angle between the sterile and active neutrinos is a dynamical quantity, induced through interactions with a light axionlike field. As the energy density of the axionlike particles is diluted by Hubble expansion, the degree of mixing is reduced at late times, suppressing the decay rate and easily alleviating any tension with X-ray or gamma-ray constraints. Lastly, we present a simple model that illustrates the phenomenology of this scenario, and describe a framework in which the QCD axion is responsible for the production of sterile neutrinos in the early universe.
NASA Astrophysics Data System (ADS)
Mezentsev, Yu A.; Baranova, N. V.
2018-05-01
A universal economic-mathematical model for determining optimal strategies for managing the production and logistics subsystems (and their components) of enterprises is considered. The claimed universality allows both production components, including limitations on the ways raw materials and components are converted into sold goods, and resource and logical restrictions on input and output material flows to be taken into account at the system level. The presented model and the control problems it generates are developed within a unified framework that allows logical conditions of arbitrary complexity to be implemented and the corresponding formal optimization problems to be defined. The conceptual meaning of the criteria and constraints used is explained. The generated mixed-programming problems are shown to belong to the class NP. An approximate polynomial algorithm is proposed for solving the posed mixed-programming optimization problems of realistic dimension and high computational complexity. Results of testing the algorithm on problems over a wide range of dimensions are presented.
Research design: the methodology for interdisciplinary research framework.
Tobi, Hilde; Kampen, Jarl K
2018-01-01
Many of today's global scientific challenges require the joint involvement of researchers from different disciplinary backgrounds (social sciences, environmental sciences, climatology, medicine, etc.). Such interdisciplinary research teams face many challenges resulting from differences in training and scientific culture. Interdisciplinary education programs are required to train truly interdisciplinary scientists with the critical skills and competences. For that purpose this paper presents the Methodology for Interdisciplinary Research (MIR) framework. The MIR framework was developed to help researchers cross disciplinary borders, especially those between the natural sciences and the social sciences. The framework has been specifically constructed to facilitate the design of interdisciplinary scientific research, and can be applied in an educational program, as a reference for monitoring the phases of interdisciplinary research, and as a tool to design such research in a process approach. It is suitable for research projects of different sizes and levels of complexity, and it allows for a range of method combinations (case study, mixed methods, etc.). The different phases of designing interdisciplinary research in the MIR framework are described and illustrated by real-life applications in teaching and research. We further discuss the framework's utility for research design in landscape architecture and mixed methods research, and provide an outlook on the framework's potential in inclusive interdisciplinary research and, last but not least, research integrity.
A Comparative Study of Spatial Aggregation Methodologies under the BioEarth Framework
NASA Astrophysics Data System (ADS)
Chandrasekharan, B.; Rajagopalan, K.; Malek, K.; Stockle, C. O.; Adam, J. C.; Brady, M.
2014-12-01
The increasing probability of water resource scarcity due to climate change has highlighted the need for adopting an economic focus in modelling water resource uses. Hydro-economic models, developed by integrating economic optimization with biophysical crop models, are driven by the economic value of water, revealing its most efficient uses and helping policymakers evaluate different water management strategies. One of the challenges in integrating biophysical models with economic models is the difference in the spatial scales at which they operate. Biophysical models that provide crop production functions typically run at a smaller scale than economic models, and substantial spatial aggregation is required. However, any aggregation introduces a bias, i.e., a discrepancy between the functional value at the higher spatial scale and the value at the spatial scale of the aggregated units. The objective of this work is to study the sensitivity of net economic benefits in the Yakima River basin (YRB) to different spatial aggregation methods for crop production functions. The spatial aggregation methodologies that we compare involve agro-ecological zones (AEZs) and aggregation levels that reflect water management regimes (e.g. irrigation districts). Aggregation bias can distort the underlying data and result in extreme solutions. In order to avoid this we use an economic optimization model that incorporates the synthetic and historical crop mixes approach (Onal & Chen, 2012). This restricts the solutions to lie between weighted averages of historical and simulated feasible planting decisions, with the weights associated with crop mixes treated as endogenous variables. This study is focused on 5 major irrigation districts of the YRB in the Pacific Northwest US. The biophysical modeling framework we use, BioEarth, includes the coupled hydrology and crop growth model VIC-CropSyst and an economic optimization model. Preliminary findings indicate that the standard approach of developing AEZs does not perform well when overlaid with irrigation districts. Moreover, net economic benefits were significantly different between the two aggregation methodologies. Therefore, when developing hydro-economic models, significant consideration should be given to the aggregation methodology.
Novak, Laurie L; Johnson, Kevin B; Lorenzi, Nancy M
2010-01-01
The objective of this review was to describe methods used to study and model workflow. The authors included studies set in a variety of industries using qualitative, quantitative and mixed methods. Of the 6221 matching abstracts, 127 articles were included in the final corpus. The authors collected data from each article on researcher perspective, study type, methods type, specific methods, approaches to evaluating quality of results, definition of workflow and dependent variables. Ethnographic observation and interviews were the most frequently used methods. Long study durations revealed the large time commitment required for descriptive workflow research. The most frequently discussed technique for evaluating quality of study results was triangulation. The definition of the term “workflow” and choice of methods for studying workflow varied widely across research areas and researcher perspectives. The authors developed a conceptual framework of workflow-related terminology for use in future research and present this model for use by other researchers. PMID:20442143
Eluru, Naveen; Chakour, Vincent; Chamberlain, Morgan; Miranda-Moreno, Luis F
2013-10-01
Vehicle operating speed measured on roadways is a critical component for a host of analysis in the transportation field including transportation safety, traffic flow modeling, roadway geometric design, vehicle emissions modeling, and road user route decisions. The current research effort contributes to the literature on examining vehicle speed on urban roads methodologically and substantively. In terms of methodology, we formulate a new econometric model framework for examining speed profiles. The proposed model is an ordered response formulation of a fractional split model. The ordered nature of the speed variable allows us to propose an ordered variant of the fractional split model in the literature. The proposed formulation allows us to model the proportion of vehicles traveling in each speed interval for the entire segment of roadway. We extend the model to allow the influence of exogenous variables to vary across the population. Further, we develop a panel mixed version of the fractional split model to account for the influence of site-specific unobserved effects. The paper contributes substantively by estimating the proposed model using a unique dataset from Montreal consisting of weekly speed data (collected in hourly intervals) for about 50 local roads and 70 arterial roads. We estimate separate models for local roads and arterial roads. The model estimation exercise considers a whole host of variables including geometric design attributes, roadway attributes, traffic characteristics and environmental factors. The model results highlight the role of various street characteristics including number of lanes, presence of parking, presence of sidewalks, vertical grade, and bicycle route on vehicle speed proportions. The results also highlight the presence of site-specific unobserved effects influencing the speed distribution. The parameters from the modeling exercise are validated using a hold-out sample not considered for model estimation. The results indicate that the proposed panel mixed ordered probit fractional split model offers promise for modeling such proportional ordinal variables. Copyright © 2013 Elsevier Ltd. All rights reserved.
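The core of the ordered fractional split formulation can be sketched directly: for a road segment, the modeled proportion of vehicles in each speed interval is a difference of normal CDFs evaluated at threshold points shifted by the segment's covariate index. The thresholds and coefficients below are made up for illustration, not estimates from the Montreal data.

```python
# Proportions over ordered speed intervals from an ordered-probit-style
# fractional split specification (illustrative parameters only).
import numpy as np
from scipy.stats import norm

thresholds = np.array([-1.0, 0.0, 0.8, 1.8])   # interval cut points
x = np.array([1.0, 2.0, 0.0, 1.0])             # segment covariates (incl. const)
beta = np.array([0.2, -0.15, 0.3, 0.1])        # hypothetical coefficients

z = x @ beta
cdf = norm.cdf(np.r_[-np.inf, thresholds, np.inf] - z)
props = np.diff(cdf)                           # proportions in 5 intervals
print(props, props.sum())                      # sums to 1 by construction
```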
Huang, Chao; Wu, Jie; Song, Chuanjun; Ding, Ran; Qiao, Yan; Hou, Hongwei; Chang, Junbiao; Fan, Yaoting
2015-06-28
Upon single-crystal-to-single-crystal (SCSC) oxidation/reduction, reversible structural transformations take place between the anionic porous zeolite-like Cu(I) framework and a topologically equivalent neutral Cu(I)Cu(II) mixed-valent framework. This unique conversion behavior makes the Cu(I) framework a redox-switchable catalyst for the direct arylation of heterocycle C-H bonds.
Moorkanikkara, Srinivas Nageswaran; Blankschtein, Daniel
2010-12-21
How does one design a surfactant mixture using a set of available surfactants such that it exhibits a desired adsorption kinetics behavior? The traditional approach used to address this design problem involves conducting trial-and-error experiments with specific surfactant mixtures. This approach is typically time-consuming and resource-intensive and becomes increasingly challenging when the number of surfactants that can be mixed increases. In this article, we propose a new theoretical framework to identify a surfactant mixture that most closely meets a desired adsorption kinetics behavior. Specifically, the new theoretical framework involves (a) formulating the surfactant mixture design problem as an optimization problem using an adsorption kinetics model and (b) solving the optimization problem using a commercial optimization package. The proposed framework aims to identify the surfactant mixture that most closely satisfies the desired adsorption kinetics behavior subject to the predictive capabilities of the chosen adsorption kinetics model. Experiments can then be conducted at the identified surfactant mixture condition to validate the predictions. We demonstrate the reliability and effectiveness of the proposed theoretical framework through a realistic case study by identifying a nonionic surfactant mixture consisting of up to four alkyl poly(ethylene oxide) surfactants (C₁₀E₄, C₁₂E₅, C₁₂E₆, and C₁₀E₈) such that it most closely exhibits a desired dynamic surface tension (DST) profile. Specifically, we use the Mulqueen-Stebe-Blankschtein (MSB) adsorption kinetics model (Mulqueen, M.; Stebe, K. J.; Blankschtein, D. Langmuir 2001, 17, 5196-5207) to formulate the optimization problem as well as the SNOPT commercial optimization solver to identify a surfactant mixture consisting of these four surfactants that most closely exhibits the desired DST profile. Finally, we compare the experimental DST profile measured at the surfactant mixture condition identified by the new theoretical framework with the desired DST profile and find good agreement between the two profiles.
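The optimization structure can be sketched generically: minimize the squared deviation between a model-predicted DST profile and the target over the composition simplex. The placeholder DST model below is invented purely for illustration; the paper uses the MSB adsorption kinetics model and the SNOPT solver instead.

```python
# Toy mixture-design optimization against a target DST profile.
import numpy as np
from scipy.optimize import minimize

t = np.logspace(-2, 2, 30)                       # time (s)

def dst_model(x, t):
    """Placeholder DST (mN/m) for composition x of 4 surfactants."""
    rates = np.array([0.3, 1.0, 3.0, 10.0])      # hypothetical kinetics
    plateaus = np.array([35.0, 32.0, 30.0, 28.0])
    gamma0 = 72.0
    return gamma0 - (gamma0 - x @ plateaus) * (1 - np.exp(-np.outer(t, rates) @ x))

target = dst_model(np.array([0.1, 0.2, 0.3, 0.4]), t)   # pretend "desired" DST

def objective(x):
    x = np.abs(x) / np.abs(x).sum()              # project onto the simplex
    return np.sum((dst_model(x, t) - target) ** 2)

res = minimize(objective, x0=np.full(4, 0.25), method="Nelder-Mead")
x_opt = np.abs(res.x) / np.abs(res.x).sum()
print("optimal composition:", np.round(x_opt, 3))
```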
Modeling of digestive processes in the stomach as a Fluid-Structure Interaction (FSI) phenomenon
NASA Astrophysics Data System (ADS)
Acharya, Shashank; Kou, Wenjun; Kahrilas, Peter J.; Pandolfino, John E.; Patankar, Neelesh A.
2017-11-01
The process of digestion in the gastro-intestinal (GI) tract is complex, both mechanically and chemically. Digestion in the stomach involves substantial mixing and breakup of food into smaller particles by muscular activity. In this work, we have developed a fully resolved model of the stomach (along with the esophagus) and its various muscle groups, which deform the wall to agitate the contents inside. We use the immersed-boundary finite-element method to model this FSI problem. From the resulting simulations, the mixing intensity is analyzed as a function of muscle deformation. Because muscle deformation is controlled by changing the intensity of the neural signal, the material properties of the stomach wall have a significant effect on the resultant kinematics. The model is then used to identify the source of common GI-tract motility pathologies by replicating irregular motions as a consequence of varying the mechanical properties of the wall and the related activation signal patterns. This approach gives us an in-silico framework that can be used to study the effect of tissue properties and muscle activity on the mechanical response of the stomach wall. This work is supported by NIH Grant 5R01DK079902-09.
NASA Astrophysics Data System (ADS)
Groeskamp, S.; Zika, J. D.; McDougall, T. J.; Sloyan, B.
2016-02-01
I will present results of a new inverse technique that infers small-scale turbulent diffusivities and mesoscale eddy diffusivities from an ocean climatology of salinity (S) and temperature (T) in combination with surface freshwater and heat fluxes. First, the ocean circulation is represented in (S,T) coordinates by the diathermohaline streamfunction. Framing the ocean circulation in (S,T) coordinates isolates the component of the circulation that is directly related to water-mass transformation. Because water-mass transformation is directly related to fluxes of salt and heat, this framework allows for the formulation of an inverse method in which the diathermohaline streamfunction is balanced against known air-sea forcing and unknown mixing. When applying this inverse method to observations, we obtain observationally based estimates of both the streamfunction and the mixing. The results reveal new information about the component of the global ocean circulation due to water-mass transformation and its relation to surface freshwater and heat fluxes and to small-scale and mesoscale mixing. The results provide global constraints on spatially varying patterns of diffusivities required to obtain a realistic overturning circulation. We find that mesoscale isopycnal mixing is much smaller than expected. These results are important for our understanding of the relation between the global ocean circulation and mixing and may lead to improved parameterisations in numerical ocean models.
Testing constrained sequential dominance models of neutrinos
NASA Astrophysics Data System (ADS)
Björkeroth, Fredrik; King, Stephen F.
2015-12-01
Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We analyze a class of CSD(n) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the 'atmospheric' and 'solar' neutrino masses with Yukawa couplings to (νe, νμ, ντ) proportional to (0, 1, 1) and (1, n, n−2), respectively, where n is a positive integer. These coupling patterns may arise in indirect family symmetry models based on A4. With two right-handed neutrinos, using a χ² test, we find a good agreement with data for CSD(3) and CSD(4), where the entire Pontecorvo-Maki-Nakagawa-Sakata mixing matrix is controlled by a single phase η, which takes simple values, leading to accurate predictions for the mixing angles and the magnitude of the oscillation phase |δCP|. We carefully study the perturbing effect of a third 'decoupled' right-handed neutrino, leading to a bound on the lightest physical neutrino mass m1 ≲ 1 meV for the viable cases, corresponding to a normal neutrino mass hierarchy. We also discuss a direct link between the oscillation phase δCP and leptogenesis in CSD(n) due to the same see-saw phase η appearing in both the neutrino mass matrix and leptogenesis.
Kelvin-Helmholtz instabilities as the source of inhomogeneous mixing in nova explosions.
Casanova, Jordi; José, Jordi; García-Berro, Enrique; Shore, Steven N; Calder, Alan C
2011-10-19
Classical novae are thermonuclear explosions in binary stellar systems containing a white dwarf accreting material from a close companion star. They repeatedly eject 10⁻⁴-10⁻⁵ solar masses of nucleosynthetically enriched gas into the interstellar medium, recurring on intervals of decades to tens of millennia. They are probably the main sources of Galactic ¹⁵N, ¹⁷O and ¹³C. The origin of the large enhancements and inhomogeneous distribution of these species observed in high-resolution spectra of ejected nova shells has, however, remained unexplained for almost half a century. Several mechanisms, including mixing by diffusion, shear or resonant gravity waves, have been proposed in the framework of one-dimensional or two-dimensional simulations, but none has hitherto proven successful because convective mixing can only be modelled accurately in three dimensions. Here we report the results of a three-dimensional nuclear-hydrodynamic simulation of mixing at the core-envelope interface during nova outbursts. We show that buoyant fingering drives vortices from the Kelvin-Helmholtz instability, which inevitably enriches the accreted envelope with material from the outer white-dwarf core. Such mixing also naturally produces large-scale chemical inhomogeneities. Both the metallicity enhancement and the intrinsic dispersions in the abundances are consistent with the observed values.
Bayesian mixture analysis for metagenomic community profiling.
Morfopoulou, Sofia; Plagnol, Vincent
2015-09-15
Deep sequencing of clinical samples is now an established tool for the detection of infectious pathogens, with direct medical applications. The large amount of data generated produces an opportunity to detect species even at very low levels, provided that computational tools can effectively profile the relevant metagenomic communities. Data interpretation is complicated by the fact that short sequencing reads can match multiple organisms and by the lack of completeness of existing databases, in particular for viral pathogens. Here we present metaMix, a Bayesian mixture model framework for resolving complex metagenomic mixtures. We show that the use of parallel Monte Carlo Markov chains for the exploration of the species space enables the identification of the set of species most likely to contribute to the mixture. We demonstrate the greater accuracy of metaMix compared with relevant methods, particularly for profiling complex communities consisting of several related species. We designed metaMix specifically for the analysis of deep transcriptome sequencing datasets, with a focus on viral pathogen detection; however, the principles are generally applicable to all types of metagenomic mixtures. metaMix is implemented as a user-friendly R package, freely available on CRAN: http://cran.r-project.org/web/packages/metaMix sofia.morfopoulou.10@ucl.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
Radial mixing in turbomachines
NASA Astrophysics Data System (ADS)
Segaert, P.; Hirsch, Ch.; Deruyck, J.
1991-03-01
A method for computing the effects of radial mixing in a turbomachinery blade row has been developed. The method fits in the framework of a quasi-3D flow computation and hence is applied in a corrective fashion to through-flow distributions. The method takes into account both secondary flows and turbulent diffusion as possible sources of mixing. Secondary flow velocities determine the magnitude of the convection terms in the energy redistribution equation, while a turbulent diffusion coefficient determines the magnitude of the diffusion terms. Secondary flows are computed by solving a Poisson equation for a secondary streamfunction on a transversal S3-plane, whereby the right-hand-side axial vorticity is composed of different contributions, each associated with a particular flow region: inviscid core flow, end-wall boundary layers, profile boundary layers and wakes. The turbulent mixing coefficient is estimated by a semi-empirical correlation. Secondary flow theory is applied to the VUB cascade test case and comparisons are made between the computational results and the extensive experimental data available for this test case. This comparison shows that the secondary flow computations yield reliable predictions of the secondary flow pattern, both qualitatively and quantitatively, within the limitations of the model. However, the computations show that the use of a uniform mixing coefficient has to be replaced by a more sophisticated approach.
Tripathy, P P
2015-03-01
Drying experiments have been performed with potato cylinders and slices using a laboratory-scale natural convection mixed-mode solar dryer. The drying data were fitted to eight different mathematical models to predict the drying kinetics, and the validity of these models was evaluated statistically through the coefficient of determination (R²), root mean square error (RMSE) and reduced chi-square (χ²). The present investigation showed that amongst all the mathematical models studied, the Modified Page model was in good agreement with the experimental drying data for both potato cylinders and slices. A mathematical framework has been proposed to estimate the performance of the food dryer in terms of net CO2 emissions mitigation potential along with the unit cost of CO2 mitigation arising from the replacement of different fossil fuels by renewable solar energy. For each fossil fuel replaced, the gross annual amount of CO2 as well as the net annual CO2 emissions mitigation potential, accounting for the CO2 emissions embodied in the manufacture of the mixed-mode solar dryer, has been estimated. The CO2 mitigation potential and the amount of fossil fuel saved while drying potato samples were found to be greatest for coal, followed by light diesel oil and natural gas. It was inferred from the present study that by the year 2020, 23% of CO2 emissions can be mitigated by the use of mixed-mode solar dryers for drying of agricultural products.
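The model-fitting step can be sketched with one common form of the Modified Page model, MR(t) = exp(-(k t)^n), fitted by nonlinear least squares and scored with the three reported statistics. The data points below are synthetic stand-ins, not the paper's measured moisture ratios.

```python
# Fit the Modified Page thin-layer drying model and report R^2, RMSE,
# and reduced chi-square. Synthetic data for illustration only.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 301, 30, dtype=float)                # drying time (min)
mr = (np.exp(-(0.008 * t) ** 1.2)
      + np.random.default_rng(4).normal(0, 0.01, t.size))

def modified_page(t, k, n):
    return np.exp(-((k * t) ** n))

(k, n), _ = curve_fit(modified_page, t, mr, p0=[0.01, 1.0])
resid = mr - modified_page(t, k, n)
ss_res = np.sum(resid ** 2)
ss_tot = np.sum((mr - mr.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(ss_res / t.size)
chi2_red = ss_res / (t.size - 2)                      # 2 fitted parameters
print(f"k={k:.4f}, n={n:.2f}, R2={r2:.4f}, RMSE={rmse:.4f}, chi2={chi2_red:.2e}")
```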
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ching, Ping Pui; Riemer, Nicole; West, Matthew
2016-05-27
Black carbon (BC) is usually mixed with other aerosol species within individual aerosol particles. This mixture, along with the particles' size and morphology, determines the particles' optical and cloud condensation nuclei properties, and hence black carbon's climate impacts. In this study the particle-resolved aerosol model PartMC-MOSAIC was used to quantify the importance of black carbon mixing state for predicting cloud microphysical quantities. Based on a set of about 100 cloud parcel simulations, a process-level analysis framework was developed to attribute the response in cloud microphysical properties to changes in the underlying aerosol population ("plume effect") and the cloud parcel cooling rate ("parcel effect"). It shows that the response of cloud droplet number concentration to changes in BC emissions depends on the BC mixing state. When the aerosol population contains mainly aged BC particles, an increase in BC emissions results in increasing cloud droplet number concentrations ("additive effect"). In contrast, when the aerosol population contains mainly fresh BC particles, they act as sinks for condensable gaseous species, resulting in a decrease in cloud droplet number concentration as BC emissions are increased ("competition effect"). Additionally, we quantified the error in cloud microphysical quantities when the information on BC mixing state is neglected, as is often done in aerosol models. The errors ranged from -12% to +45% for the cloud droplet number fraction, from 0% to +1022% for the nucleation-scavenged black carbon (BC) mass fraction, from -12% to +4% for the effective radius, and from -30% to +60% for the relative dispersion.
Hemispheric Differences in Tropical Lower Stratospheric Transport and Tracers Annual Cycle
NASA Technical Reports Server (NTRS)
Tweedy, Olga; Waugh, D.; Stolarski, R.; Oman, L.
2016-01-01
Transport of long-lived tracers (such as O3, CO, and N2O) in the lower stratosphere largely determines the composition of the entire stratosphere. Stratospheric transport comprises the mean residual circulation (with air rising in the tropics and sinking in the polar and middle latitudes) plus two-way isentropic (quasi-horizontal) mixing by eddies. However, the relative importance of the two transport components remains uncertain. Previous studies quantified the relative roles of these processes from tropics-wide average characteristics, under the common assumption of a well-mixed tropics. However, multiple instruments provide evidence of significant differences in the seasonal cycle of ozone between the Northern (0-20N) and Southern (0-20S) tropical (NT and ST, respectively) lower stratosphere. In this study we investigate these differences in tracer seasonality and quantify the transport processes affecting the annual cycle amplitude of tracers, using simulations from the Goddard Earth Observing System Chemistry Climate Model (GEOSCCM) and the Whole Atmosphere Community Climate Model (WACCM), compared with observations from the Microwave Limb Sounder (MLS) on the Aura satellite. We detect the observed contrast between the ST and NT in GEOSCCM and WACCM: the annual cycle in ozone and other chemical tracers is larger in the NT than in the ST, while the opposite is true for the annual cycle in vertical advection. Ozone budgets in the models, analyzed in the Transformed Eulerian Mean (TEM) framework, demonstrate a major role of quasi-horizontal mixing and vertical advection in determining the NT-ST ozone distribution and behavior. Analysis of zonal variations in the NT and ST ozone annual cycles further suggests an important role of the North American and Asian summer monsoons (associated with strong isentropic mixing) in the lower stratospheric ozone of the NT. Furthermore, a multi-model comparison shows that most CCMs reproduce the observed characteristics of the ozone annual cycle quite well. Thus, latitudinal variations within the tropics have to be considered in order to understand the balance between upwelling and quasi-horizontal mixing in the tropical lower stratosphere, and the paradigm of a well-mixed tropics has to be reconsidered.
Sun, Dengrong; Sun, Fangxiang; Deng, Xiaoyu; Li, Zhaohui
2015-09-08
Different amounts of Co-substituted Ni-MOF-74 have been prepared via a post-synthetic metal exchange. Inductively coupled plasma mass spectrometry, powder X-ray diffraction (XRD), N2 adsorption/desorption, and extended X-ray absorption fine structure (EXAFS) analyses indicated the successful metathesis between Co and Ni in Ni-MOF-74 to form the solid-solution-like mixed-metal Co/Ni-MOF-74. It was found that introduction of active Co into the Ni-MOF-74 framework enabled the inert Ni-MOF-74 to show activity for cyclohexene oxidation. Since Co was favorably substituted at positions more accessible to the substrate, the mixed-metal Co/Ni-MOF-74 showed superior catalytic performance, compared with pure Co-MOF-74 containing a similar amount of Co. This study provides a facile method to develop solid-solution-like MOFs for heterogeneous catalysis and highlights the great potential of this mixed-metal strategy in the development of MOFs with specific endowed functionalities.
NASA Astrophysics Data System (ADS)
Han, Yinfeng; Fu, Lianshe; Mafra, Luís; Shi, Fa-Nian
2012-02-01
Three mixed europium-yttrium organic frameworks, Eu2-xYx(Mel)(H2O)6 (Mel = mellitic acid, i.e., benzene-1,2,3,4,5,6-hexacarboxylic acid; x = 0.38 (1), 0.74 (2), and 0.86 (3)), have been synthesized and characterized. All the compounds contain a 3-D net with (4, 8)-flu topology. The study indicates that the photoluminescence properties are effectively tuned by the different ratios of europium and yttrium ions: the quantum efficiency is higher and the Eu3+ lifetime longer in these MOFs than in the pure-Eu analog.
NASA Technical Reports Server (NTRS)
Pisaich, Gregory; Flueckiger, Lorenzo; Neukom, Christian; Wagner, Mike; Buchanan, Eric; Plice, Laura
2007-01-01
The Mission Simulation Toolkit (MST) is a flexible software system for autonomy research. It was developed as part of the Mission Simulation Facility (MSF) project, started in 2001 to facilitate the development of autonomous planetary robotic missions. Autonomy is a key enabling factor for robotic exploration, yet there has been a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The MST bridges this gap by providing a simulation framework and a suite of tools for supporting research and maturation of autonomy. MST uses a distributed framework based on the High Level Architecture (HLA) standard. A key feature of the MST framework is the ability to plug in new models to replace existing ones with the same services, which enables significant simulation flexibility, particularly in mixing and controlling fidelity levels. In addition, the MST provides automatic code generation from robot interfaces defined with the Unified Modeling Language (UML), methods for maintaining synchronization across distributed simulation systems, XML-based robot description, and an environment server. Finally, the MSF supports a number of third-party products, including dynamic models and terrain databases. Although the communication objects and some of the simulation components provided with this toolkit are specifically designed for terrestrial surface rovers, the MST can be applied to other domains, such as aerial, aquatic, or space.
Evaluating the Sustainability of School-Based Health Centers.
Navarro, Stephanie; Zirkle, Dorothy L; Barr, Donald A
2017-01-01
The United States is facing a surge in the number of school-based health centers (SBHCs) owing to their success in delivering positive health outcomes and increasing access to care. To preserve this success, experts have developed frameworks for creating sustainable SBHCs; however, little research has affirmed or added to these models. This research seeks to analyze elements of sustainability in a case study of three SBHCs in San Diego, California, with the purpose of creating a research-based framework of SBHC sustainability to supplement expertly derived models. Using a mixed methods study design, data were collected from interviews with SBHC stakeholders, observations in SBHCs, and SBHC budgets. A grounded theory qualitative analysis and a quantitative budget analysis were completed to develop a theoretical framework for the sustainability of SBHCs. Forty-one interviews were conducted, 6 hours of observations were completed, and 3 years of SBHC budgets were analyzed to identify care coordination, community buy-in, community awareness, and SBHC partner cooperation as key themes of sustainability, promoting patient retention and thus sustainable billing and reimbursement levels. These findings highlight the unique ways in which SBHCs gain community buy-in and awareness by becoming trusted sources of comprehensive and coordinated care within communities and among vulnerable populations. Findings also support ideas from expert models of SBHC sustainability calling for well-defined and executed community partnerships and quality coordinated care in the procurement of sustainable SBHC funding.
NASA Astrophysics Data System (ADS)
Fowler, Kathryn; Connolly, Paul J.; Topping, David O.; O'Meara, Simon
2018-02-01
The composition of atmospheric aerosol particles has been found to influence their microphysical properties and their interaction with water vapour in the atmosphere. Core-shell models have been used to investigate the relationship between composition, viscosity and equilibration timescales. These models have traditionally relied on Fickian diffusion, with no explicit account of non-ideal interactions. We introduce the Maxwell-Stefan diffusion framework as an alternative, which explicitly accounts for non-ideal interactions through activity coefficients. The e-folding time, the time it takes for the difference between surface and bulk concentration to decrease by a factor of e, was used to investigate the interplay between viscosity and solubility and its effect on equilibration timescales within individual aerosol particles. The e-folding time was estimated after instantaneous increases in relative humidity applied to binary systems of water and an organic component. At low water mole fractions, viscous effects were found to dominate mixing. At high water mole fractions, however, equilibration times were more sensitive to the range in solubility, shown by the greater variation in e-folding times. This is the first application of the Maxwell-Stefan framework to an atmospheric aerosol core-shell model, and it shows a complex interplay between viscous and solubility effects on aerosol composition that requires further investigation.
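A minimal sketch of the e-folding-time diagnostic, under the simplifying assumption of exponential relaxation of the surface-bulk concentration difference (the actual model solves the Maxwell-Stefan equations; the time axis and timescale here are placeholders):

```python
import numpy as np

# Assumed exponential relaxation of the surface-bulk concentration
# difference d(t) after a step change in relative humidity; the
# e-folding time tau is where d falls to d(0)/e.
t = np.linspace(0.0, 100.0, 1001)      # s (placeholder time axis)
tau_true = 12.0
d = np.exp(-t / tau_true)              # normalized difference (synthetic)

tau_est = t[np.argmax(d <= 1.0 / np.e)]   # first time d drops below 1/e
print(f"estimated e-folding time: {tau_est:.1f} s")  # ~12.0
```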
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hoyoung; Tsouris, Vasilios; Lim, Yunho
We studied mixed poly(ethylene oxide) (PEO) and poly(2-(dimethylamino)ethyl methacrylate) (PDMAEMA) brushes. The question we attempted to answer was: when the chain grafting points are laterally mobile, how will this lateral mobility influence the structure and phase behavior of the mixed brush? Three different model mixed PEO/PDMAEMA brush systems were prepared: (1) a laterally mobile mixed brush prepared by spreading onto the air-water interface a mixture of poly(ethylene oxide)-poly(n-butyl acrylate) (PEO-PnBA) and poly(2-(dimethylamino)ethyl methacrylate)-poly(n-butyl acrylate) (PDMAEMA-PnBA) diblock copolymers (the specific diblock copolymers used will be denoted as PEO113-PnBA100 and PDMAEMA118-PnBA100, where the subscripts refer to the number-average degrees of polymerization of the individual blocks), (2) a mobility-restricted (inseparable) version of the above mixed brush prepared using a PEO-PnBA-PDMAEMA triblock copolymer (denoted as PEO113-PnBA89-PDMAEMA120) having respective brush molecular weights matched with those of the diblock copolymers, and (3) a different laterally mobile mixed PEO and PDMAEMA brush prepared from a PEO113-PnBA100 and PDMAEMA200-PnBA103 diblock copolymer combination, which represents a more height-mismatched mixed brush situation than that described in (1). These three mixed brush systems were investigated by surface pressure-area isotherm and X-ray reflectivity (XR) measurements. The experimental data were analyzed within the theoretical framework of a continuum self-consistent field (SCF) polymer brush model. The combined experimental and theoretical results suggest that the mobile mixed brush derived from the PEO113-PnBA100 and PDMAEMA118-PnBA100 combination (i.e., mixed brush System #1) undergoes a lateral macroscopic phase separation at high chain grafting densities, whereas the more height-mismatched system (System #3) is only microscopically phase separated under comparable brush density conditions, even though the lateral mobility of the grafted chains is unrestricted. The macroscopic phase separation observed in the laterally mobile mixed brush system is in contrast with the microphase separation behavior commonly observed in two-dimensional laterally mobile charged small-molecule mixtures. Further study is needed to determine the detailed morphologies of the macro- and microphase-separated mixed PEO/PDMAEMA brushes.
A polyhedron-based metal-organic framework with a reo-e net.
Ren, Guojian; Liu, Shuxia; Wei, Feng; Ma, Fengji; Tang, Qun; Li, Shujun
2012-10-14
A polyhedron-based metal-organic framework has been designed and constructed with a reo-e net, which is constructed from trinuclear nickel clusters and mixed ligands (copolymerization pattern). It comprises three kinds of polyhedra, which are the hexahedron, cuboctahedron and rhombicuboctahedron.
Consideration in selecting crops for the human-rated life support system: a Linear Programming model
NASA Technical Reports Server (NTRS)
Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.; Henninger, D. L. (Principal Investigator)
1996-01-01
A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts, representing more than a dozen disciplines, provided a wide range of expertise in developing the model and the crop production program. The model incorporates the nutritional content and controlled-environment production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin and mineral supplements are provided), but this will not be satisfactory from a culinary standpoint. The model is flexible enough that taste- and variety-driven food choices can be built into it.
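A toy version of such a crop-mix LP, with invented yield and nutrient numbers standing in for the CELSS team's data, can be written with scipy's linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch of the crop-mix LP (illustrative numbers, not the
# CELSS team's data): choose growing areas x_j (m^2) per crop to meet
# daily nutrient targets at minimum total growing area.
crops = ["wheat", "potato", "soybean", "lettuce"]
# Rows: kcal and protein (g) delivered per m^2 per day (placeholders).
A = np.array([[60.0, 50.0, 40.0, 5.0],
              [ 2.0,  1.0,  4.0, 0.5]])
b = np.array([2800.0, 90.0])            # one crew member's daily needs

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the
# nutrient floors A @ x >= b become -A @ x <= -b.
res = linprog(c=np.ones(len(crops)), A_ub=-A, b_ub=-b,
              bounds=[(0.0, None)] * len(crops))
for name, area in zip(crops, res.x):
    print(f"{name:8s} {area:6.1f} m^2")
```

Taste and variety preferences would enter as additional constraints or penalty terms on the same decision variables.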
Srigley, J A; Corace, K; Hargadon, D P; Yu, D; MacDonald, T; Fabrigar, L; Garber, G
2015-11-01
Despite the importance of hand hygiene in preventing transmission of healthcare-associated infections, compliance rates are suboptimal. Hand hygiene is a complex behaviour and psychological frameworks are promising tools to influence healthcare worker (HCW) behaviour. (i) To review the effectiveness of interventions based on psychological theories of behaviour change to improve HCW hand hygiene compliance; (ii) to determine which frameworks have been used to predict HCW hand hygiene compliance. Multiple databases and reference lists of included studies were searched for studies that applied psychological theories to improve and/or predict HCW hand hygiene. All steps in selection, data extraction, and quality assessment were performed independently by two reviewers. The search yielded 918 citations; seven met eligibility criteria. Four studies evaluated hand hygiene interventions based on psychological frameworks. Interventions were informed by goal setting, control theory, operant learning, positive reinforcement, change theory, the theory of planned behaviour, and the transtheoretical model. Three predictive studies employed the theory of planned behaviour, the transtheoretical model, and the theoretical domains framework. Interventions to improve hand hygiene adherence demonstrated efficacy but studies were at moderate to high risk of bias. For many studies, it was unclear how theories of behaviour change were used to inform the interventions. Predictive studies had mixed results. Behaviour change theory is a promising tool for improving hand hygiene; however, these theories have not been extensively examined. Our review reveals a significant gap in the literature and indicates possible avenues for novel research. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
BiomeNet: A Bayesian Model for Inference of Metabolic Divergence among Microbial Communities
Chipman, Hugh; Gu, Hong; Bielawski, Joseph P.
2014-01-01
Metagenomics yields enormous numbers of microbial sequences that can be assigned a metabolic function. Using such data to infer community-level metabolic divergence is hindered by the lack of a suitable statistical framework. Here, we describe a novel hierarchical Bayesian model, called BiomeNet (Bayesian inference of metabolic networks), for inferring differential prevalence of metabolic subnetworks among microbial communities. To infer the structure of community-level metabolic interactions, BiomeNet applies a mixed-membership modelling framework to enzyme abundance information. The basic idea is that the mixture components of the model (metabolic reactions, subnetworks, and networks) are shared across all groups (microbiome samples), but the mixture proportions vary from group to group. Through this framework, the model can capture nested structures within the data. BiomeNet is unique in modeling each metagenome sample as a mixture of complex metabolic systems (metabosystems). The metabosystems are composed of mixtures of tightly connected metabolic subnetworks. BiomeNet differs from other unsupervised methods by allowing researchers to discriminate groups of samples through the metabolic patterns it discovers in the data, and by providing a framework for interpreting them. We describe a collapsed Gibbs sampler for inference of the mixture weights under BiomeNet, and we use simulation to validate the inference algorithm. Application of BiomeNet to human gut metagenomes revealed a metabosystem with greater prevalence among inflammatory bowel disease (IBD) patients. Based on the discriminatory subnetworks for this metabosystem, we inferred that the community is likely to be closely associated with the human gut epithelium, to be resistant to dietary interventions, and to interfere with human uptake of an antioxidant connected to IBD. Because this metabosystem has a greater capacity to exploit host-associated glycans, we speculate that IBD-associated communities might arise from opportunist growth of bacteria that can circumvent the host's nutrient-based mechanism for bacterial partner selection. PMID:25412107
Kalra, Sanjay; Farooqi, Mohammad Hamed; El-Houni, Ali E.
2015-01-01
Premix insulins are commonly used insulin preparations, available in varying ratios of different molecules. These drugs contain one short- or rapid-acting insulin and one intermediate- or long-acting insulin. High-mix insulins are mixtures that contain 50% or more short-acting insulin. This review describes the clinical pharmacology of high-mix insulins, including data from randomized controlled trials. It suggests various ways in which high-mix insulin can be used, including once-daily, twice-daily, thrice-daily, hetero-mix, and reverse regimens. The authors provide a rational framework to help diabetes care professionals identify indications for pragmatic high-mix use. PMID:26425485
NASA Astrophysics Data System (ADS)
McPhee, J.; Yeh, W. W.-G.
2005-12-01
This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. The reliability requirements take into consideration the application of the model results to groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of a genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.
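The hybrid search strategy described above can be sketched as follows: an outer genetic algorithm explores the binary decisions (which candidate wells to include in the test) while an inner gradient-based solver tunes the continuous pumping rates for each design. The objective below is a toy surrogate, not the paper's groundwater model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_wells = 6

def design_cost(active, rates):
    # Toy surrogate: a data-worth term (diminishing returns per well)
    # minus a linear pumping cost; placeholder for the real model.
    info = np.sum(np.sqrt(rates * active))
    return -info + 0.05 * np.sum(rates)

def inner_opt(active):
    # Continuous subproblem: best pumping rates for the selected wells.
    res = minimize(lambda r: design_cost(active, r),
                   x0=np.full(n_wells, 1.0),
                   bounds=[(0.0, 10.0)] * n_wells)
    return res.fun

def ga(pop_size=20, gens=30, pmut=0.1):
    pop = rng.integers(0, 2, size=(pop_size, n_wells))   # binary designs
    for _ in range(gens):
        fit = np.array([inner_opt(ind) for ind in pop])
        parents = pop[np.argsort(fit)[: pop_size // 2]]  # keep the best
        cut = rng.integers(1, n_wells, size=pop_size // 2)
        kids = np.array([np.concatenate((parents[i % len(parents)][:c],
                                         parents[(i + 1) % len(parents)][c:]))
                         for i, c in enumerate(cut)])    # one-point crossover
        mask = rng.random(kids.shape) < pmut             # bit-flip mutation
        kids[mask] ^= 1
        pop = np.vstack((parents, kids))
    fit = np.array([inner_opt(ind) for ind in pop])
    return pop[np.argmin(fit)], fit.min()

best, cost = ga()
print("selected wells:", best, "objective:", round(cost, 3))
```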
Guo, Yujie; Shen, Jie; Ye, Xuchun; Chen, Huali; Jiang, Anli
2013-08-01
This paper reports the design, and tests the effectiveness, of an innovative caring teaching model based on a theoretical framework of caring in the Chinese context. Since the 1970s, caring has been a core value in nursing education. In a previous grounded theory study, a theoretical framework of caring in the Chinese context was explored and considered beneficial for caring education. A caring teaching model was designed theoretically, and a one-group pre- and post-test quasi-experimental study was administered to test its effectiveness. From October 2009 to July 2010, a cohort of second-year undergraduate nursing students (n=64) in a Chinese medical school was recruited to participate in the study. Data were gathered through quantitative and qualitative methods to evaluate the effectiveness of the caring teaching model. The caring teaching model created an esthetic situation and an experiential learning style for teaching caring that was integrated within the curricula. Quantitative data from the quasi-experimental study showed that the post-test scores of each item were higher than those on the pre-test (p<0.01). Thematic analysis of 1220 narratives from students' caring journals and reports of participant class observation revealed two main thematic categories, which reflected, from the students' point of view, the development of students' caring character and the impact that the caring teaching model had in this regard. The model could be used as an integrated approach to teaching caring in nursing curricula. It would also benefit nursing administrators in cultivating caring nurse practitioners. Copyright © 2012 Elsevier Ltd. All rights reserved.
MAPGEN Planner: Mixed-Initiative Activity Planning for the Mars Exploration Rover Mission
NASA Technical Reports Server (NTRS)
Ai-Chang, Mitch; Bresina, John; Charest, Leonard; Hsu, Jennifer; Jonsson, Ari K.; Kanefsky, Bob; Maldague, Pierre; Morris, Paul; Rajan, Kanna; Yglesias, Jeffrey
2003-01-01
This document describes the Mixed-initiative Activity Plan Generation system MAPGEN. The system is being developed as one of the tools to be used during surface operations of NASA's Mars Exploration Rover (MER) mission. However, the core technology is general and can be adapted to different missions and applications. The motivation for the system is to better support users who need to rapidly build activity plans that have to satisfy complex rules and fit within resource limits. The system therefore combines an existing tool for activity plan editing and resource modeling with an advanced constraint-based reasoning and planning framework. The demonstration will show the key capabilities of the automated reasoning and planning component of the system, with emphasis on how these capabilities will be used during surface operations of the MER mission.
Use and misuse of mixed methods in population oral health research: A scoping review.
Gupta, A; Keuskamp, D
2018-05-30
Despite the known benefits of a mixed methods approach in health research, little is known of its use in the field of population oral health. The aim was to map the extent of the literature using a mixed methods approach to examine population oral health outcomes. For a comprehensive search of all the available literature published in the English language, databases including PubMed, Dentistry and Oral Sciences Source (DOSS), CINAHL, Web of Science and EMBASE (including Medline) were searched using a range of keywords from inception to October 2017. Only peer-reviewed, population-based studies of oral health outcomes conducted among non-institutionalised participants and using mixed methods were considered eligible for inclusion. Only nine studies met the inclusion criteria and were included in the review. The most frequent oral health outcome investigated was caries experience. However, most studies lacked a theoretical rationale or framework for using mixed methods, or for supporting the use of qualitative data. Concurrent triangulation with a convergent design was the most commonly used mixed methods typology for integrating quantitative and qualitative data. The tools used to collect quantitative and qualitative data were mostly limited to surveys and interviews. With growing complexity recognised in the determinants of oral disease, future studies addressing population oral health outcomes are likely to benefit from the use of mixed methods. Explicit consideration of theoretical framework and methodology will strengthen those investigations. Copyright © 2018 Dennis Barber Ltd.
A large eddy simulation scheme for turbulent reacting flows
NASA Technical Reports Server (NTRS)
Gao, Feng
1993-01-01
The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real-world problems. Given that direct numerical simulation (DNS) cannot solve engineering flow problems in the foreseeable future (Reynolds 1989), LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block to introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none of them has a wide range of applicability. For example, some of the models in combustion have been based on the flamelet assumption, which is only valid for relatively fast reactions. Other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is certainly not valid for fast, non-isothermal reactions. The probability density function (PDF) method can be usefully employed to model the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation of the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.
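For concreteness, the closure idea can be stated in standard filtered-PDF form (notation assumed here, not quoted from the paper): once the large eddy PDF is known, the filtered chemical source term follows in closed form as

```latex
% Filtered source-term closure via the large eddy PDF (standard form;
% notation is an assumption, not quoted from the paper).
\[
\overline{\dot{w}}(\mathbf{x},t)
  = \int \dot{w}(\boldsymbol{\psi})\,
    P_L(\boldsymbol{\psi};\mathbf{x},t)\,\mathrm{d}\boldsymbol{\psi},
\]
```

where the vector psi spans composition space, w-dot is the chemical reaction rate, and P_L is the large eddy (filtered) PDF, so no further modeling of the source term itself is required.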
Tabrizi, Jafar Sadegh; Gholipour, Kamal; Iezadi, Shabnam; Farahbakhsh, Mostafa; Ghiasi, Akbar
2018-01-01
The aim was to design a district health management performance framework for Iran's healthcare system. This mixed-methods study was conducted between September 2015 and May 2016 in Tabriz, Iran. The indicators of district health management performance were obtained by analyzing 45 semi-structured surveys of experts in the public health system. The content validity of the performance indicators generated in the qualitative part was reviewed and confirmed using the content validity index (CVI), and the content validity ratio (CVR) was calculated using data acquired from a survey of 21 experts in the quantitative part. Initially, 81 indicators were considered for the framework of district health management performance; in the end, 53 indicators were validated and confirmed. These indicators were classified into 11 categories: human resources and organizational creativity, management and leadership, rules and ethics, planning and evaluation, district management, health resources management and economics, community participation, quality improvement, research in the health system, health information management, and epidemiology and situation analysis. The designed framework can be used to assess district health management and facilitates performance improvement at the district level.
Transformative Mixed Methods Research
ERIC Educational Resources Information Center
Mertens, Donna M.
2010-01-01
Paradigms serve as metaphysical frameworks that guide researchers in the identification and clarification of their beliefs with regard to ethics, reality, knowledge, and methodology. The transformative paradigm is explained and illustrated as a framework for researchers who place a priority on social justice and the furtherance of human rights.…
Steidinger, Brian S.; Bever, James D.
2016-01-01
Plants in multiple symbioses are exploited by symbionts that consume their resources without providing services. Discriminating hosts are thought to stabilize mutualism by preferentially allocating resources into anatomical structures (modules) where services are generated, with examples of modules including the entire inflorescences of figs and the root nodules of legumes. Modules are often colonized by multiple symbiotic partners, such that exploiters that co-occur with mutualists within mixed modules can share rewards generated by their mutualist competitors. We developed a meta-population model to answer how the population dynamics of mutualists and exploiters change when they interact with hosts with different module occupancies (number of colonists per module) and functionally different patterns of allocation into mixed modules. We find that as module occupancy increases, hosts must increase the magnitude of preferentially allocated resources in order to sustain comparable populations of mutualists. Further, we find that mixed colonization can result in the coexistence of mutualist and exploiter partners, but only when preferential allocation follows a saturating function of the number of mutualists in a module. Finally, using published data from the fig–wasp mutualism as an illustrative example, we derive model predictions that approximate the proportion of exploiter, non-pollinating wasps observed in the field. PMID:26740613
A rationale for long-lived quarks and leptons at the LHC: low energy flavour theory
NASA Astrophysics Data System (ADS)
Éboli, O. J. P.; Savoy, C. A.; Funchal, R. Zukanovich
2012-02-01
In the framework of gauged flavour symmetries, new fermions in parity-symmetric representations of the standard model are generically needed to compensate mixed anomalies. The key point is that their masses are also protected by flavour symmetries, and some of them are expected to lie well below the flavour symmetry breaking scale(s), which has to occur many orders of magnitude above the electroweak scale to be compatible with the available data from flavour-changing neutral current and CP violation experiments. We argue that some of these fermions would plausibly get masses within the LHC range. If they are taken to be heavy quarks and leptons, in (bi-)fundamental representations of the standard model symmetries, their mixings with the light ones are strongly constrained to be very small by electroweak precision data. The alternative chosen here is to forbid such mixings exactly, by breaking the flavour symmetries down to an exact discrete symmetry, the so-called proton-hexality, primarily suggested to avoid proton decay. As a consequence of the large value needed for the flavour breaking scale, these heavy particles are long-lived and thus rather appropriate targets for the current and future LHC searches for quasi-stable hadrons and leptons. In fact, the LHC experiments have already started to look for them.
Chen, Chen; Anderson, Jason C; Wang, Haizhong; Wang, Yinhai; Vogt, Rachel; Hernandez, Salvador
2017-11-01
Transportation agencies need efficient methods to determine how to reduce bicycle accidents while promoting cycling activities and prioritizing safety improvement investments. Many studies have used standalone methods, such as level of traffic stress (LTS) and bicycle level of service (BLOS), to better understand bicycle mode share and network connectivity for a region. In most cases, however, studies rely on crash severity models to explain which variables contribute to the severity of bicycle-related crashes. This research uniquely correlates bicycle LTS with reported bicycle crash locations for four cities in New Hampshire through geospatial mapping. LTS measurements and crash locations are compared visually using a GIS framework. Next, a bicycle injury severity model that incorporates LTS measurements is created through a mixed logit modeling framework. Results of the visual analysis show some geospatial correlation between higher-LTS roads and "Injury"-type bicycle crashes. It was determined, statistically, that LTS has an effect on the severity level of bicycle crashes and that high LTS can have varying effects on the severity outcome. However, further analyses are recommended to better understand the statistical significance and effect of LTS on injury severity. As such, this research validates the use of LTS as a proxy for safety risk regardless of the recorded bicycle crash history. It will help identify clustering patterns of bicycle crashes on high-risk corridors and, therefore, assist with bicycle route planning and policy making. The paper also suggests low-cost countermeasures or treatments that can be implemented to address high-risk areas. With the goal of providing safer routes for cyclists, such countermeasures or treatments have the potential to substantially reduce the number of fatalities and severe injuries. Published by Elsevier Ltd.
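To illustrate what "varying effects on the severity outcome" means in a mixed logit, the sketch below draws the LTS coefficient from a normal distribution across crashes and averages the resulting multinomial-logit probabilities over draws. The severity categories, utility specification, and coefficient values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
severities = ["no injury", "injury", "severe/fatal"]

def mixed_logit_probs(lts, beta_mean=0.4, beta_sd=0.2, n_draws=2000):
    # Random taste: the LTS coefficient varies across crashes; the
    # 'severe/fatal' utility loads twice as strongly on it (toy spec).
    beta = rng.normal(beta_mean, beta_sd, n_draws)
    v = np.stack([np.zeros(n_draws),             # base alternative
                  0.5 + beta * lts,
                  -1.0 + 2.0 * beta * lts], axis=1)
    p = np.exp(v)
    p /= p.sum(axis=1, keepdims=True)            # logit probabilities
    return p.mean(axis=0)                        # simulated (averaged) probs

for lts in (1, 4):                               # low- vs high-stress road
    probs = mixed_logit_probs(lts)
    print(f"LTS {lts}:", dict(zip(severities, np.round(probs, 3))))
```

In estimation, the same simulated probabilities would enter a maximum simulated likelihood rather than being computed at fixed coefficients as here.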
Structural Equation Modeling: A Framework for Ocular and Other Medical Sciences Research
Christ, Sharon L.; Lee, David J.; Lam, Byron L.; Zheng, D. Diane
2017-01-01
Structural equation modeling (SEM) is a modeling framework that encompasses many types of statistical models and can accommodate a variety of estimation and testing methods. SEM has been used primarily in social sciences but is increasingly used in epidemiology, public health, and the medical sciences. SEM provides many advantages for the analysis of survey and clinical data, including the ability to model latent constructs that may not be directly observable. Another major feature is simultaneous estimation of parameters in systems of equations that may include mediated relationships, correlated dependent variables, and in some instances feedback relationships. SEM allows for the specification of theoretically holistic models because multiple and varied relationships may be estimated together in the same model. SEM has recently expanded by adding generalized linear modeling capabilities that include the simultaneous estimation of parameters of different functional form for outcomes with different distributions in the same model. Therefore, mortality modeling and other relevant health outcomes may be evaluated. Random effects estimation using latent variables has been advanced in the SEM literature and software. In addition, SEM software has increased estimation options. Therefore, modern SEM is quite general and includes model types frequently used by health researchers, including generalized linear modeling, mixed effects linear modeling, and population average modeling. This article does not present any new information. It is meant as an introduction to SEM and its uses in ocular and other health research. PMID:24467557
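In the usual LISREL-style notation (an illustration of the framework, not equations quoted from the article), the two building blocks mentioned above can be written as

```latex
% Generic SEM building blocks (standard notation; illustrative only).
\begin{align}
  \mathbf{y} &= \boldsymbol{\Lambda}\,\boldsymbol{\eta}
      + \boldsymbol{\varepsilon}
      && \text{measurement model: indicators load on latent constructs}\\
  \boldsymbol{\eta} &= \mathbf{B}\,\boldsymbol{\eta}
      + \boldsymbol{\Gamma}\,\mathbf{x} + \boldsymbol{\zeta}
      && \text{structural model: relations among latent variables}
\end{align}
```

Estimation then proceeds by fitting the implied covariance structure (or, in generalized SEM, the implied likelihood) to the observed data.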
Competition for resources can explain patterns of social and individual learning in nature.
Smolla, Marco; Gilman, R Tucker; Galla, Tobias; Shultz, Susanne
2015-09-22
In nature, animals often ignore socially available information despite the multiple theoretical benefits of social learning over individual trial-and-error learning. Using information filtered by others is quicker, more efficient and less risky than randomly sampling the environment. To explain the mix of social and individual learning used by animals in nature, most models penalize the quality of socially derived information as either out of date, of poor fidelity or costly to acquire. Competition for limited resources, a fundamental evolutionary force, provides a compelling, yet hitherto overlooked, explanation for the evolution of mixed-learning strategies. We present a novel model of social learning that incorporates competition and demonstrates that (i) social learning is favoured when competition is weak, but (ii) if competition is strong social learning is favoured only when resource quality is highly variable and there is low environmental turnover. The frequency of social learning in our model always evolves until it reduces the mean foraging success of the population. The results of our model are consistent with empirical studies showing that individuals rely less on social information where resources vary little in quality and where there is high within-patch competition. Our model provides a framework for understanding the evolution of social learning, a prerequisite for human cumulative culture. © 2015 The Author(s).
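The competition mechanism is easy to reproduce in a toy agent-based sketch (parameter values are assumptions, not the paper's): social learners all converge on the best-known patch and must share its reward, while individual learners sample patches at random.

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_patches, steps = 100, 20, 200
frac_social = 0.5
social = rng.random(n_agents) < frac_social      # learning strategy per agent

payoff = np.zeros(n_agents)
quality = rng.gamma(2.0, 1.0, n_patches)         # variable resource quality
for _ in range(steps):
    best = np.argmax(quality)                    # socially known best patch
    choice = np.where(social, best, rng.integers(0, n_patches, n_agents))
    counts = np.bincount(choice, minlength=n_patches)
    # Competition: a patch's quality is shared among all its occupants.
    payoff += quality[choice] / counts[choice]
    # Environmental turnover: occasionally redraw patch qualities.
    if rng.random() < 0.05:
        quality = rng.gamma(2.0, 1.0, n_patches)

print("mean payoff, social    :", payoff[social].mean().round(3))
print("mean payoff, individual:", payoff[~social].mean().round(3))
```

With many social learners crowding the best patch, per-capita returns there collapse, which is the strong-competition case (ii) described in the abstract.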
A Typology of Mixed Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.
2007-01-01
This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…
The Threat of Unexamined Secondary Data: A Critical Race Transformative Convergent Mixed Methods
ERIC Educational Resources Information Center
Garcia, Nichole M.; Mayorga, Oscar J.
2018-01-01
This article uses a critical race theory framework to conceptualize a Critical Race Transformative Convergent Mixed Methods (CRTCMM) in education. CRTCMM is a methodology that challenges normative educational research practices by acknowledging that racism permeates educational institutions and marginalizes Communities of Color. The focus of this…
A Bayesian Missing Data Framework for Generalized Multiple Outcome Mixed Treatment Comparisons
ERIC Educational Resources Information Center
Hong, Hwanhee; Chu, Haitao; Zhang, Jing; Carlin, Bradley P.
2016-01-01
Bayesian statistical approaches to mixed treatment comparisons (MTCs) are becoming more popular because of their flexibility and interpretability. Many randomized clinical trials report multiple outcomes with possible inherent correlations. Moreover, MTC data are typically sparse (although richer than standard meta-analysis, comparing only two…
Methodology for the evaluation of the Stephanie Alexander Kitchen Garden program.
Gibbs, L; Staiger, P K; Townsend, M; Macfarlane, S; Gold, L; Block, K; Johnson, B; Kulas, J; Waters, E
2013-04-01
Community and school cooking and gardening programs have recently increased internationally. However, despite promising indications, there is limited evidence of their effectiveness. This paper presents the evaluation framework and methods negotiated and developed to meet the information needs of all stakeholders for the Stephanie Alexander Kitchen Garden (SAKG) program, a combined cooking and gardening program implemented in selectively funded primary schools across Australia. The evaluation used multiple aligned theoretical frameworks and models, including a public health ecological approach, principles of effective health promotion and models of experiential learning. The evaluation is a non-randomised comparison of six schools receiving the program (intervention) and six comparison schools (all government-funded primary schools) in urban and rural areas of Victoria, Australia. A mixed-methods approach was used, relying on qualitative measures to understand changes in school cultures and the experiential impacts on children, families, teachers, parents and volunteers, and quantitative measures at baseline and 1-year follow-up to provide supporting information regarding patterns of change. The evaluation study design addressed the limitations of many existing evaluation studies of cooking or garden programs. The multistrand approach to the mixed methodology maintained the rigour of the respective methods and provided an opportunity to explore complexity in the findings. Limited sensitivity of some of the quantitative measures was identified, as well as the potential for bias in the coding of the open-ended questions. The SAKG evaluation methodology will address the need for appropriate evaluation approaches for school-based kitchen garden programs. It demonstrates the feasibility of a meaningful, comprehensive evaluation of school-based programs and also demonstrates the central role qualitative methods can have in a mixed-method evaluation. So what? This paper contributes to debate about appropriate evaluation approaches to meet the information needs of all stakeholders and will support the sharing of measures and potential comparisons between program outcomes for comparable population groups and settings.
Gamifying Self-Management of Chronic Illnesses: A Mixed-Methods Study.
AlMarshedi, Alaa; Wills, Gary; Ranchhod, Ashok
2016-09-09
Self-management of chronic illnesses is an ongoing issue in health care research. Gamification is a concept that arose in the field of computer science and has been borrowed by many other disciplines. It is perceived by many that gamification can improve the self-management experience of people with chronic illnesses. This paper discusses the validation of a framework (called The Wheel of Sukr) that was introduced to achieve this goal. This research aims to (1) discuss a gamification framework targeting the self-management of chronic illnesses and (2) validate the framework by diabetic patients, medical professionals, and game experts. A mixed-method approach was used to validate the framework. Expert interviews (N=8) were conducted in order to validate the themes of the framework. Additionally, diabetic participants completed a questionnaire (N=42) in order to measure their attitudes toward the themes of the framework. The results provide a validation of the framework. This indicates that gamification might improve the self-management of chronic illnesses, such as diabetes. Namely, the eight themes in the Wheel of Sukr (fun, esteem, socializing, self-management, self-representation, motivation, growth, sustainability) were perceived positively by 71% (30/42) of the participants with P value <.001. In this research, both the interviews and the questionnaire yielded positive results that validate the framework (The Wheel of Sukr). Generally, this study indicates an overall acceptance of the notion of gamification in the self-management of diabetes.
NASA Technical Reports Server (NTRS)
Zhang, Minghua; Bretherton, Christopher S.; Blossey, Peter N.; Austin, Phillip H.; Bacmeister, Julio T.; Bony, Sandrine; Brient, Florent; Cheedela, Suvarchal K.; Cheng, Anning; DelGenio, Anthony;
2013-01-01
CGILS, the CFMIP-GASS Intercomparison of Large Eddy Simulations (LESs) and Single Column Models (SCMs), investigates the mechanisms of cloud feedback in SCMs and LESs under an idealized climate change perturbation. This paper describes the CGILS results from 15 SCMs and 8 LES models. Three cloud regimes over the subtropical oceans are studied: shallow cumulus, cumulus under stratocumulus, and well-mixed coastal stratus/stratocumulus. In the stratocumulus and coastal stratus regimes, SCMs without activated shallow convection generally simulated negative cloud feedbacks, while models with active shallow convection generally simulated positive cloud feedbacks. In the shallow cumulus regime alone, this relationship is less clear, likely due to changes in cloud depth, lateral mixing, and precipitation, or a combination of them. The majority of LES models simulated negative cloud feedback in the well-mixed coastal stratus/stratocumulus regime, and positive feedback in the shallow cumulus and stratocumulus regimes. A general framework is provided to interpret the SCM results: in a warmer climate, the moistening rate of the cloudy layer associated with the surface-based turbulence parameterization is enhanced; together with weaker large-scale subsidence, this causes negative cloud feedback. In contrast, in the warmer climate, the drying rate associated with the shallow convection scheme is enhanced, causing positive cloud feedback. These mechanisms are summarized as the "NESTS" negative cloud feedback (Negative feedback from Surface Turbulence under weaker Subsidence) and the "SCOPE" positive cloud feedback (Shallow Convection PositivE feedback), with the net cloud feedback depending on how the two opposing effects counteract each other. The LES results are consistent with these interpretations.
2008-10-01
Agents in the DEEP architecture extend and use the Java Agent Development (JADE) framework. DEEP requires a distributed multi-agent system and a...framework to help simplify the implementation of this system. JADE was chosen because it is fully implemented in Java , and supports these requirements
Students' Construction of External Representations in Design-Based Learning Situations
ERIC Educational Resources Information Center
de Vries, Erica
2006-01-01
This article develops a theoretical framework for the study of students' construction of mixed multiple external representations in design-based learning situations involving an adaptation of professional tasks and tools to a classroom setting. The framework draws on research on professional design processes and on learning with multiple external…
ERIC Educational Resources Information Center
Luyt, Russell
2012-01-01
A framework for quantitative measurement development, validation, and revision that incorporates both qualitative and quantitative methods is introduced. It extends and adapts Adcock and Collier's work and thus facilitates understanding of quantitative measurement development, validation, and revision as an integrated and cyclical set of…
Quantifying Conditions for Fault Self-Sealing in Geologic Carbon Sequestration
NASA Astrophysics Data System (ADS)
McPherson, B. J. O. L.; Patil, V.; Moore, J.; Trujillo, E. M.
2015-12-01
Injecting anthropogenic CO2 into a subsurface reservoir for sequestration will impact the reservoir significantly, including its geochemistry, porosity and permeability. If a fault or fracture penetrates the reservoir, CO2-laden brine may migrate into that fault, eventually sealing it via precipitation or opening it up via dissolution. The goal of this study was to identify and quantify such conditions of fault self-sealing or self-enhancing. We found that the dimensionless Damköhler number (Da), the ratio of reaction rate to advection rate, provides a meaningful framework for characterizing the propensity of fault systems to seal or open up. We developed our own framework wherein Damköhler numbers evolve spatiotemporally, as opposed to the traditional single-Da-value approach. This enables us to use the Damköhler number to characterize complex multiphase and multimineral reactive transport problems. We applied the framework to 1D fault models with eight conditions derived from four geologic compositions and two reservoir conditions. The four geologic compositions were chosen such that three of them were representative of distinct geologic end-members (sandstone, mudstone and dolomitic limestone) and one was a mixed composition based on an average of the three end-member compositions. The two sets of P-T conditions comprised one corresponding to CO2 in a gaseous phase ("shallow conditions") and the other corresponding to supercritical-phase CO2 ("deep conditions"). Simulation results suggest that fault sealing via carbonate precipitation is a possibility for shallow conditions within limestone and mixed-composition settings. The concentration of cations in the water was found to be an important control on carbonate precipitation. The deep-conditions models did not forecast self-sealing via carbonates. Sealing via clay precipitation is a likely possibility, but the 1000-year time frame may be too short for it. Model results indicated a range of Da values within which substantial reductions of fault porosity (meaning self-sealing) could be expected. A key conclusion suggested by the results of this study is that carbonate precipitation in the near-surface (top ~50-100 m) depths of a fault is the most likely mechanism of "self-sealing" for most geological settings.
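A back-of-the-envelope version of the diagnostic, with Da approximated as kL/v and all values illustrative rather than taken from the study:

```python
# Damkohler number as defined above: the ratio of reaction rate to
# advection rate, here approximated as Da = k * L / v.
def damkohler(k_reaction, length, velocity):
    """Da ~ (reaction rate constant * path length) / flow velocity."""
    return k_reaction * length / velocity

# Illustrative fault segment: assumed rate constant (1/s), segment
# length (m), and two brine ascent velocities (m/s).
for v in (1e-6, 1e-8):
    da = damkohler(k_reaction=1e-9, length=100.0, velocity=v)
    regime = ("reaction-dominated: mineral sealing plausible" if da > 1.0
              else "advection-dominated: solutes swept through")
    print(f"v = {v:.0e} m/s  ->  Da = {da:.2e}  ({regime})")
```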
Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R.
2015-01-01
Motivation: Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or new experimental data, contradictory with a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather a combination of links subject to great uncertainty or no information about the kinetic parameters. Results: In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative to this would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method presented here is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: julio@iim.csic.es or saezrodriguez@ebi.ac.uk PMID:26002881
NASA Astrophysics Data System (ADS)
Sallée, J.-B.; Shuckburgh, E.; Bruneau, N.; Meijers, A. J. S.; Bracegirdle, T. J.; Wang, Z.; Roy, T.
2013-04-01
The ability of the models contributing to the fifth phase of the Coupled Model Intercomparison Project (CMIP5) to represent Southern Ocean hydrological properties and overturning is investigated in a water mass framework. The models have a consistent warm and light bias spread over the entire water column. The greatest bias occurs in the ventilated layers, which are volumetrically dominated by the mode and intermediate layers. The ventilated layers have been observed to carry a strong fingerprint of climate change and to impact climate by sequestering a significant amount of heat and carbon dioxide. The mode water layer is poorly represented in the models, and both mode and intermediate waters have a significant fresh bias. Under increased radiative forcing, the models simulate a warming and lightening of the entire water column, which is again greatest in the ventilated layers, highlighting the importance of these layers for propagating the climate signal into the deep ocean. While the intensity of the water mass overturning is relatively consistent between models, compared with observation-based reconstructions they exhibit a slightly larger rate of overturning at shallow to intermediate depths and a slower rate of overturning deeper in the water column. Under increased radiative forcing, atmospheric fluxes increase the rate of simulated upper-cell overturning, but this increase is counterbalanced by diapycnal fluxes, including mixed-layer horizontal mixing, and mostly vanishes.
Li, Qiuping; Xu, Yinghua; Zhou, Huiya; Loke, Alice Yuen
2015-12-01
The purpose of this study was to test the previously proposed Preliminary Live with Love Conceptual Framework (P-LLCF), which focuses on spousal caregiver-patient couples coping with cancer as dyads. A mixed-methods study including qualitative and quantitative approaches was conducted, applying methods of concept and theory analysis and structural equation modeling (SEM) to test the P-LLCF. In the qualitative approach to testing the concepts included in the P-LLCF, the framework was compared with a preliminary conceptual framework derived from focus group interviews among Chinese couples coping with cancer. The comparison showed that the concepts identified in the P-LLCF are relevant to the phenomenon under scrutiny and that the attributes of the concepts are consistent with those identified among Chinese cancer couple dyads. In the quantitative study, 117 cancer couples were recruited. The findings showed that inter-relationships exist among the components included in the P-LLCF (event situation, dyadic mediators, dyadic appraisal, dyadic coping, and dyadic outcomes), in that the event situation impacts the dyadic outcomes directly or indirectly through the dyadic mediators. The dyadic mediators, dyadic appraisal, and dyadic coping are interrelated and work together to benefit the dyadic outcomes. This study provides evidence supporting the interlinked components and relationships included in the P-LLCF. The findings are important in that they give healthcare professionals guidance and direction, according to the P-LLCF, on how to plan supportive programs for couples coping with cancer. Copyright © 2015 Elsevier Ltd. All rights reserved.
Empirical evolution of a framework that supports the development of nursing competence.
Lima, Sally; Jordan, Helen L; Kinney, Sharon; Hamilton, Bridget; Newall, Fiona
2016-04-01
The aim of this study was to refine a framework for developing competence, for graduate nurses new to paediatric nursing in a transition programme. A competent healthcare workforce is essential to ensuring quality care. There are strong professional and societal expectations that nurses will be competent. Despite the importance of the topic, the most effective means through which competence develops remains elusive. A qualitative explanatory method was applied as part of a mixed methods design. Twenty-one graduate nurses taking part in a 12-month transition programme participated in semi-structured interviews between October and November 2013. Interviews were informed by data analysed during a preceding quantitative phase. Participants were provided with their quantitative results and a preliminary model for development of competence and asked to explain why their competence had developed as it had. The findings from the interviews, considered in combination with the preliminary model and quantitative results, enabled conceptualization of a Framework for Developing Competence. Key elements include: the individual in the team, identification and interpretation of standards, asking questions, guidance and engaging in endeavours, all taking place in a particular context. Much time and resources are directed at supporting the development of nursing competence, with little evidence as to the most effective means. This study led to conceptualization of a theory thought to underpin the development of nursing competence, particularly in a paediatric setting for graduate nurses. Future research should be directed at investigating the framework in other settings. © 2015 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
KIM, J.; Smith, M. B.; Koren, V.; Salas, F.; Cui, Z.; Johnson, D.
2017-12-01
The National Oceanic and Atmospheric Administration (NOAA)-National Weather Service (NWS) developed the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) framework as an initial step towards spatially distributed modeling at River Forecast Centers (RFCs). Recently, the NOAA/NWS worked with the National Center for Atmospheric Research (NCAR) to implement the National Water Model (NWM) for nationally consistent water resources prediction. The NWM is based on the WRF-Hydro framework and is run at a 1-km spatial resolution and 1-hour time step over the contiguous United States (CONUS) and contributing areas in Canada and Mexico. In this study, we compare streamflow simulations from HL-RDHM and WRF-Hydro to observations from 279 USGS stations. For streamflow simulations, HL-RDHM is run on 4-km grids with a temporal resolution of 1 hour for a 5-year period (Water Years 2008-2012), using a priori parameters provided by NOAA-NWS. The WRF-Hydro streamflow simulations for the same time period are extracted from NCAR's 23-year retrospective run of the NWM (version 1.0) over CONUS, based on 1-km grids. We choose 279 USGS stations, in the domains of six different RFCs, that are relatively less affected by dams or reservoirs. We use daily average values of simulations and observations for ease of comparison. The main purpose of this research is to evaluate how HL-RDHM and WRF-Hydro perform at USGS gauge stations. We compare daily time series of observations and both simulations, and calculate error values using a variety of error functions. Using these time series and error values, we evaluate the performance of the HL-RDHM and WRF-Hydro models. Our results show a mix of model performance across geographic regions.
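As a concrete illustration of the comparison step, the sketch below computes typical streamflow error functions (RMSE, Nash-Sutcliffe efficiency, percent bias) on invented daily values; the study's actual choice of error functions is not specified beyond "a variety":

```python
# Hedged example: common error functions for comparing simulated and observed
# daily streamflow. Data and function choice are illustrative, not the study's.
import numpy as np

def rmse(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

def nse(sim, obs):
    # Nash-Sutcliffe efficiency: 1 is perfect, <0 is worse than the obs mean.
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    return 100.0 * (sim - obs).sum() / obs.sum()

obs = np.array([12.0, 15.0, 30.0, 22.0, 18.0])   # m^3/s, daily means (invented)
sim = np.array([10.0, 16.0, 27.0, 25.0, 17.0])
print(rmse(sim, obs), nse(sim, obs), pbias(sim, obs))
```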
Using structural equation modeling for network meta-analysis.
Tu, Yu-Kang; Wu, Yun-Chun
2017-07-14
Network meta-analysis overcomes the limitations of traditional pair-wise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or within frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to impose linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups, A, B and C, for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the unrestricted weighted least squares (UWLS) method can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model, but the confidence intervals were wider. This is consistent with results from the traditional pairwise meta-analyses. Compared to the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has wider confidence intervals when the heterogeneity is larger in the pairwise comparison; the unique variance adjustment factor reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis remains to be explored.
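For readers unfamiliar with the estimation mechanics, this hedged numpy sketch (our construction, not the authors' SEM code) expresses a fixed-effect network meta-analysis as inverse-variance weighted least squares on hypothetical A-B, A-C, and B-C contrasts, with consistency enforced through the design matrix:

```python
# Illustrative fixed-effect network meta-analysis as weighted least squares.
# All effect sizes and variances are invented for demonstration.
import numpy as np

# Each row: an observed log-odds-ratio for one comparison, with its variance.
# Basic parameters: d_AB, d_AC; consistency implies d_BC = d_AC - d_AB.
y = np.array([0.30, 0.45, 0.18])           # pooled A-B, A-C, B-C estimates
v = np.array([0.04, 0.06, 0.05])           # within-comparison variances
X = np.array([[1.0, 0.0],                  # A-B row estimates d_AB
              [0.0, 1.0],                  # A-C row estimates d_AC
              [-1.0, 1.0]])                # B-C row estimates d_AC - d_AB

W = np.diag(1.0 / v)                       # inverse-variance weights
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
se = np.sqrt(np.diag(np.linalg.inv(X.T @ W @ X)))
print(dict(d_AB=beta[0], d_AC=beta[1]), se)
```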
Computational and experimental analysis of the flow in an annular centrifugal contactor
NASA Astrophysics Data System (ADS)
Wardle, Kent E.
The annular centrifugal contactor has been developed for solvent extraction processes for recycling used nuclear fuel. The compact size and high efficiency of these contactors have made them the choice for advanced reprocessing schemes and a key piece of equipment for a proposed future advanced fuel cycle facility. While a sufficient base of experience exists to facilitate successful operation of current contactor technology, a more complete understanding of the fluid flow within the contactor would enable further advancements in the design and operation of future units and greater confidence in the use of such contactors in a variety of other solvent extraction applications. This research effort coupled computational fluid dynamics (CFD) modeling with a variety of experimental measurements and observations to provide a validated, detailed analysis of the flow within the centrifugal contactor. CFD modeling of the free surface flow in the annular mixing zone, using the Volume of Fluid (VOF) volume tracking method combined with Large Eddy Simulation (LES) of turbulence, was found to agree very well with the experimental measurements and observations. A detailed study of the flow and mixing for different housing vane geometries was performed, and it was found that the four-straight-mixing-vane geometry gave greater mixing at the simulated flow rate and more predictable operation over a range of low to moderate flow rates. The separation zone was also modeled, providing a useful description of the flow in this region and identifying critical design features. It is anticipated that this work will form a foundation for additional efforts at improving the design and operation of centrifugal contactors and provide a framework for progress towards simulation of solvent extraction processes.
Mixed phase clouds: observations and theoretical advances (overview)
NASA Astrophysics Data System (ADS)
Korolev, Alexei
2013-04-01
Mixed phase clouds play an important role in precipitation formation and in the radiation budget of the Earth. Microphysical measurements in mixed phase clouds are notoriously difficult due to many technical challenges. The airborne instrumentation for characterizing the microstructure of mixed phase clouds is discussed. The results of multiyear airborne observations and measurements of the frequency of occurrence of mixed phase, characteristic spatial scales, and humidity in mixed phase and ice clouds are presented. A theoretical framework describing the thermodynamics and phase transformation of a three-component system consisting of ice particles, liquid droplets, and water vapor is discussed. It is shown that the Wegener-Bergeron-Findeisen process plays different roles in clouds with different dynamics. The problem of the maintenance and longevity of mixed phase clouds is discussed.
A narrative review of research impact assessment models and methods.
Milat, Andrew J; Bauman, Adrian E; Redman, Sally
2015-03-18
Research funding agencies continue to grapple with assessing research impact. Theoretical frameworks are useful tools for describing and understanding research impact. The purpose of this narrative literature review was to synthesize evidence describing processes and conceptual models for assessing policy and practice impacts of public health research. The review involved keyword searches of electronic databases, including MEDLINE, CINAHL, PsycINFO, EBM Reviews, and Google Scholar, in July/August 2013. Review search terms included 'research impact', 'policy and practice', 'intervention research', 'translational research', 'health promotion', and 'public health'. The review included theoretical and opinion pieces, case studies, descriptive studies, frameworks, and systematic reviews describing processes and conceptual models for assessing research impact. The review was conducted in two phases: initially, abstracts were retrieved and assessed against the review criteria, followed by the retrieval and assessment of full papers against the review criteria. Thirty-one primary studies and one systematic review met the review criteria, with 88% of studies published since 2006. Studies comprised assessments of the impacts of a wide range of health-related research, including basic and biomedical research, clinical trials, and health services research, as well as public health research. Six studies had an explicit focus on assessing impacts of health promotion or public health research, and one had a specific focus on intervention research impact assessment. A total of 16 different impact assessment models were identified, with the 'payback model' the most frequently used conceptual framework. Typically, impacts were assessed across multiple dimensions using mixed methodologies, including publication and citation analysis, interviews with principal investigators, peer assessment, case studies, and document analysis. The vast majority of studies relied on principal investigator interviews and/or peer review to assess impacts, rather than interviewing policymakers and end-users of research. Research impact assessment is a new field of scientific endeavour, and there is a growing number of conceptual frameworks applied to assess the impacts of research.
NASA Astrophysics Data System (ADS)
Brody, S.; Mahadevan, A.; Lozier, M. S.
2014-12-01
The subpolar spring phytoplankton bloom has important consequences for marine ecosystems and the carbon cycle. The timing of the bloom has been conceived of as a basin-scale event: as the ocean warms, the seasonal mixed layer shoals, restricting phytoplankton to shallower depths and increasing available light to a level at which the bloom can begin. Recent studies have highlighted the importance of localized phenomena in driving the bloom initiation. Specifically, the role of lateral density gradients in generating <10 km instabilities in the upper ocean, which then stratify the mixed layer before surface heating begins, has been explored with a process study model and fine-scale observations from a field program to study the North Atlantic spring bloom [1]. However, an alternative hypothesis has recently been validated at both the small scale, using the same observational data [2], and at the basin scale, using remote sensing data [3]. According to this hypothesis, blooms begin when surface heat fluxes weaken, mixing shifts from primarily convectively driven to primarily wind driven, and the depth of active mixing in the upper ocean consequently decreases. Here, we compare the importance of the barriers to mixing presented by submesoscale instabilities with the decreases in mixing depth caused by changes in surface forcing in driving the initiation of the spring bloom prior to the onset of surface heating. To make this comparison, we use a Lagrangian framework to track the light history of particles seeded in a high-resolution numerical model that we initialize with various surface forcing scenarios, and with and without lateral density gradients. Because the model parameterizes convection with convective adjustment, we present two methodologies to account for turbulent mixing processes that utilize observations of turbulent vertical mixing from a Lagrangian float. We present conclusions on whether and how submesoscale processes affect bloom initiation under varied surface forcing conditions, in the context of whether the timing of the subpolar phytoplankton bloom can be thought of as a basin-scale or submesoscale phenomenon. [1] A. Mahadevan et al., Science 337, 6090 (2012). [2] Brody, S.R. and Lozier, M.S. (under review, ICES J. Mar. Sci.). [3] Brody, S.R. and Lozier, M.S., Geophys. Res. Lett. 41 (2014).
Sources of Uncertainty in Predicting Land Surface Fluxes Using Diverse Data and Models
NASA Technical Reports Server (NTRS)
Dungan, Jennifer L.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Nemani, Ramakrishna
2010-01-01
In the domain of predicting land surface fluxes, models are used to bring data from large observation networks and satellite remote sensing together to make predictions about present and future states of the Earth. Characterizing the uncertainty about such predictions is a complex process and one that is not yet fully understood. Uncertainty exists about initialization, measurement and interpolation of input variables; model parameters; model structure; and mixed spatial and temporal supports. Multiple models or structures often exist to describe the same processes. Uncertainty about structure is currently addressed by running an ensemble of different models and examining the distribution of model outputs. To illustrate structural uncertainty, a multi-model ensemble experiment we have been conducting using the Terrestrial Observation and Prediction System (TOPS) will be discussed. TOPS uses public versions of process-based ecosystem models that use satellite-derived inputs along with surface climate data and land surface characterization to produce predictions of ecosystem fluxes including gross and net primary production and net ecosystem exchange. Using the TOPS framework, we have explored the uncertainty arising from the application of models with different assumptions, structures, parameters, and variable definitions. With a small number of models, this only begins to capture the range of possible spatial fields of ecosystem fluxes. Few attempts have been made to systematically address the components of uncertainty in such a framework. We discuss the characterization of uncertainty for this approach including both quantifiable and poorly known aspects.
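A toy illustration of how structural uncertainty might be summarized from such an ensemble (all numbers and variable names are invented, not TOPS output):

```python
# Structural uncertainty proxy: spread of predictions across an ensemble of
# models with different structures and assumptions. Values are invented.
import numpy as np

# rows: 4 models; columns: 5 grid cells; values: e.g. net primary production
ensemble = np.array([[510., 620., 480., 700., 555.],
                     [495., 640., 450., 760., 530.],
                     [530., 600., 500., 690., 570.],
                     [470., 655., 430., 720., 540.]])

mean = ensemble.mean(axis=0)
spread = ensemble.std(axis=0, ddof=1)   # inter-model standard deviation
print(np.round(spread / mean, 3))       # relative disagreement per grid cell
```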
Ikushima, Koujiro; Arimura, Hidetaka; Jin, Ze; Yabu-Uchi, Hidetake; Kuwazuru, Jumpei; Shioyama, Yoshiyuki; Sasaki, Tomonari; Honda, Hiroshi; Sasaki, Masayuki
2017-01-01
We have proposed a computer-assisted framework for machine-learning-based delineation of gross tumor volumes (GTVs) following an optimum contour selection (OCS) method. The key idea of the proposed framework was to feed image features around GTV contours (determined based on the knowledge of radiation oncologists) into a machine-learning classifier during the training step, after which the classifier produces the 'degree of GTV' for each voxel in the testing step. Initial GTV regions were extracted using a support vector machine (SVM) that learned the image features inside and outside each tumor region (determined by radiation oncologists). A leave-one-out-by-patient test was employed for the training and testing steps of the proposed framework. The final GTV regions were determined using the OCS method, which can select a globally optimum object contour based on multiple active delineations with a level set method (LSM) around the GTV. The efficacy of the proposed framework was evaluated in 14 lung cancer cases [solid: 6, ground-glass opacity (GGO): 4, mixed GGO: 4] using the 3D Dice similarity coefficient (DSC), which denotes the degree of region similarity between the GTVs contoured by radiation oncologists and those determined using the proposed framework. The proposed framework achieved an average DSC of 0.777 for the 14 cases, whereas the OCS-based framework produced an average DSC of 0.507. The average DSCs for GGO and mixed GGO obtained by the proposed framework were 0.763 and 0.701, respectively. The proposed framework can be employed as a tool to assist radiation oncologists in delineating various GTV regions. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
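The evaluation metric is simple to state in code; below is a minimal sketch of the 3D Dice similarity coefficient on hypothetical boolean voxel masks (not the study's implementation):

```python
# Minimal DSC sketch: DSC = 2|A∩B| / (|A| + |B|) for boolean voxel masks.
# Masks are invented for illustration.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

gtv_manual = np.zeros((32, 32, 32), bool); gtv_manual[10:20, 10:20, 10:20] = True
gtv_auto   = np.zeros((32, 32, 32), bool); gtv_auto[12:22, 10:20, 10:20] = True
print(dice(gtv_manual, gtv_auto))   # 0.8 for these masks; 1.0 = perfect overlap
```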
An Optimal Order Nonnested Mixed Multigrid Method for Generalized Stokes Problems
NASA Technical Reports Server (NTRS)
Deng, Qingping
1996-01-01
A multigrid algorithm is developed and analyzed for generalized Stokes problems discretized by various nonnested mixed finite elements within a unified framework. It is abstractly proved by an element-independent analysis that the multigrid algorithm converges with an optimal order if there exists a 'good' prolongation operator. A technique to construct a 'good' prolongation operator for nonnested multilevel finite element spaces is proposed. Its basic idea is to introduce a sequence of auxiliary nested multilevel finite element spaces and define a prolongation operator as a composite of two single-grid-level operators. This not only makes the construction of a prolongation operator much easier (the final explicit forms of such prolongation operators are fairly simple), but also simplifies the verification of the approximation properties of prolongation operators. Finally, as an application, the framework and technique are applied to seven typical nonnested mixed finite elements.
Moist entropy and water isotopologues in a Walker-type circulation framework of the MJO
NASA Astrophysics Data System (ADS)
Hurley, J. V.; Noone, D.
2017-12-01
The MJO is the principal source of tropical intraseasonal variability, yet we struggle to accurately simulate its observed convective behavior and eastward propagation. There is a continued need to evaluate the role of water within the MJO, including evaporation, vertical transport, precipitation, and latent heating of the coupled atmosphere-ocean system. Isotopes are particularly useful for investigating these aspects of the water cycle. Recent contributions to resolving this include analyses of the joint distribution of water vapor and isotopologue concentrations (δDv) to identify shortcomings in modeling MJO humidity, clouds, or convection. Here, we complement the mixing ratio versus isotope approach with analyses of moist entropy, to distinguish the roles of convective and large-scale dynamic processes through the phases of the MJO. We do this in the classic MJO framework of the tropical Walker-type circulations. In this framework, the MJO can be characterized by strengthening and eastward expansion, and subsequent weakening and contraction, of the tropical stream function over the Indian Ocean. Low-troposphere westerlies converge with easterlies, giving rise to uplift, convection, and precipitation at a longitude that propagates east from 88°E to 136°E. In the composite structure of the MJO, wet equivalent potential temperature (θq) anomalies have their maximum expression at 500 hPa and tilt westward with altitude. A positive θq anomaly occurs over the uplift and precipitation, and negative θq anomalies both trail and lead the convective center, along subsiding branches of the stream function anomalies. Out of phase with θq, δDv anomalies are positive east of and negative trailing or west of the convective center, suggesting moistening of the atmosphere with limited precipitation efficiency. MJO phase tendencies show that θq is coherent with precipitation and δDv is coherent with the tropical stream function, thus tying moist entropy to convective processes and isotope ratios to the large-scale dynamics. Joint distributions of MJO mixing ratio versus δDv are near or below Rayleigh curves, but θq is higher than would be expected for simple Rayleigh fractionation. To resolve this, we assess MJO θq versus mixing ratio and find that vertical mixing likely occurs between the stratosphere and lower troposphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, Birchard P; Michel, Kelly D; Few, Douglas A
From stereophonic, positional sound to high-definition imagery that is crisp and clean, high-fidelity computer graphics enhance our view, insight, and intuition regarding our environments and conditions. Contemporary 3-D modeling tools offer an open architecture framework that enables integration with other technologically innovative arenas. One innovation of great interest is Augmented Reality, the merging of virtual, digital environments with physical, real-world environments, creating a mixed reality where relevant data and information augment the real or actual experience in real time by spatial or semantic context. Pairing 3-D virtual immersive models with a dynamic platform, such as semi-autonomous robotics or personnel odometry systems, to create a mixed reality offers a new and innovative design-information verification inspection capability, improved evaluation accuracy, and enhanced information gathering for nuclear facilities. Our paper discusses the integration of two innovative technologies, 3-D visualizations and inertial positioning systems, and the resulting augmented reality offered to the human inspector. The discussion in the paper includes an exploration of human and non-human (surrogate) inspections of a nuclear facility, integrated safeguards knowledge within a synchronized virtual model operated, or worn, by a human inspector, and the anticipated benefits to safeguards evaluations of facility operations.
The evolution of labile traits in sex- and age-structured populations.
Childs, Dylan Z; Sheldon, Ben C; Rees, Mark
2016-03-01
Many quantitative traits are labile (e.g. somatic growth rate, reproductive timing and investment), varying over the life cycle as a result of behavioural adaptation, developmental processes and plastic responses to the environment. At the population level, selection can alter the distribution of such traits across age classes and among generations. Despite a growing body of theoretical research exploring the evolutionary dynamics of labile traits, a data-driven framework for incorporating such traits into demographic models has not yet been developed. Integral projection models (IPMs) are increasingly being used to understand the interplay between changes in labile characters, life histories and population dynamics. One limitation of the IPM approach is that it relies on phenotypic associations between parents and offspring traits to capture inheritance. However, it is well-established that many different processes may drive these associations, and currently, no clear consensus has emerged on how to model micro-evolutionary dynamics in an IPM framework. We show how to embed quantitative genetic models of inheritance of labile traits into age-structured, two-sex models that resemble standard IPMs. Commonly used statistical tools such as GLMs and their mixed model counterparts can then be used for model parameterization. We illustrate the methodology through development of a simple model of egg-laying date evolution, parameterized using data from a population of Great tits (Parus major). We demonstrate how our framework can be used to project the joint dynamics of species' traits and population density. We then develop a simple extension of the age-structured Price equation (ASPE) for two-sex populations, and apply this to examine the age-specific contributions of different processes to change in the mean phenotype and breeding value. The data-driven framework we outline here has the potential to facilitate greater insight into the nature of selection and its consequences in settings where focal traits vary over the lifetime through ontogeny, behavioural adaptation and phenotypic plasticity, as well as providing a potential bridge between theoretical and empirical studies of labile trait variation. © 2016 The Authors Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
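To make the IPM machinery concrete, here is a schematic midpoint-rule discretization of a survival-growth kernel projecting a trait distribution one step ahead; all kernels and parameters are invented for illustration, and reproduction and the quantitative-genetic inheritance component are omitted:

```python
# Schematic IPM sketch: discretize a continuous trait z on a mesh, build a
# survival-growth kernel K(z', z), and project the distribution one step.
import numpy as np

m, lo, hi = 100, 0.0, 10.0                  # mesh for trait z (e.g. laying date)
z = lo + (np.arange(m) + 0.5) * (hi - lo) / m
h = (hi - lo) / m

surv = 1 / (1 + np.exp(-(z - 5.0)))         # toy survival rising with z
grow = np.exp(-(z[:, None] - z[None, :] - 0.2) ** 2 / 0.5)   # z' given z
grow /= grow.sum(axis=0, keepdims=True) * h                  # normalize columns

K = grow * surv[None, :] * h                # discretized kernel (no reproduction)
n = np.exp(-(z - 3.0) ** 2)                 # initial trait distribution
n1 = K @ n                                  # one projection step
print(float((z * n1).sum() / n1.sum()))     # mean trait after selection+growth
```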
A social-cognitive framework of multidisciplinary team innovation.
Paletz, Susannah B F; Schunn, Christian D
2010-01-01
The psychology of science typically lacks integration between cognitive and social variables. We present a new framework of team innovation in multidisciplinary science and engineering groups that ties factors from both literatures together. We focus on the effects of a particularly challenging social factor, knowledge diversity, which has a history of mixed effects on creativity, most likely because those effects are mediated and moderated by cognitive and additional social variables. In addition, we highlight the distinction between team innovative processes that are primarily divergent versus convergent; we propose that the social and cognitive implications are different for each, providing a possible explanation for knowledge diversity's mixed results on team outcomes. Social variables mapped out include formal roles, communication norms, sufficient participation and information sharing, and task conflict; cognitive variables include analogy, information search, and evaluation. This framework provides a roadmap for research that aims to harness the power of multidisciplinary teams. Copyright © 2009 Cognitive Science Society, Inc.
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques are compared: a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays, and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replicate QMM assays, improving reliability and inter-assay homogeneity and providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
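For contrast with the Bayesian mixed model, the following sketch (all numbers invented) shows the "traditional" calibration the paper critiques: fit a straight line to standards of known density, then invert it for unknowns, with no accounting for inter-assay variability:

```python
# Traditional calibration sketch: regress assay signal on log10 density of
# standards, then invert for unknowns. Values are invented for illustration.
import numpy as np

log10_density = np.array([1, 2, 3, 4, 5], float)    # known standards
signal = np.array([33.1, 29.8, 26.7, 23.4, 20.2])   # e.g. QT-NASBA time-to-positivity

slope, intercept = np.polyfit(log10_density, signal, 1)

def estimate_density(s):
    # Inverse prediction; ignores inter-assay variability, which is the
    # shortcoming the Bayesian mixed model approach is designed to fix.
    return 10 ** ((s - intercept) / slope)

print(estimate_density(25.0))
```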
Fully constrained Majorana neutrino mass matrices using Σ(72×3)
NASA Astrophysics Data System (ADS)
Krishnan, R.; Harrison, P. F.; Scott, W. G.
2018-01-01
In 2002, two neutrino mixing ansatze having trimaximally mixed middle (ν₂) columns, namely tri-chi-maximal mixing (TχM) and tri-phi-maximal mixing (TφM), were proposed. In 2012, it was shown that TχM with χ = ±π/16, as well as TφM with φ = ±π/16, leads to the solution sin²θ₁₃ = (2/3) sin²(π/16), consistent with the latest measurements of the reactor mixing angle θ₁₃. To obtain TχM(χ = ±π/16) and TφM(φ = ±π/16), the type I see-saw framework with fully constrained Majorana neutrino mass matrices was utilised. These mass matrices also resulted in the neutrino mass ratios m₁ : m₂ : m₃ = (2+√2)/(1+√(2(2+√2))) : 1 : (2+√2)/(−1+√(2(2+√2))). In this paper we construct a flavour model based on the discrete group Σ(72×3) and obtain the aforementioned results. A Majorana neutrino mass matrix (a symmetric 3×3 matrix with six complex degrees of freedom) is conveniently mapped into a flavon field transforming as the complex six-dimensional representation of Σ(72×3). Specific vacuum alignments of the flavons are used to arrive at the desired mass matrices.
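As a quick arithmetic check of the quoted prediction (our calculation, not from the paper):

```latex
\sin^2\theta_{13} \;=\; \tfrac{2}{3}\,\sin^2\!\frac{\pi}{16}
 \;=\; \tfrac{2}{3}\,(0.19509)^2 \;\approx\; 0.0254,
\qquad
\theta_{13} \;=\; \arcsin\sqrt{0.0254} \;\approx\; 9.2^\circ,
```

which is of the same order as the reactor-experiment determinations of θ₁₃ available at the time.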
NASA Astrophysics Data System (ADS)
Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan
2016-10-01
This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems; an uncertain ensemble combining non-parametric uncertainties with mixed fuzzy and interval parametric uncertainties thus arises. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA (FFIPFE/SEA) method and a second-order fuzzy interval perturbation FE/SEA (SFIPFE/SEA) method are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by considering the second-order terms of the Taylor series, in which all the mixed second-order terms are neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
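The first-order idea is easy to sketch; the toy Python below (our construction, not FFIPFE/SEA itself) propagates a single interval parameter through a response function via a Taylor expansion at the interval midpoint:

```python
# First-order interval perturbation sketch: approximate the response interval
# from the midpoint value and sensitivity. Response function is a toy.
import numpy as np

def response(k):            # toy frequency-response magnitude
    return 1.0 / np.sqrt((k - 4.0) ** 2 + 0.25)

k_mid, k_rad = 2.0, 0.1     # interval parameter k in [1.9, 2.1]
eps = 1e-6
slope = (response(k_mid + eps) - response(k_mid - eps)) / (2 * eps)

r_mid = response(k_mid)
r_rad = abs(slope) * k_rad              # first-order radius of the response
print(r_mid - r_rad, r_mid + r_rad)     # approximate response interval bounds
```

At each cut level of the fuzzy parameters, bounds like these would be assembled across levels to reconstruct the fuzzy-bounded interval.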
Mixed Resilience: A Study of Multiethnic Mexican American Stress and Coping in Arizona
ERIC Educational Resources Information Center
Jackson, Kelly F.; Wolven, Thera; Aguilera, Kimberly
2013-01-01
Guided by an integrated framework of resilience, this in-depth qualitative study examined the major stressors persons of multiethnic Mexican American heritage encountered in their social environments related to their mixed identity and the resilience enhancing processes they employed to cope with these stressors. Life-story event narratives were…
Mixed Methods Research with Internally Displaced Colombian Gay and Bisexual Men and Transwomen
ERIC Educational Resources Information Center
Zea, Maria Cecilia; Aguilar-Pardo, Marcela; Betancourt, Fabian; Reisen, Carol A.; Gonzales, Felisa
2014-01-01
We discuss the use of mixed methods research to further understanding of displaced Colombian gay and bisexual men and transwomen, a marginalized population at risk. Within the framework of communicative action, which calls for social change through egalitarian dialog, we describe how our multinational, interdisciplinary research team explored the…
Using computer simulations to facilitate conceptual understanding of electromagnetic induction
NASA Astrophysics Data System (ADS)
Lee, Yu-Fen
This study investigated the use of computer simulations to facilitate conceptual understanding in physics. The use of computer simulations in the present study was grounded in a conceptual framework drawn from findings related to the use of computer simulations in physics education. To achieve the goal of effective utilization of computers for physics education, I first reviewed studies pertaining to computer simulations in physics education, categorized by three different learning frameworks, along with studies comparing the effects of different simulation environments. My intent was to identify the learning contexts and factors for successful use of computer simulations in past studies and to learn from the studies which did not obtain a significant result. Based on the analysis of the reviewed literature, I proposed effective approaches to integrating computer simulations in physics education. These approaches are consistent with well-established education principles such as those suggested by How People Learn (Bransford, Brown, Cocking, Donovan, & Pellegrino, 2000). The research-based approaches to integrating computer simulations in physics education form a learning framework called Concept Learning with Computer Simulations (CLCS) in the current study. The second component of this study was to examine the CLCS learning framework empirically. The participants were recruited from a public high school in Beijing, China. All participating students were randomly assigned to two groups, the experimental (CLCS) group and the control (TRAD) group. Research-based computer simulations developed by the physics education research group at the University of Colorado at Boulder were used to tackle common conceptual difficulties in learning electromagnetic induction. While interacting with the computer simulations, CLCS students were asked to answer reflective questions designed to stimulate qualitative reasoning and explanation. After receiving model reasoning online, students were asked to submit their revised answers electronically. Students in the TRAD group were not granted access to the CLCS material and followed their normal classroom routine. At the end of the study, both the CLCS and TRAD students took a post-test. Questions on the post-test were divided into 'what' questions, 'how' questions, and an open response question. Analysis of students' post-test performance showed mixed results. While the TRAD students scored higher on the 'what' questions, the CLCS students scored higher on the 'how' questions and the open response question. This result suggested that more TRAD students knew what kinds of conditions may or may not cause electromagnetic induction without understanding how electromagnetic induction works. Analysis of the CLCS students' learning also suggested that frequent disruption and technical trouble might pose threats to the effectiveness of the CLCS learning framework. Given the mixed post-test results, the CLCS learning framework showed promise for promoting conceptual understanding in physics while also revealing some limitations. Improvement can be made by providing students with the background knowledge necessary to understand model reasoning and by incorporating the CLCS learning framework with other learning frameworks to promote integration of various physics concepts. In addition, the reflective questions in the CLCS learning framework may be refined to better address students' difficulties. Limitations of the study, as well as suggestions for future research, are also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han Yinfeng; Department of Chemistry and Environmental Science, Taishan University, Taian 271021; Fu Lianshe
Three mixed europium-yttrium organic frameworks, Eu₂₋ₓYₓ(Mel)(H₂O)₆ (Mel = mellitic acid, i.e. benzene-1,2,3,4,5,6-hexacarboxylic acid; x = 0.38 (1), 0.74 (2), and 0.86 (3)), have been synthesized and characterized. All the compounds contain a 3-D net with (4, 8)-flu topology. The study indicates that the photoluminescence properties are effectively tuned by the different ratios of europium and yttrium ions: the quantum efficiency is increased and the Eu³⁺ lifetime becomes longer in these MOFs than in the Eu analog. Highlights: three (4, 8)-flu topological mixed Eu and Y MOFs were synthesized under mild conditions; metal ratios refined from the single-crystal data are consistent with the EDS analysis; the mixed Eu and Y MOFs show longer lifetimes and higher quantum efficiencies than the Eu analog; adding an inert lanthanide into luminescent MOFs enlarges the field of luminescent MOFs.
A cell-free framework for rapid biosynthetic pathway prototyping and enzyme discovery.
Karim, Ashty S; Jewett, Michael C
2016-07-01
Speeding up design-build-test (DBT) cycles is a fundamental challenge facing biochemical engineering. To address this challenge, we report a new cell-free protein synthesis driven metabolic engineering (CFPS-ME) framework for rapid biosynthetic pathway prototyping. In our framework, cell-free cocktails for synthesizing target small molecules are assembled in a mix-and-match fashion from crude cell lysates either containing selectively enriched pathway enzymes from heterologous overexpression or directly producing pathway enzymes in lysates by CFPS. As a model, we apply our approach to n-butanol biosynthesis showing that Escherichia coli lysates support a highly active 17-step CoA-dependent n-butanol pathway in vitro. The elevated degree of flexibility in the cell-free environment allows us to manipulate physiochemical conditions, access enzymatic nodes, discover new enzymes, and prototype enzyme sets with linear DNA templates to study pathway performance. We anticipate that CFPS-ME will facilitate efforts to define, manipulate, and understand metabolic pathways for accelerated DBT cycles without the need to reengineer organisms. Copyright © 2016 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
Let's Talk Learning Analytics: A Framework for Implementation in Relation to Student Retention
ERIC Educational Resources Information Center
West, Deborah; Heath, David; Huijser, Henk
2016-01-01
This paper presents a dialogical tool for the advancement of learning analytics implementation for student retention in Higher Education institutions. The framework was developed as an outcome of a project commissioned and funded by the Australian Government's "Office for Learning and Teaching". The project took a mixed-method approach…
Just-in-Time Pedagogy: Teachers' Perspectives on the Response to Intervention Framework
ERIC Educational Resources Information Center
Wilcox, Kathleen A.; Murakami-Ramalho, Elizabeth; Urick, Angela
2013-01-01
The purpose of this mixed methods research is to examine teachers' perspectives on the response to intervention (RTI) framework and its implementation in Michigan and Texas schools. Both states have been leaders in literacy, increasing preservice and in-service teacher certification standards and developing similar batteries for assessing literacy…
Head Teachers' Experiences of School Inspection under Ofsted's January 2012 Framework
ERIC Educational Resources Information Center
Courtney, Steven J.
2013-01-01
This article focuses on head teachers' experiences of inspection under Ofsted's revised school inspection framework, their views of its principles and its implications for school leaders and leadership. The article draws on findings from a mixed-methods study to show that inspections are more focused on pupils' attainment and progress. Head…
Interpretation of Radiological Images: Towards a Framework of Knowledge and Skills
ERIC Educational Resources Information Center
van der Gijp, A.; van der Schaaf, M. F.; van der Schaaf, I. C.; Huige, J. C. B. M.; Ravesloot, C. J.; van Schaik, J. P. J.; ten Cate, Th. J.
2014-01-01
The knowledge and skills that are required for radiological image interpretation are not well documented, even though medical imaging is gaining importance. This study aims to develop a comprehensive framework of knowledge and skills, required for two-dimensional and multiplanar image interpretation in radiology. A mixed-method study approach was…
Wenzl, Martin; Naci, Huseyin; Mossialos, Elias
2017-09-01
The objective of this paper is to provide a framework for evaluation of changes in health policy against overarching health system goals. We propose a categorisation of policies into seven distinct health system domains. We then develop existing analytical concepts of insurance coverage and cost-effectiveness further to evaluate the effects of policies in each domain on equity and efficiency. The framework is illustrated with likely effects of policy changes implemented in a sample of European countries since 2008. Our illustrative analysis suggests that cost containment has been the main focus and that countries have implemented a mix of measures that are efficient or efficiency neutral. Similarly, policies are likely to have mixed effects on equity. Additional user charges were a common theme but these were frequently accompanied by additional exemptions, making their likely effects on equity difficult to evaluate. We provide a framework for future, and more detailed, evaluations of changes in health policy. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Virtual machine-based simulation platform for mobile ad-hoc network-based cyber infrastructure
Yoginath, Srikanth B.; Perumalla, Kayla S.; Henz, Brian J.
2015-09-29
In modeling and simulating complex systems such as mobile ad-hoc networks (MANETs) in defense communications, it is a major challenge to reconcile multiple important considerations: the rapidity of unavoidable changes to the software (network layers and applications), the difficulty of modeling the critical, implementation-dependent behavioral effects, the need to sustain larger scale scenarios, and the desire for faster simulations. Here we present our approach to successfully reconciling them using a virtual-time-synchronized virtual machine (VM)-based parallel execution framework that accurately lifts both the devices and the network communications to a virtual time plane while retaining full fidelity. At the core of our framework is a scheduling engine that operates at the level of a hypervisor scheduler, offering a unique ability to execute multi-core guest nodes over multi-core host nodes in an accurate, virtual-time-synchronized manner. In contrast to other related approaches that suffer from either speed or accuracy issues, our framework provides MANET node-wise scalability, high fidelity of software behaviors, and time-ordering accuracy. The design and development of this framework is presented, and an actual implementation based on the widely used Xen hypervisor system is described. Benchmarks with synthetic and actual applications are used to identify the benefits of our approach. The time inaccuracy of traditional emulation methods is demonstrated, in comparison with the accurate execution of our framework, verified against the theoretically correct results expected from analytical models of the same scenarios. In the largest high-fidelity tests, we are able to perform virtual-time-synchronized simulation of 64-node VM-based full-stack, actual software behaviors of MANETs containing a mix of static and mobile (unmanned airborne vehicle) nodes, hosted on a 32-core host, with full fidelity of unmodified ad-hoc routing protocols, unmodified application executables, and user-controllable physical layer effects including inter-device wireless signal strength, reachability, and connectivity.
NASA Astrophysics Data System (ADS)
Wells, Aaron Raymond
This research focuses on the Emory and Obed Watersheds in the Cumberland Plateau in Central Tennessee and the Lower Hatchie River Watershed in West Tennessee. A framework based on market and nonmarket valuation techniques was used to empirically estimate economic values for environmental amenities and negative externalities in these areas. The specific techniques employed include a variation of hedonic pricing and discrete choice conjoint analysis (i.e., choice modeling), in addition to geographic information systems (GIS) and remote sensing. Microeconomic models of agent behavior, including random utility theory and profit maximization, provide the principal theoretical foundation linking valuation techniques and econometric models. The generalized method of moments estimator for a first-order spatial autoregressive function and mixed logit models are the principal econometric methods applied within the framework. The dissertation is subdivided into three separate chapters written in a manuscript format. The first chapter provides the necessary theoretical and mathematical conditions that must be satisfied in order for a forest amenity enhancement program to be implemented. These conditions include utility, value, and profit maximization. The second chapter evaluates the effect of forest land cover and information about future land use change on respondent preferences and willingness to pay for alternative hypothetical forest amenity enhancement options. Land use change information and the amount of forest land cover significantly influenced respondent preferences, choices, and stated willingness to pay. Hicksian welfare estimates for proposed enhancement options ranged from $57.42 to $25.53, depending on the policy specification, information level, and econometric model. The third chapter presents economic values for negative externalities associated with channelization that affect the productivity and overall market value of forested wetlands. Results of robust, generalized moments estimation of a double logarithmic first-order spatial autoregressive error model (inverse distance weights with spatial dependence up to 1500 m) indicate that the implicit cost of damages to forested wetlands caused by channelization equaled -$5,438 ha⁻¹. Collectively, the results of this dissertation provide economic measures of the damages to and benefits of environmental assets, help private landowners and policy makers identify the amenity attributes preferred by the public, and improve the management of natural resources.
Managing time-substitutable electricity usage using dynamic controls
Ghosh, Soumyadip; Hosking, Jonathan R.; Natarajan, Ramesh; Subramaniam, Shivaram; Zhang, Xiaoxuan
2017-02-07
A predictive-control approach allows an electricity provider to monitor and proactively manage peak and off-peak residential intra-day electricity usage in an emerging smart energy grid using time-dependent dynamic pricing incentives. The daily load is modeled as time-shifted, but cost-differentiated and substitutable, copies of the continuously-consumed electricity resource, and a consumer-choice prediction model is constructed to forecast the corresponding intra-day shares of total daily load according to this model. This is embedded within an optimization framework for managing the daily electricity usage. A series of transformations are employed, including the reformulation-linearization technique (RLT) to obtain a Mixed-Integer Programming (MIP) model representation of the resulting nonlinear optimization problem. In addition, various regulatory and pricing constraints are incorporated in conjunction with the specified profit and capacity utilization objectives.
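A stripped-down sketch of the load-shifting core (our toy linear program, not the patent's model, which adds RLT-linearized consumer-choice terms and integer variables, making it a MIP):

```python
# Toy load-shifting LP: serve a fixed daily demand across peak/off-peak
# periods at minimum cost under capacity limits. All numbers are invented.
from scipy.optimize import linprog

prices = [0.30, 0.12]        # $/kWh: peak, off-peak (dynamic-pricing incentives)
total_load = 40.0            # kWh/day of time-substitutable demand
caps = [25.0, 30.0]          # per-period capacity limits

res = linprog(
    c=prices,                                # minimize total cost
    A_eq=[[1.0, 1.0]], b_eq=[total_load],    # the load must be served somewhere
    bounds=[(0, caps[0]), (0, caps[1])],
)
print(res.x)   # shifts as much as possible off-peak: [10., 30.]
```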
Stability analysis of magnetized neutron stars - a semi-analytic approach
NASA Astrophysics Data System (ADS)
Herbrik, Marlene; Kokkotas, Kostas D.
2017-04-01
We implement a semi-analytic approach for stability analysis, addressing the ongoing uncertainty about stability and structure of neutron star magnetic fields. Applying the energy variational principle, a model system is displaced from its equilibrium state. The related energy density variation is set up analytically, whereas its volume integration is carried out numerically. This facilitates the consideration of more realistic neutron star characteristics within the model compared to analytical treatments. At the same time, our method retains the possibility to yield general information about neutron star magnetic field and composition structures that are likely to be stable. In contrast to numerical studies, classes of parametrized systems can be studied at once, finally constraining realistic configurations for interior neutron star magnetic fields. We apply the stability analysis scheme on polytropic and non-barotropic neutron stars with toroidal, poloidal and mixed fields testing their stability in a Newtonian framework. Furthermore, we provide the analytical scheme for dropping the Cowling approximation in an axisymmetric system and investigate its impact. Our results confirm the instability of simple magnetized neutron star models as well as a stabilization tendency in the case of mixed fields and stratification. These findings agree with analytical studies whose spectrum of model systems we extend by lifting former simplifications.
Lee, Jaeyoung; Yasmin, Shamsunnahar; Eluru, Naveen; Abdel-Aty, Mohamed; Cai, Qing
2018-02-01
In the traffic safety literature, crash frequency variables are analyzed using univariate or multivariate count models. In this study, we propose an alternative approach to modeling multiple crash frequency dependent variables: instead of modeling the frequency of crashes, we analyze the proportion of crashes by vehicle type. A flexible mixed multinomial logit fractional split model is employed for analyzing the proportions of crashes by vehicle type at the macro level. In this model, the proportion allocated to an alternative is probabilistically determined based on the propensity of that alternative as well as the propensities of all other alternatives; thus, exogenous variables directly affect all alternatives. The approach is well suited to accommodating a large number of alternatives without a sizable increase in computational burden. The model was estimated using crash data at the Traffic Analysis Zone (TAZ) level from Florida. The modeling results clearly illustrate the applicability of the proposed framework for crash proportion analysis. Further, the Excess Predicted Proportion (EPP), a screening performance measure analogous to the Highway Safety Manual (HSM) Excess Predicted Average Crash Frequency, is proposed for hot zone identification. Using EPP, a statewide screening exercise for the various vehicle types considered in our analysis was undertaken. The screening results revealed that the spatial pattern of hot zones is substantially different across the vehicle types considered. Copyright © 2017 Elsevier Ltd. All rights reserved.
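To illustrate the fractional split mechanics, this numpy sketch (coefficients and covariates invented, not the study's estimates) computes multinomial-logit shares in which every exogenous variable affects all alternatives:

```python
# Multinomial-logit fractional split sketch: zone-level covariates produce
# predicted proportions of crashes by vehicle type. All values are invented.
import numpy as np

X = np.array([[1.0, 0.8, 2.3],        # each row: one TAZ's covariates
              [1.0, 0.1, 5.0]])       # (intercept, density, VMT, say)
beta = np.array([[0.0, 0.0, 0.0],     # base alternative (e.g. car) fixed at 0
                 [-0.5, 0.2, 0.1],    # truck
                 [-1.2, 0.4, -0.3]])  # motorcycle

util = X @ beta.T                     # propensity of each vehicle type
shares = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
print(shares.round(3))                # rows sum to 1: crash proportions per TAZ
```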
Redesigning inpatient care: Testing the effectiveness of an accountable care team model.
Kara, Areeba; Johnson, Cynthia S; Nicley, Amy; Niemeier, Michael R; Hui, Siu L
2015-12-01
US healthcare underperforms on quality and safety metrics, and inpatient care constitutes an immense opportunity to intervene to improve care. Here we describe a model of inpatient care and measure its impact through a quantitative assessment of its implementation. The graded implementation of the model allowed us to follow outcomes and measure their association with the dose of the implementation. The setting was inpatient medical and surgical units in a large academic health center. Eight interventions rooted in improving interprofessional collaboration (IPC), enabling data-driven decisions, and providing leadership were implemented. Outcome data from August 2012 to December 2013 were analyzed using generalized linear mixed models for associations with the implementation of the model. Length of stay (LOS) index, case-mix index-adjusted variable direct costs (CMI-adjusted VDC), 30-day readmission rates, overall patient satisfaction scores, and provider satisfaction with the model were measured. The implementation of the model was associated with decreases in LOS index (P < 0.0001) and CMI-adjusted VDC (P = 0.0006). We did not detect improvements in readmission rates or patient satisfaction scores. Most providers (95.8%, n = 92) agreed that the model had improved the quality and safety of the care delivered. Creating an environment and framework in which IPC is fostered, performance data are transparently available, and leadership is provided may improve value on both medical and surgical units. These interventions appear to be well accepted by front-line staff. Readmission rates and patient satisfaction remain challenging. © 2015 Society of Hospital Medicine.
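A hedged sketch of the kind of mixed model used for such dose-outcome association analyses; the column names, data, and the statsmodels choice are ours, not the study's:

```python
# Linear mixed model sketch: fixed effect of implementation dose on LOS index,
# random intercept per unit. Data are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "los_index": [1.10, 1.05, 0.98, 0.96, 1.20, 1.02,
                  0.95, 0.93, 1.00, 0.97, 0.92, 0.90],
    "dose":      [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3],  # interventions in place
    "unit":      ["A", "A", "A", "A", "B", "B",
                  "B", "B", "C", "C", "C", "C"],
})

model = smf.mixedlm("los_index ~ dose", df, groups=df["unit"]).fit()
print(model.summary())   # a negative dose coefficient mirrors the reported trend
```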
Yee, Susan Harrell; Barron, Mace G
2010-02-01
Coral reefs have experienced extensive mortality over the past few decades as a result of temperature-induced mass bleaching events. There is an increasing realization that other environmental factors, including water mixing, solar radiation, water depth, and water clarity, interact with temperature to either exacerbate bleaching or protect coral from mass bleaching. The relative contribution of these factors to variability in mass bleaching at a global scale has not been quantified, but can provide insights when making large-scale predictions of mass bleaching events. Using data from 708 bleaching surveys across the globe, a framework was developed to predict the probability of moderate or severe bleaching as a function of key environmental variables derived from global-scale remote-sensing data. The ability of models to explain spatial and temporal variability in mass bleaching events was quantified. Results indicated approximately 20% improved accuracy of predictions of bleaching when solar radiation and water mixing, in addition to elevated temperature, were incorporated into models, but predictive accuracy was variable among regions. Results provide insights into the effects of environmental parameters on bleaching at a global scale.
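A minimal sketch of this kind of bleaching model, assuming synthetic data in place of the 708 surveys: the probability of moderate or severe bleaching as a logistic (binomial GLM) function of temperature, solar radiation, and water mixing. Variable names and coefficients are illustrative, not the study's fitted values.

    # Logistic regression of bleaching occurrence on environmental covariates.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 708                                   # one row per bleaching survey
    temp, solar, mixing = rng.normal(size=(3, n))
    logit_p = -0.5 + 1.2 * temp + 0.6 * solar - 0.8 * mixing
    bleached = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(np.column_stack([temp, solar, mixing]))
    fit = sm.GLM(bleached, X, family=sm.families.Binomial()).fit()
    print(fit.params)      # approximately recovers the generating coefficients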
Yannouleas, Constantine; Romanovsky, Igor; Landman, Uzi
2015-01-20
Graphene's isolation launched explorations of fundamental relativistic physics originating from the planar honeycomb lattice arrangement of the carbon atoms, and of potential technological applications in nanoscale electronics. Bottom-up fabricated atomically-precise segmented graphene nanoribbons, SGNRs, open avenues for studies of electrical transport, coherence, and interference effects in metallic, semiconducting, and mixed GNRs, with different edge terminations. Conceptual and practical understanding of electric transport through SGNRs is gained through nonequilibrium Green's function (NEGF) conductance calculations and a Dirac continuum model that absorbs the valence-to-conductance energy gaps as position-dependent masses, including topological-in-origin mass-barriers at the contacts between segments. The continuum model reproduces the NEGF results, including optical Dirac Fabry-Pérot (FP) equidistant oscillations for massless relativistic carriers in metallic armchair SGNRs, and an unequally-spaced FP pattern for mixed armchair-zigzag SGNRs where carriers transit from a relativistic (armchair) to a nonrelativistic (zigzag) regime. This provides a unifying framework for analysis of coherent transport phenomena and interpretation of forthcoming experiments in SGNRs.
Bacterial growth, flow, and mixing shape human gut microbiota density and composition.
Arnoldini, Markus; Cremer, Jonas; Hwa, Terence
2018-03-13
The human gut microbiota is highly dynamic, and host physiology and diet exert major influences on its composition. In our recent study, we integrated new quantitative measurements on bacterial growth physiology with a reanalysis of published data on human physiology to build a comprehensive modeling framework. This can generate predictions of how changes in different host factors influence microbiota composition. For instance, hydrodynamic forces in the colon, along with colonic water absorption that manifests as transit time, exert a major impact on microbiota density and composition. This can be mechanistically explained by their effect on colonic pH which directly affects microbiota competition for food. In this addendum, we describe the underlying analysis in more detail. In particular, we discuss the mixing dynamics of luminal content by wall contractions and its implications for bacterial growth and density, as well as the broader implications of our insights for the field of gut microbiota research.
Bulk and surface properties of liquid Al-Cr and Cr-Ni alloys.
Novakovic, R
2011-06-15
The energetics of mixing and structural arrangement in liquid Al-Cr and Cr-Ni alloys have been analysed through the study of surface properties (surface tension and surface segregation), dynamic properties (chemical diffusion) and microscopic functions (concentration fluctuations in the long-wavelength limit and the chemical short-range order parameter) in the framework of statistical mechanical theory in conjunction with quasi-lattice theory. The Al-Cr phase diagram exhibits the existence of different intermetallic compounds in the solid state, while that of Cr-Ni is a simple eutectic-type phase diagram at high temperatures and includes a low-temperature peritectoid reaction in the range near the CrNi2 composition. Accordingly, the mixing behaviour in Al-Cr and Cr-Ni alloy melts was studied using the complex formation model in the weak interaction approximation, postulating Al8Cr5 and CrNi2 chemical complexes, respectively, as energetically favoured.
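The concentration fluctuations in the long-wavelength limit mentioned above have a compact closed form for a regular solution, which the sketch below evaluates; the interaction energy w and the temperature are placeholders, not fitted Al-Cr or Cr-Ni values.

    # S_cc(0) for a regular solution: ordering melts fall below the ideal
    # value x(1-x), segregating melts rise above it.
    import numpy as np

    R, T = 8.314, 2000.0              # J/(mol K); assumed melt temperature in K

    def s_cc0(x, w):
        """S_cc(0) = x(1-x) / (1 - 2 w x(1-x) / (R T)) for a regular solution."""
        return x * (1 - x) / (1 - 2 * w * x * (1 - x) / (R * T))

    x = 0.5
    print(s_cc0(x, 0.0))       # ideal mixing: x(1-x) = 0.25
    print(s_cc0(x, -20e3))     # ordering (compound-forming) melt: below ideal
    print(s_cc0(x, +10e3))     # segregating melt: above ideal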
NASA Astrophysics Data System (ADS)
Merler, Stefano
2016-09-01
Characterizing the early growth profile of an epidemic outbreak is key for predicting the likely trajectory of the number of cases and for designing adequate control measures. Epidemic profiles characterized by exponential growth have been widely observed in the past and a grounding theoretical framework for the analysis of infectious disease dynamics was provided by the pioneering work of Kermack and McKendrick [1]. In particular, exponential growth stems from the assumption that pathogens spread in homogeneous mixing populations; that is, individuals of the population mix uniformly and randomly with each other. However, this assumption was readily recognized as highly questionable [2], and sub-exponential profiles of epidemic growth have been observed in a number of epidemic outbreaks, including HIV/AIDS, foot-and-mouth disease, measles and, more recently, Ebola [3,4].
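The contrast between exponential and sub-exponential growth can be made concrete with the generalized growth model dC/dt = r C^p, where p = 1 recovers the exponential profile and p < 1 gives sub-exponential growth; the sketch below integrates both cases with illustrative parameters.

    # Generalized growth model: compare p = 1 (exponential) and p < 1.
    import numpy as np
    from scipy.integrate import solve_ivp

    def growth(t, C, r, p):
        return r * C**p           # p = 1: exponential; p < 1: sub-exponential

    for p in (1.0, 0.7):
        sol = solve_ivp(growth, (0, 30), [1.0], args=(0.3, p),
                        t_eval=np.linspace(0, 30, 121))
        print(f"p={p}: cumulative cases at t=30 -> {sol.y[0, -1]:.1f}")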
CKM pattern from localized generations in extra dimension
NASA Astrophysics Data System (ADS)
Matti, C.
2006-10-01
We revisit the issue of the quark masses and mixing angles in the framework of a large extra dimension. We consider three identical standard model families resulting from higher-dimensional fields localized on different branes embedded in a large extra dimension. Furthermore, we use a decaying profile in the bulk different from previous works. With the Higgs field also localized on a different brane, the hierarchy of masses between the families results from their different positions in the extra space. When the left-handed doublet and the right-handed singlets are localized with different couplings on the branes, we find a set of brane locations in one extra dimension which leads to the correct quark masses and mixing angles with sufficient strength of CP violation. We see that the decaying profile of the Higgs field plays a crucial role in producing the hierarchies in a rather natural way.
Recasting the theory of mosquito-borne pathogen transmission dynamics and control.
Smith, David L; Perkins, T Alex; Reiner, Robert C; Barker, Christopher M; Niu, Tianchan; Chaves, Luis Fernando; Ellis, Alicia M; George, Dylan B; Le Menach, Arnaud; Pulliam, Juliet R C; Bisanzio, Donal; Buckee, Caroline; Chiyaka, Christinah; Cummings, Derek A T; Garcia, Andres J; Gatton, Michelle L; Gething, Peter W; Hartley, David M; Johnston, Geoffrey; Klein, Eili Y; Michael, Edwin; Lloyd, Alun L; Pigott, David M; Reisen, William K; Ruktanonchai, Nick; Singh, Brajendra K; Stoller, Jeremy; Tatem, Andrew J; Kitron, Uriel; Godfray, H Charles J; Cohen, Justin M; Hay, Simon I; Scott, Thomas W
2014-04-01
Mosquito-borne diseases pose some of the greatest challenges in public health, especially in tropical and sub-tropical regions of the world. Efforts to control these diseases have been underpinned by a theoretical framework developed for malaria by Ross and Macdonald, including models, metrics for measuring transmission, and theory of control that identifies key vulnerabilities in the transmission cycle. That framework, especially Macdonald's formula for R0 and its entomological derivative, vectorial capacity, are now used to study dynamics and design interventions for many mosquito-borne diseases. A systematic review of 388 models published between 1970 and 2010 found that the vast majority adopted the Ross-Macdonald assumption of homogeneous transmission in a well-mixed population. Studies comparing models and data question these assumptions and point to the capacity to model heterogeneous, focal transmission as the most important but relatively unexplored component in current theory. Fine-scale heterogeneity causes transmission dynamics to be nonlinear, and poses problems for modeling, epidemiology and measurement. Novel mathematical approaches show how heterogeneity arises from the biology and the landscape on which the processes of mosquito biting and pathogen transmission unfold. Emerging theory focuses attention on the ecological and social context for mosquito blood feeding, the movement of both hosts and mosquitoes, and the relevant spatial scales for measuring transmission and for modeling dynamics and control.
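Macdonald's vectorial capacity, cited above, has a standard closed form; the sketch below evaluates it and its strong sensitivity to daily mosquito survival. Parameter values are round illustrative numbers, not estimates for any particular setting.

    # Macdonald-style vectorial capacity: C = m a^2 p^n / (-ln p), with m
    # mosquitoes per host, a the daily biting rate, p daily survival and
    # n the extrinsic incubation period in days.
    import math

    def vectorial_capacity(m, a, p, n):
        return m * a**2 * p**n / (-math.log(p))

    print(vectorial_capacity(m=10, a=0.3, p=0.9, n=10))
    # p enters both p**n and 1/(-ln p), hence the leverage of adult survival:
    for p in (0.80, 0.90, 0.95):
        print(p, round(vectorial_capacity(10, 0.3, p, 10), 2))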
The Role of Wakes in Modelling Tidal Current Turbines
NASA Astrophysics Data System (ADS)
Conley, Daniel; Roc, Thomas; Greaves, Deborah
2010-05-01
The eventual proper development of arrays of Tidal Current Turbines (TCT) will require a balance which maximizes power extraction while minimizing environmental impacts. Idealized analytical analogues and simple 2-D models are useful tools for investigating questions of a general nature but do not represent a practical tool for application to realistic cases. Some form of 3-D numerical simulation will be required for such applications, and the current project is designed to develop a numerical decision-making tool for use in planning large scale TCT projects. The project is predicated on the use of an existing regional ocean modelling framework (the Regional Ocean Modelling System - ROMS) which is modified to enable the user to account for the effects of TCTs. In such a framework, where mixing processes are highly parametrized, the fidelity of the quantitative results is critically dependent on the parameter values utilized. In light of the early stage of TCT development and the lack of field scale measurements, the calibration of such a model is problematic. In the absence of explicit calibration data sets, the device wake structure has been identified as an efficient feature for model calibration. This presentation will discuss efforts to design an appropriate calibration scheme focused on wake decay; the motivation for this approach, the techniques applied, validation results from simple test cases, and limitations will be presented.
Hopkins, John B; Ferguson, Jake M; Tyers, Daniel B; Kurle, Carolyn M
2017-01-01
Past research indicates that whitebark pine seeds are a critical food source for Threatened grizzly bears (Ursus arctos) in the Greater Yellowstone Ecosystem (GYE). In recent decades, whitebark pine forests have declined markedly due to pine beetle infestation, invasive blister rust, and landscape-level fires. To date, no study has reliably estimated the contribution of whitebark pine seeds to the diets of grizzlies through time. We used stable isotope ratios (expressed as δ13C, δ15N, and δ34S values) measured in grizzly bear hair and their major food sources to estimate the diets of grizzlies sampled in Cooke City Basin, Montana. We found that stable isotope mixing models that included different combinations of stable isotope values for bears and their foods generated similar proportional dietary contributions. Estimates generated by our top model suggest that whitebark pine seeds (35±10%) and other plant foods (56±10%) were more important than meat (9±8%) to grizzly bears sampled in the study area. Stable isotope values measured in bear hair collected elsewhere in the GYE and North America support our conclusions about plant-based foraging. We recommend that researchers consider model selection when estimating the diets of animals using stable isotope mixing models. We also urge researchers to use the new statistical framework described here to estimate the dietary responses of grizzlies to declines in whitebark pine seeds and other important food sources through time in the GYE (e.g., cutthroat trout), as such information could be useful in predicting how the population will adapt to future environmental change.
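A hedged sketch of the isotope mixing idea (a least-squares stand-in, not the authors' Bayesian framework): diet proportions recovered from three isotope signatures by non-negative least squares, with the sum-to-one constraint imposed through a heavily weighted extra row. The source and hair values are invented for illustration, not study data.

    # Non-negative least-squares diet estimate from isotope signatures.
    import numpy as np
    from scipy.optimize import nnls

    sources = np.array([[-22.0, 3.0, 4.0],    # whitebark pine seeds (invented)
                        [-26.0, 1.5, 6.0],    # other plant foods (invented)
                        [-19.0, 7.0, 2.0]])   # meat (invented)
    hair = np.array([-24.2, 2.4, 5.1])        # mixture signature (invented)

    A = np.vstack([sources.T, 100.0 * np.ones(3)])  # isotope rows + sum-to-one row
    b = np.append(hair, 100.0)

    p, _ = nnls(A, b)                         # p >= 0 enforced by nnls
    print(p.round(3), p.sum().round(3))       # proportions, summing to ~1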
NASA Astrophysics Data System (ADS)
Abdullatif, O.; Yassin, M.
2012-04-01
This study investigates the distribution of lithofacies types in the carbonate and siliciclastic rocks of the Dam and Hofuf Formations in eastern Saudi Arabia. The shallow burial of these formations and limited post-depositional changes allowed significant preservation of porosity at outcrop scale. The mixed carbonate-siliciclastic succession represents important reservoirs in the Mesozoic and Tertiary stratigraphic succession of the Arabian Plate. This study integrates sedimentological, stratigraphical, and lithofacies field data to model the spatial distribution of facies in this shallow marine and fluvial depositional setting. The Dam Formation is characterized by a very high percentage of grain-dominated textures representing high- to low-energy intertidal deposits within a mixed carbonate and siliciclastic succession. The middle Miocene Dam section is dominated by intraclast, ooid and peloid grainstones. The Hofuf Formation represents fluvial channel and overbank facies characterized by mudclast- and gravel-rich erosive bases overlain by pebbly conglomerates, which pass upward into medium- to very coarse-grained massive, horizontally stratified and trough cross-stratified sandstone facies. Lithological stratigraphic section data distributed over the Al-lidam escarpment were correlated on the basis of facies types and sequences. This allowed mapping and building a framework for modeling the spatial distribution of the carbonate and siliciclastic facies in the area. The geological model shows variations in the facies distribution patterns, which mainly reflect both dynamic and static depositional controls on facies types. The model may act as a guide for facies distribution and provide better understanding and prediction of reservoir quality and architecture in stratigraphically equivalent carbonate-siliciclastic successions in the subsurface.
NASA Astrophysics Data System (ADS)
Das, Debottam; Ghosh, Kirtiman; Mitra, Manimala; Mondal, Subhadeep
2018-01-01
We consider an extension of the standard model (SM) augmented by two neutral singlet fermions per generation and a leptoquark. In order to generate the light neutrino masses and mixing, we incorporate the inverse seesaw mechanism. Right-handed (RH) neutrino production in this model is significantly larger than in the conventional inverse seesaw scenario. We analyze the different collider signatures of this model and find that final states with three or more leptons, multiple jets, and at least one b-tagged and/or τ-tagged jet can probe a larger RH neutrino mass scale. We also propose a same-sign dilepton signal region associated with multiple jets and missing energy that can be used to distinguish the present scenario from the usual inverse-seesaw-extended SM.
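For orientation, the one-generation inverse seesaw scaling m_nu ~ mu mD^2 / M^2 shows how a small lepton-number-violating parameter mu reconciles sub-eV neutrino masses with a TeV-scale, collider-accessible RH neutrino mass. The numbers below are illustrative only.

    # One-generation inverse seesaw estimate (illustrative parameter values).
    mD = 100.0       # Dirac mass (GeV)
    M = 1000.0       # RH neutrino mass scale (GeV)
    mu = 1e-9        # lepton-number-violating parameter (GeV), ~1 eV

    m_nu_GeV = mu * mD**2 / M**2
    print(f"m_nu ~ {m_nu_GeV * 1e9:.3f} eV")  # ~0.01 eV with a TeV-scale M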
Marwan, Wolfgang; Sujatha, Arumugam; Starostzik, Christine
2005-10-21
We reconstruct the regulatory network controlling commitment and sporulation of Physarum polycephalum from experimental results using a hierarchical Petri Net-based modelling and simulation framework. The stochastic Petri Net consistently describes the structure and simulates the dynamics of the molecular network as analysed by genetic, biochemical and physiological experiments within a single coherent model. The Petri Net then is extended to simulate time-resolved somatic complementation experiments performed by mixing the cytoplasms of mutants altered in the sporulation response, to systematically explore the network structure and to probe its dynamics. This reverse engineering approach presumably can be employed to explore other molecular or genetic signalling systems where the activity of genes or their products can be experimentally controlled in a time-resolved manner.
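A minimal Gillespie-style stochastic Petri net simulator is sketched below with a two-transition toy net (not the actual sporulation network): places carry token counts, and transitions fire with mass-action propensities.

    # Toy stochastic Petri net: precursor -> committed -> removed.
    import numpy as np

    rng = np.random.default_rng(2)
    marking = np.array([50, 0])                  # tokens: [precursor, committed]
    pre  = np.array([[1, 0], [0, 1]])            # tokens consumed by each transition
    post = np.array([[0, 1], [0, 0]])            # tokens produced by each transition
    rates = np.array([0.1, 0.02])

    t = 0.0
    while t < 100.0:
        a = rates * np.prod(marking**pre, axis=1)   # mass-action propensities
        if a.sum() == 0.0:
            break
        t += rng.exponential(1.0 / a.sum())         # time to next firing
        j = rng.choice(len(rates), p=a / a.sum())   # which transition fires
        marking = marking - pre[j] + post[j]

    print(t, marking)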
Study of nuclear structure of 76-86Sr isotopes in the pn interacting boson model
NASA Astrophysics Data System (ADS)
Saxena, M.; Gupta, J. B.; Mandal, S.
2015-08-01
The proton-neutron interacting boson model (IBM-2) has been used to make a systematic study of strontium isotopes in the mass region A ~ 80 with 38 ≤ N ≤ 48 and Z = 38. The calculations were performed with the three-term Talmi-Otsuka general Hamiltonian in the framework of the neutron-proton version of the interacting boson model. The yrast level energies are reproduced, and the beta- and gamma-band energy levels also match well. The reduced transition probabilities were calculated and found to be in agreement with the experimental values. In addition, the g-factor for the 2_1^+ state was evaluated. Possible candidates for mixed-symmetry states were also predicted for several nuclei in this isotopic chain.
Ultem®/ZIF-8 mixed matrix membranes for gas separation: Transport and physical properties
Eiras, Daniel; Labreche, Ying; Pessan, Luiz Antonio
2016-02-19
Mixed matrix membranes are promising options for improving gas separation processes. Zeolitic imidazolate frameworks (ZIFs) have a porous structure similar to conventional zeolites, being capable in principle of separating gases based on their differences in kinetic diameter while offering the advantage of having a partial organic character. This partial organic nature improves the compatibility between the sieve and the polymer, and a combination of the mentioned characteristics makes these hybrid materials interesting for the preparation of mixed matrix gas separation membranes. In this context the present work reports the preparation of Ultem®/ZIF-8 mixed matrix membranes and their permeabilities to pure CO2, N2 and CH4 gases. A significant increase in permeability together with an increase in CO2/N2 selectivity was observed for the mixed matrix systems as compared to the properties of neat Ultem®. Sorption results allowed us to speculate that the ZIF-8 framework is not completely dimensionally stable, which influences the separation process by allowing gases with a larger kinetic diameter than its nominal aperture to be sorbed and to diffuse through the crystal. Lastly, sorption and diffusion selectivities indicate that the higher separation performance of the mixed matrix membranes is governed by the diffusion process associated with the influence of gas molecule geometry.
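A common first-order check for such composites is the Maxwell model for mixed matrix membrane permeability, sketched below with placeholder permeabilities rather than the measured Ultem®/ZIF-8 values.

    # Maxwell-model effective permeability of a dilute filler dispersion:
    # Pc = continuous (polymer) phase, Pd = dispersed (filler) phase,
    # phi = filler volume fraction.
    def maxwell(Pc, Pd, phi):
        num = Pd + 2 * Pc - 2 * phi * (Pc - Pd)
        den = Pd + 2 * Pc + phi * (Pc - Pd)
        return Pc * num / den

    P_co2 = maxwell(Pc=1.3, Pd=100.0, phi=0.2)   # placeholder permeabilities
    P_n2 = maxwell(Pc=0.05, Pd=5.0, phi=0.2)
    print(P_co2, P_n2, P_co2 / P_n2)             # composite CO2/N2 selectivity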
A Mixed-Methods Research Framework for Healthcare Process Improvement.
Bastian, Nathaniel D; Munoz, David; Ventura, Marta
2016-01-01
The healthcare system in the United States is spiraling out of control due to ever-increasing costs without significant improvements in quality, access to care, satisfaction, and efficiency. Efficient workflow is paramount to improving healthcare value while maintaining the utmost standards of patient care and provider satisfaction in high stress environments. This article provides healthcare managers and quality engineers with a practical healthcare process improvement framework to assess, measure and improve clinical workflow processes. The proposed mixed-methods research framework integrates qualitative and quantitative tools to foster the improvement of processes and workflow in a systematic way. The framework consists of three distinct phases: 1) stakeholder analysis, 2a) survey design, 2b) time-motion study, and 3) process improvement. The proposed framework is applied to the pediatric intensive care unit of the Penn State Hershey Children's Hospital. The implementation of this methodology led to identification and categorization of different workflow tasks and activities into both value-added and non-value added in an effort to provide more valuable and higher quality patient care. Based upon the lessons learned from the case study, the three-phase methodology provides a better, broader, leaner, and holistic assessment of clinical workflow. The proposed framework can be implemented in various healthcare settings to support continuous improvement efforts in which complexity is a daily element that impacts workflow. We proffer a general methodology for process improvement in a healthcare setting, providing decision makers and stakeholders with a useful framework to help their organizations improve efficiency. Published by Elsevier Inc.
Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam
2017-10-27
Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data, which, by releasing the homogeneity restriction of nonlinear mixed-effects (NLME) models, can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance, and be associated with clinically important time-to-event data. This article develops a joint modeling approach to a finite mixture of NLME models for longitudinal data and a proportional hazards Cox model for time-to-event data, linked by individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and a naive two-step model, in which the finite mixture model and the Cox model are fitted separately.
Development and validation of a regional coupled forecasting system for S2S forecasts
NASA Astrophysics Data System (ADS)
Sun, R.; Subramanian, A. C.; Hoteit, I.; Miller, A. J.; Ralph, M.; Cornuelle, B. D.
2017-12-01
Accurate and efficient forecasting of oceanic and atmospheric circulation is essential for a wide variety of high-impact societal needs, including: weather extremes; environmental protection and coastal management; management of fisheries; marine conservation; water resources; and renewable energy. Effective forecasting relies on high model fidelity and accurate initialization of the models with the observed state of the ocean-atmosphere-land coupled system. A regional coupled ocean-atmosphere model, with the Weather Research and Forecasting (WRF) model and the MITGCM ocean model coupled using the ESMF (Earth System Modeling Framework) coupling framework, is developed to resolve mesoscale air-sea feedbacks. The regional coupled model allows oceanic mixed layer heat and momentum to interact with the atmospheric boundary layer dynamics at mesoscale and submesoscale spatiotemporal regimes, thus leading to feedbacks which are otherwise not resolved in coarse-resolution global coupled forecasting systems or regional uncoupled forecasting systems. The model is tested in two scenarios: the mesoscale-eddy-rich Red Sea and Western Indian Ocean region, and the mesoscale eddies and fronts of the California Current System. Recent studies show evidence for air-sea interactions involving the oceanic mesoscale in these two regions which can enhance predictability on subseasonal timescales. We will present results from this newly developed regional coupled ocean-atmosphere model for forecasts over the Red Sea region as well as the California Current region. The forecasts will be validated against in situ observations in the region as well as reanalysis fields.
Simulating squeeze flows in multiaxial laminates using an improved TIF model
NASA Astrophysics Data System (ADS)
Ibañez, R.; Abisset-Chavanne, Emmanuelle; Chinesta, Francisco
2017-10-01
Thermoplastic composites are widely used in structural parts. In this paper attention is paid to the squeeze flow of continuous fiber laminates. In the case of unidirectional prepregs, the ply constitutive equation is modeled as a transversally isotropic fluid that must satisfy both fiber inextensibility and fluid incompressibility. When the laminate is squeezed, the flow kinematics exhibit a complex dependency along the laminate thickness, requiring a detailed velocity description through the thickness. In a former work, a solution making use of an in-plane-out-of-plane separated representation within the PGD (Proper Generalized Decomposition) framework was successfully accomplished, with both kinematic constraints (inextensibility and incompressibility) introduced using a penalty formulation to circumvent the LBB conditions. However, such a formulation makes the calculation of fiber tractions and compression forces difficult, the latter being required in rheological characterizations. In this paper the former penalty formulation is replaced by a mixed formulation that makes use of two Lagrange multipliers while addressing the LBB stability conditions within the separated representation framework, questions never addressed until now.
Development of a robust framework for controlling high performance turbofan engines
NASA Astrophysics Data System (ADS)
Miklosovic, Robert
This research involves the development of a robust framework for controlling complex and uncertain multivariable systems. Where mathematical modeling is often tedious or inaccurate, the new method uses an extended state observer (ESO) to estimate and cancel dynamic information in real time and dynamically decouple the system. As a result, controller design and tuning become transparent as the number of required model parameters is reduced. Much research has been devoted to the application of modern multivariable control techniques to aircraft engines. However, few, if any, have been implemented on an operational aircraft, partially due to the difficulty in tuning the controller for satisfactory performance. The new technique is applied to a modern two-spool, high-pressure ratio, low-bypass turbofan with mixed-flow afterburning. A realistic Modular Aero-Propulsion System Simulation (MAPSS) package, developed by NASA, is used to demonstrate the new design process and compare its performance with that of a supplied nominal controller. This approach is expected to reduce gain scheduling over the full operating envelope of the engine and allow a controller to be tuned for engine-to-engine variations.
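The core of the approach is the extended state observer; the sketch below implements a linear ESO for a generic first-order plant, treating the unknown dynamics as an extra state that the control law cancels. Gains and the plant are illustrative, not the MAPSS engine model.

    # Linear ESO for y' = f(t) + b*u: estimate y and the lumped disturbance f.
    import numpy as np

    dt, b = 0.001, 1.0
    w_o = 50.0                        # observer bandwidth (rad/s), illustrative
    l1, l2 = 2 * w_o, w_o**2          # bandwidth-parameterized observer gains

    y = u = y_hat = f_hat = 0.0
    for k in range(5000):
        f = np.sin(2 * k * dt)               # unknown dynamics + disturbance
        y += dt * (f + b * u)                # true plant: y' = f + b u
        e = y - y_hat
        y_hat += dt * (f_hat + b * u + l1 * e)
        f_hat += dt * (l2 * e)               # disturbance as an extended state
        u = (-5.0 * y_hat - f_hat) / b       # cancel f_hat, simple feedback

    print(f"final |y| = {abs(y):.4f}")       # regulated near zero despite f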
Integrated optimization of planetary rover layout and exploration routes
NASA Astrophysics Data System (ADS)
Lee, Dongoo; Ahn, Jaemyung
2018-01-01
This article introduces an optimization framework for the integrated design of a planetary surface rover and its exploration route that is applicable to the initial phase of a planetary exploration campaign composed of multiple surface missions. The scientific capability and the mobility of a rover are modelled as functions of the science weight fraction, a key parameter characterizing the rover. The proposed problem is formulated as a mixed-integer nonlinear program that maximizes the sum of profits obtained through a planetary surface exploration mission by simultaneously determining the science weight fraction of the rover, the sites to visit and their visiting sequences under resource consumption constraints imposed on each route and collectively on a mission. A solution procedure for the proposed problem composed of two loops (the outer loop and the inner loop) is developed. The results of test cases demonstrating the effectiveness of the proposed framework are presented.
NASA Astrophysics Data System (ADS)
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
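A skeleton of the mixed linear/nonlinear strategy (assumed structure, with a toy forward operator standing in for elastic dislocation Green's functions): Metropolis sampling over a geometry parameter, with slip solved analytically by least squares at every proposal.

    # Metropolis over "dip" with analytic least-squares slip at each step.
    import numpy as np

    rng = np.random.default_rng(3)

    def G(dip, n_obs=20, n_slip=4):
        x = np.linspace(0.1, 2.0, n_obs)[:, None]            # station positions
        depth = np.arange(1, n_slip + 1) * np.sin(np.deg2rad(dip))
        return depth / (x**2 + depth**2)                     # toy Green's functions

    def neg_log_post(dip, d, sigma=0.01):
        A = G(dip)
        slip, *_ = np.linalg.lstsq(A, d, rcond=None)         # analytic slip step
        return np.sum((A @ slip - d) ** 2) / (2 * sigma**2)

    d = G(40.0) @ np.array([1.0, 0.5, 0.2, 0.1]) + 0.01 * rng.normal(size=20)

    dip, chain = 60.0, []
    for _ in range(3000):
        prop = dip + 5.0 * rng.normal()
        if 5.0 < prop < 85.0 and np.log(rng.uniform()) < (
                neg_log_post(dip, d) - neg_log_post(prop, d)):
            dip = prop
        chain.append(dip)

    print(np.mean(chain[500:]), np.std(chain[500:]))         # posterior on dip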
Chen, Gang; Taylor, Paul A.; Shin, Yong-Wook; Reynolds, Richard C.; Cox, Robert W.
2016-01-01
It has been argued that naturalistic conditions in FMRI studies provide a useful paradigm for investigating perception and cognition through a synchronization measure, inter-subject correlation (ISC). However, one analytical stumbling block has been the fact that the ISC values associated with each single subject are not independent, and our previous paper (Chen et al., 2016) used simulations and analyses of real data to show that the methodologies adopted in the literature do not have the proper control for false positives. In the same paper, we proposed nonparametric subject-wise bootstrapping and permutation testing techniques for one and two groups, respectively, which account for the correlation structure, and these greatly outperformed the prior methods in controlling the false positive rate (FPR); that is, subject-wise bootstrapping (SWB) worked relatively well for both cases with one and two groups, and subject-wise permutation (SWP) testing was virtually ideal for group comparisons. Here we seek to explicate and adopt a parametric approach through linear mixed-effects (LME) modeling for studying the ISC values, building on the previous correlation framework, with the benefit that the LME platform offers wider adaptability, more powerful interpretations, and quality control checking capability than nonparametric methods. We describe both theoretical and practical issues involved in the modeling and the manner in which LME with crossed random effects (CRE) modeling is applied. A data-doubling step further allows us to conveniently track the subject index, and achieve easy implementations. We pit the LME approach against the best nonparametric methods, and find that the LME framework achieves proper control for false positives. The new LME methodologies are shown to be both efficient and robust, and they will be added as additional options and settings in an existing open source program, 3dLME, in AFNI (http://afni.nimh.nih.gov).
A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses
Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert
2011-01-01
Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies.
MESOSCOPIC MODELING OF STOCHASTIC REACTION-DIFFUSION KINETICS IN THE SUBDIFFUSIVE REGIME
BLANC, EMILIE; ENGBLOM, STEFAN; HELLANDER, ANDREAS; LÖTSTEDT, PER
2017-01-01
Subdiffusion has been proposed as an explanation of various kinetic phenomena inside living cells. In order to facilitate large-scale computational studies of subdiffusive chemical processes, we extend a recently suggested mesoscopic model of subdiffusion into an accurate and consistent reaction-subdiffusion computational framework. Two different possible models of chemical reaction are revealed and some basic dynamic properties are derived. In certain cases those mesoscopic models have a direct interpretation at the macroscopic level as fractional partial differential equations in a bounded time interval. Through analysis and numerical experiments we estimate the macroscopic effects of reactions under subdiffusive mixing. The models display properties observed also in experiments: for a short time interval the behavior of the diffusion and the reaction is ordinary, in an intermediate interval the behavior is anomalous, and at long times the behavior is ordinary again.
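Subdiffusion of the kind targeted above can be generated by a continuous-time random walk with heavy-tailed waiting times; the sketch below estimates the anomalous MSD exponent from an ensemble of walkers. Parameters are illustrative.

    # CTRW with Pareto waiting times (tail exponent alpha < 1): MSD ~ t^alpha.
    import numpy as np

    rng = np.random.default_rng(7)
    alpha, n_walk = 0.7, 2000                     # tail exponent, walkers
    obs = np.array([1e1, 1e2, 1e3, 1e4])          # observation times
    pos = np.zeros((n_walk, len(obs)))

    for w in range(n_walk):
        x = 0.0
        t_next = 1.0 + rng.pareto(alpha)          # time of the first jump
        for i, T in enumerate(obs):
            while t_next < T:
                x += rng.choice([-1.0, 1.0])      # unit jump on a lattice
                t_next += 1.0 + rng.pareto(alpha) # heavy-tailed waiting time
            pos[w, i] = x

    msd = (pos**2).mean(axis=0)
    print(np.log10(msd[1:] / msd[:-1]))           # slopes per decade ~ alpha < 1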
Theoretical study of mixing in liquid clouds – Part 1: Classical concepts
Korolev, Alexei; Khain, Alex; Pinsky, Mark; ...
2016-07-28
The present study considers the final stages of in-cloud mixing in the framework of the classical concepts of homogeneous and extreme inhomogeneous mixing. Simple analytical relationships between basic microphysical parameters were obtained for homogeneous and extreme inhomogeneous mixing based on adiabatic considerations. It was demonstrated that during homogeneous mixing the functional relationships between the moments of the droplet size distribution hold only during the primary stage of mixing. Subsequent random mixing between already mixed parcels and undiluted cloud parcels breaks these relationships. However, during extreme inhomogeneous mixing the functional relationships between the microphysical parameters hold for both primary and subsequent mixing. The obtained relationships can be used to identify the type of mixing from in situ observations. The effectiveness of the developed method was demonstrated using in situ data collected in convective clouds. It was found that for the specific set of in situ measurements the interaction between cloudy and entrained environments was dominated by extreme inhomogeneous mixing.
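The two classical limits imply distinct microphysical signatures, sketched below in a deliberately simplified form: under extreme inhomogeneous mixing the droplet number scales with dilution at fixed radius, while under primary-stage homogeneous mixing the number is conserved and the cube of the radius carries the liquid-water reduction (the extra evaporation needed to saturate entrained air is neglected here).

    # Toy mixing diagram: droplet number N and mean volume radius r vs dilution.
    import numpy as np

    N0, r0 = 100.0, 10.0                # cm^-3, micrometres (illustrative)
    chi = np.linspace(0.0, 0.9, 10)     # entrained dry-air fraction

    N_inhom = N0 * (1 - chi)                    # extreme inhomogeneous: N drops,
    r_inhom = np.full_like(chi, r0)             # radius unchanged
    N_hom = np.full_like(chi, N0)               # homogeneous: N conserved,
    r_hom = r0 * (1 - chi) ** (1.0 / 3.0)       # radius carries the LWC loss

    for c, n, r in zip(chi, N_inhom, r_hom):
        print(f"chi={c:.1f}  N_inhom={n:6.1f}  r_hom={r:5.2f}")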
CFD-based optimization in plastics extrusion
NASA Astrophysics Data System (ADS)
Eusterholz, Sebastian; Elgeti, Stefanie
2018-05-01
This paper presents novel ideas in the numerical design of mixing elements in single-screw extruders. The actual design process is reformulated as a shape optimization problem, given some functional, but possibly inefficient, initial design. Thereby, automatic optimization can be incorporated, advancing the design process beyond the simulation-supported but still experience-based approach. This paper proposes concepts to extend a method which has been developed and validated for die design to the design of mixing elements. For simplicity, it focuses on single-phase flows only. The developed method conducts forward simulations to predict the quasi-steady melt behavior in the relevant part of the extruder. The result of each simulation is used in a black-box optimization procedure based on an efficient low-order parameterization of the geometry. To minimize user interaction, an objective function is formulated that quantifies the product's quality based on the forward simulation. This paper covers two aspects: (1) it reviews the set-up of the optimization framework as discussed in [1], and (2) it details the necessary extensions for the optimization of mixing elements in single-screw extruders. It concludes with a presentation of first advances in the unsteady flow simulation of a metering and mixing section with the SSMUM [2] using the Carreau material model.
Lagrangian particle statistics of numerically simulated shear waves
NASA Astrophysics Data System (ADS)
Kirby, J.; Briganti, R.; Brocchini, M.; Chen, Q. J.
2006-12-01
The properties of numerical solutions of various circulation models (Boussinesq-type and wave-averaged NLSWE) have been investigated on the basis of the induced horizontal flow mixing, for the case of shear waves. The mixing properties of the flow have been investigated using particle statistics, following the approach of LaCasce (2001) and Piattella et al. (2006). Both an idealized barred beach bathymetry and a test case taken from SANDYDUCK '97 have been considered. Random seeding patterns of passive tracer particles are used. The flow exhibits features similar to those discussed in the literature. Differences are also evident due both to the physics (intense longshore shear shoreward of the bar) and the procedure used to obtain the statistics (lateral conditions limit the time/space window for the longshore flow). Within the Boussinesq framework, different formulations of Boussinesq-type equations have been used and the results compared (Wei et al., 1995; Chen et al., 2003; Chen et al., 2006). Analysis based on the Eulerian velocity fields suggests a close similarity between Wei et al. (1995) and Chen et al. (2006), while examination of particle displacements and implied mixing suggests a closer behaviour between Chen et al. (2003) and Chen et al. (2006). Two distinct stages of mixing are evident in all simulations: i) the first stage ends at t
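The particle statistics referred to above reduce to simple ensemble averages; the sketch below computes absolute dispersion and pair (relative) dispersion from trajectories, using random walks as stand-ins for the simulated surf-zone drifters.

    # Single- and two-particle dispersion statistics from trajectory arrays.
    import numpy as np

    rng = np.random.default_rng(4)
    n_p, n_t = 200, 500
    traj = np.cumsum(rng.normal(scale=0.1, size=(n_p, n_t, 2)), axis=1)

    # absolute dispersion: mean square displacement from the release point
    abs_disp = np.mean(np.sum((traj - traj[:, :1]) ** 2, axis=2), axis=0)
    # relative dispersion: mean square separation of disjoint particle pairs
    sep = traj[0::2] - traj[1::2]
    rel_disp = np.mean(np.sum(sep**2, axis=2), axis=0)

    print(abs_disp[-1], rel_disp[-1])   # growth laws vs time separate regimes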
Mixed reality framework for collective motion patterns of swarms with delay coupling
NASA Astrophysics Data System (ADS)
Szwaykowska, Klementyna; Schwartz, Ira
The formation of coherent patterns in swarms of interacting self-propelled autonomous agents is an important subject for many applications within the field of distributed robotic systems. However, there are significant logistical challenges associated with testing fully distributed systems in real-world settings. In this paper, we provide a rigorous theoretical justification for the use of mixed-reality experiments as a stepping stone to fully physical testing of distributed robotic systems. We also model and experimentally realize a mixed-reality large-scale swarm of delay-coupled agents. Our analyses, assuming agents communicating over an Erdős-Rényi network, demonstrate the existence of stable coherent patterns that can be achieved only with delay coupling and that are robust to decreasing network connectivity and heterogeneity in agent dynamics. We show how the bifurcation structure for emergence of different patterns changes with heterogeneity in agent acceleration capabilities and limited connectivity in the network as a function of coupling strength and delay. Our results are verified through simulation as well as preliminary experimental results of delay-induced pattern formation in a mixed-reality swarm. K. S. was a National Research Council postdoctoral fellow. I.B.S. was supported by U.S. Naval Research Laboratory funding (N0001414WX00023) and the Office of Naval Research (N0001414WX20610).
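A hedged sketch of delay-coupled swarming dynamics (all-to-all coupling in place of an Erdős-Rényi graph, illustrative parameters): agents self-propel toward unit speed while being attracted to the delayed swarm center, the mechanism behind the coherent patterns discussed above.

    # Euler integration of a delay-coupled swarm with a delayed-position buffer.
    import numpy as np

    rng = np.random.default_rng(5)
    n, dt, tau, a = 30, 0.01, 1.0, 1.0    # agents, step, delay, coupling
    lag = int(tau / dt)
    pos = rng.normal(size=(n, 2))
    vel = rng.normal(size=(n, 2))
    hist = [pos.copy()] * (lag + 1)       # buffer of past positions

    for _ in range(20000):
        center = hist[0].mean(axis=0)     # delayed swarm center
        speed2 = np.sum(vel**2, axis=1, keepdims=True)
        acc = (1.0 - speed2) * vel - a * (pos - center)
        vel += dt * acc
        pos += dt * vel
        hist.append(pos.copy())
        hist.pop(0)

    print(np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())  # pattern radius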
NASA Astrophysics Data System (ADS)
Lowe, Rachel; Bailey, Trevor C.; Stephenson, David B.; Graham, Richard J.; Coelho, Caio A. S.; Sá Carvalho, Marilia; Barcellos, Christovam
2011-03-01
This paper considers the potential for using seasonal climate forecasts in developing an early warning system for dengue fever epidemics in Brazil. In the first instance, a generalised linear model (GLM) is used to select climate and other covariates which are both readily available and prove significant in prediction of confirmed monthly dengue cases based on data collected across the whole of Brazil for the period January 2001 to December 2008 at the microregion level (typically consisting of one large city and several smaller municipalities). The covariates explored include temperature and precipitation data on a 2.5°×2.5° longitude-latitude grid with time lags relevant to dengue transmission, an El Niño Southern Oscillation index and other relevant socio-economic and environmental variables. A negative binomial model formulation is adopted in this model selection to allow for extra-Poisson variation (overdispersion) in the observed dengue counts caused by unknown/unobserved confounding factors and possible correlations in these effects in both time and space. Subsequently, the selected global model is refined in the context of the South East region of Brazil, where dengue predominates, by reverting to a Poisson framework and explicitly modelling the overdispersion through a combination of unstructured and spatio-temporal structured random effects. The resulting spatio-temporal hierarchical model (or GLMM—generalised linear mixed model) is implemented via a Bayesian framework using Markov Chain Monte Carlo (MCMC). Dengue predictions are found to be enhanced both spatially and temporally when using the GLMM and the Bayesian framework allows posterior predictive distributions for dengue cases to be derived, which can be useful for developing a dengue alert system. Using this model, we conclude that seasonal climate forecasts could have potential value in helping to predict dengue incidence months in advance of an epidemic in South East Brazil.
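The covariate-selection step described above rests on a negative binomial GLM; the sketch below fits one to synthetic monthly counts, with lagged climate covariates standing in for the Brazilian surveillance and climate fields.

    # Negative binomial GLM of monthly counts on lagged climate covariates.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 96                                     # months, standing in for 2001-2008
    temp_lag2 = rng.normal(size=n)             # temperature lagged 2 months
    precip_lag1 = rng.normal(size=n)           # precipitation lagged 1 month
    mu = np.exp(2.0 + 0.5 * temp_lag2 + 0.3 * precip_lag1)
    y = rng.negative_binomial(5, 5.0 / (5.0 + mu))   # overdispersed counts

    X = sm.add_constant(np.column_stack([temp_lag2, precip_lag1]))
    fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
    print(fit.params)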
Quantum-like dynamics applied to cognition: a consideration of available options
NASA Astrophysics Data System (ADS)
Broekaert, Jan; Basieva, Irina; Blasiak, Pawel; Pothos, Emmanuel M.
2017-10-01
Quantum probability theory (QPT) has provided a novel, rich mathematical framework for cognitive modelling, especially for situations which appear paradoxical from classical perspectives. This work concerns the dynamical aspects of QPT, as relevant to cognitive modelling. We aspire to shed light on how the mind's driving potentials (encoded in Hamiltonian and Lindbladian operators) impact the evolution of a mental state. Some existing QPT cognitive models do employ dynamical aspects when considering how a mental state changes with time, but it is often the case that several simplifying assumptions are introduced. What kind of modelling flexibility does QPT dynamics offer without any simplifying assumptions and is it likely that such flexibility will be relevant in cognitive modelling? We consider a series of nested QPT dynamical models, constructed with a view to accommodate results from a simple, hypothetical experimental paradigm on decision-making. We consider Hamiltonians more complex than the ones which have traditionally been employed with a view to explore the putative explanatory value of this additional complexity. We then proceed to compare simple models with extensions regarding both the initial state (e.g. a mixed state with a specific orthogonal decomposition; a general mixed state) and the dynamics (by introducing Hamiltonians which destroy the separability of the initial structure and by considering an open-system extension). We illustrate the relations between these models mathematically and numerically. This article is part of the themed issue 'Second quantum revolution: foundational questions'.
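Open-system dynamics of the kind invoked above are governed by the Lindblad equation; the sketch below hand-codes its Euler integration for a driven, dephasing qubit with generic placeholder operators, not any specific cognitive model.

    # Euler integration of drho/dt = -i[H, rho] + L rho L+ - (1/2){L+ L, rho}.
    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    H = 0.5 * sx                      # coherent driving (placeholder Hamiltonian)
    L = np.sqrt(0.1) * sz             # dephasing Lindblad operator (placeholder)

    rho = np.array([[1, 0], [0, 0]], dtype=complex)   # pure initial state
    dt = 0.001
    for _ in range(5000):
        comm = H @ rho - rho @ H
        Ld = L.conj().T
        diss = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
        rho = rho + dt * (-1j * comm + diss)          # Euler step of Lindblad eq.

    print(np.real(np.trace(rho)), np.real(rho[0, 0]))  # trace ~1; populations mix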
A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.
Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa
2018-02-01
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
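In the same spirit as the formulation above (though not the authors' exact model), the sketch below encodes continuous electrode currents with binary on/off switches as a MILP via scipy.optimize.milp (SciPy >= 1.9), maximizing target activation under a safety cap on a non-target region; all field coefficients are invented placeholders.

    # MILP: maximize target-region field subject to a non-target safety cap
    # and a limit on the number of active electrodes.
    import numpy as np
    from scipy.optimize import Bounds, LinearConstraint, milp

    a_target = np.array([0.9, 0.4, 0.7])    # field at target per unit current
    a_avoid = np.array([0.2, 0.8, 0.3])     # field at a region to spare

    # variables: three currents x (mA) then three on/off binaries z
    c = np.concatenate([-a_target, np.zeros(3)])        # maximize target field

    link = np.hstack([np.eye(3), -2.0 * np.eye(3)])     # x_i <= 2 z_i
    safety = np.hstack([a_avoid, np.zeros(3)])[None, :] # avoid-region cap
    count = np.hstack([np.zeros(3), np.ones(3)])[None, :]  # <= 2 electrodes on
    A = np.vstack([link, safety, count])
    ub = np.concatenate([np.zeros(3), [1.0, 2.0]])

    res = milp(c, constraints=LinearConstraint(A, -np.inf, ub),
               integrality=np.array([0, 0, 0, 1, 1, 1]),
               bounds=Bounds(0.0, [2.0, 2.0, 2.0, 1.0, 1.0, 1.0]))
    print(res.x, -res.fun)    # res.status also reports infeasibility conclusively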
Surface functionalization of metal organic frameworks for mixed matrix membranes
Albenze, Erik; Lartey, Michael; Li, Tao; Luebke, David R.; Nulwala, Hunaid B.; Rosi, Nathaniel L.; Venna, Surendar R.
2017-03-21
Mixed matrix membranes (MMMs) are composite membranes for gas separation comprising inorganic filler particles, in particular metal organic frameworks (MOFs), dispersed throughout a polymer matrix of one or more polymers. This disclosure is directed to MOFs functionalized through the addition of a pendant functional group, in order to improve interaction with the surrounding polymer matrix in an MMM. The improved interaction aids in avoiding defects in the MMM due to incompatible interfaces between the polymer matrix and the MOF particles, in turn improving the mechanical and gas separation properties of the MMM. The disclosure is also directed to an MMM incorporating the surface-functionalized MOF.
Monolithic Gyroidal Mesoporous Mixed Titanium–Niobium Nitrides
2015-01-01
Mesoporous transition metal nitrides are interesting materials for energy conversion and storage applications due to their conductivity and durability. We present ordered mixed titanium–niobium (8:2, 1:1) nitrides with gyroidal network structures synthesized from triblock terpolymer structure-directed mixed oxides. The materials retain both macroscopic integrity and mesoscale ordering despite heat treatment up to 600 °C, without a rigid carbon framework as a support. Furthermore, the gyroidal lattice parameters were varied by changing polymer molar mass. This synthesis strategy may prove useful in generating a variety of monolithic ordered mesoporous mixed oxides and nitrides for electrode and catalyst materials.
Coordinating AgMIP data and models across global and regional scales for 1.5°C and 2.0°C assessments
NASA Astrophysics Data System (ADS)
Rosenzweig, Cynthia; Ruane, Alex C.; Antle, John; Elliott, Joshua; Ashfaq, Muhammad; Chatta, Ashfaq Ahmad; Ewert, Frank; Folberth, Christian; Hathie, Ibrahima; Havlik, Petr; Hoogenboom, Gerrit; Lotze-Campen, Hermann; MacCarthy, Dilys S.; Mason-D'Croz, Daniel; Contreras, Erik Mencos; Müller, Christoph; Perez-Dominguez, Ignacio; Phillips, Meridel; Porter, Cheryl; Raymundo, Rubi M.; Sands, Ronald D.; Schleussner, Carl-Friedrich; Valdivia, Roberto O.; Valin, Hugo; Wiebe, Keith
2018-05-01
The Agricultural Model Intercomparison and Improvement Project (AgMIP) has developed novel methods for Coordinated Global and Regional Assessments (CGRA) of agriculture and food security in a changing world. The present study aims to perform a proof of concept of the CGRA to demonstrate advantages and challenges of the proposed framework. This effort responds to the request by the UN Framework Convention on Climate Change (UNFCCC) for the implications of limiting global temperature increases to 1.5°C and 2.0°C above pre-industrial conditions. The protocols for the 1.5°C/2.0°C assessment establish explicit and testable linkages across disciplines and scales, connecting outputs and inputs from the Shared Socio-economic Pathways (SSPs), Representative Agricultural Pathways (RAPs), Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI) and Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble scenarios, global gridded crop models, global agricultural economics models, site-based crop models and within-country regional economics models. The CGRA consistently links disciplines, models and scales in order to track the complex chain of climate impacts and identify key vulnerabilities, feedbacks and uncertainties in managing future risk. CGRA proof-of-concept results show that, at the global scale, there are mixed areas of positive and negative simulated wheat and maize yield changes, with declines in some breadbasket regions, at both 1.5°C and 2.0°C. Declines are especially evident in simulations that do not take into account direct CO2 effects on crops. These projected global yield changes mostly resulted in increases in prices and areas of wheat and maize in two global economics models. Regional simulations for 1.5°C and 2.0°C using site-based crop models had mixed results depending on the region and the crop. In conjunction with price changes from the global economics models, productivity declines in the Punjab, Pakistan, resulted in an increase in vulnerable households and the poverty rate. This article is part of the theme issue 'The Paris Agreement: understanding the physical and social challenges for a warming world of 1.5°C above pre-industrial levels'.
ERIC Educational Resources Information Center
Kerrigan, Monica Reid
2014-01-01
This convergent parallel design mixed methods case study of four community colleges explores the relationship between organizational capacity and implementation of data-driven decision making (DDDM). The article also illustrates purposive sampling using replication logic for cross-case analysis and the strengths and weaknesses of quantitizing…
ERIC Educational Resources Information Center
Mbella, Kinge Keka
2012-01-01
Mixed-format assessments are increasingly being used in large scale standardized assessments to measure a continuum of skills ranging from basic recall to higher order thinking skills. These assessments are usually comprised of a combination of (a) multiple-choice items which can be efficiently scored, have stable psychometric properties, and…
ERIC Educational Resources Information Center
McLafferty, Charles L., Jr.; Slate, John R.; Onwuegbuzie, Anthony J.
2010-01-01
Quantitative research dominates published literature in the helping professions. Mixed methods research, which integrates quantitative and qualitative methodologies, has received a lukewarm reception. The authors address the iterative separation that infuses theory, praxis, philosophy, methodology, training, and public perception and propose a…
Mindful Leaders in Highly Effective Schools: A Mixed-Method Application of Hoy's M-Scale
ERIC Educational Resources Information Center
Kearney, W. Sean; Kelsey, Cheryl; Herrington, David
2013-01-01
This article presents a mixed-method study utilizing teacher ratings of principal mindfulness from 149 public schools in Texas and follow-up qualitative data analysis through semi-structured interviews conducted with the top 10 percent of principals identified as mindful. This research is based on the theoretical framework of mindfulness as…
Zeolite-imidazolate framework (ZIF-8) membrane synthesis on a mixed-matrix substrate.
Barankova, Eva; Pradeep, Neelakanda; Peinemann, Klaus-Viktor
2013-10-21
A thin, dense, compact and hydrogen selective ZIF-8 membrane was synthesized on a polymer/metal oxide mixed-matrix support by a secondary seeding method. The new concept of incorporating ZnO particles into the support and PDMS coating of the ZIF-8 layer is introduced to improve the preparation of ZIF-polymer composite membranes.
NASA Astrophysics Data System (ADS)
Waldman, Robin; Somot, Samuel; Herrmann, Marine; Bosse, Anthony; Caniaux, Guy; Estournel, Claude; Houpert, Loic; Prieur, Louis; Sevault, Florence; Testor, Pierre
2017-02-01
The northwestern Mediterranean Sea is a well-observed ocean deep convection site. Winter 2012-2013 was an intense and intensely documented dense water formation (DWF) event. We evaluate this DWF event in an ensemble configuration of the regional ocean model NEMOMED12. We then assess for the first time the impact of ocean intrinsic variability on DWF with a novel perturbed initial state ensemble method. Finally, we identify the main physical mechanisms driving water mass transformations. NEMOMED12 reproduces accurately the deep convection chronology between late January and March, its location off the Gulf of Lions (although with a southward shift) and its magnitude. It fails to reproduce the salinification and warming of the Western Mediterranean Deep Waters, consistent with too strong a surface heat loss. The ocean intrinsic variability modulates half of the DWF area, especially in the open sea where the bathymetry slope is low. It modulates the integrated DWF rate only marginally (3-5%), but its increase with time suggests its impact could be larger at interannual timescales. We conclude that ensemble frameworks are necessary to evaluate accurately numerical simulations of DWF. Each phase of DWF has distinct diapycnal and thermohaline regimes: during preconditioning, the Mediterranean thermohaline circulation is driven by exchanges with the Algerian basin. During the intense mixing phase, surface heat fluxes trigger deep convection and internal mixing largely determines the resulting deep water properties. During restratification, lateral exchanges and internal mixing are enhanced. Finally, isopycnal mixing was shown to play a large role in water mass transformations during the preconditioning and restratification phases.
Richard T. Reynolds; Andrew J. Sanchez Meador; James A. Youtz; Tessa Nicolet; Megan S. Matonis; Patrick L. Jackson; Donald G. DeLorenzo; Andrew D. Graves
2013-01-01
Ponderosa pine and dry mixed-conifer forests in the Southwest United States are experiencing, or have become increasingly susceptible to, large-scale severe wildfire, insect, and disease episodes resulting in altered plant and animal demographics, reduced productivity and biodiversity, and impaired ecosystem processes and functions. We present a management framework...
NASA Technical Reports Server (NTRS)
Ly, Uy-Loi; Schoemig, Ewald
1993-01-01
In the past few years, the mixed H(sub 2)/H-infinity control problem has been the object of much research interest since it allows the incorporation of robust stability into the LQG framework. The general mixed H(sub 2)/H-infinity design problem has yet to be solved analytically. Numerous schemes have considered upper bounds for the H(sub 2)-performance criterion and/or imposed restrictive constraints on the class of systems under investigation. Furthermore, many modern control applications rely on dynamic models obtained from finite-element analysis and thus involve high-order plant models. Hence the capability to design low-order (fixed-order) controllers is of great importance. In this research a new design method was developed that optimizes the exact H(sub 2)-norm of a certain subsystem subject to robust stability in terms of H-infinity constraints and a minimal number of system assumptions. The derived algorithm is based on a differentiable scalar time-domain penalty function to represent the H-infinity constraints in the overall optimization. The scheme is capable of handling multiple plant conditions, and hence multiple performance criteria and H-infinity constraints, and incorporates additional constraints such as fixed-order and/or fixed-structure controllers. The defined penalty function is applicable to any constraint that is expressible in the form of a real symmetric matrix inequality.
Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel
2015-09-10
Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.
Spectral characteristics of background error covariance and multiscale data assimilation
Li, Zhijin; Cheng, Xiaoping; Gustafson, Jr., William I.; ...
2016-05-17
The spatial resolutions of numerical atmospheric and oceanic circulation models have increased steadily over the past decades. Horizontal grid spacing down to the order of 1 km is now often used to resolve cloud systems in the atmosphere and sub-mesoscale circulation systems in the ocean. These fine resolution models encompass a wide range of temporal and spatial scales, across which dynamical and statistical properties vary. In particular, dynamic flow systems at small scales can be spatially localized and temporally intermittent. Difficulties of current data assimilation algorithms for such fine resolution models are numerically and theoretically examined. Our analysis shows that the background error correlation length scale is larger than 75 km for streamfunctions and larger than 25 km for water vapor mixing ratios, even for a 2-km resolution model. A theoretical analysis suggests that such correlation length scales prevent the currently used data assimilation schemes from constraining spatial scales smaller than 150 km for streamfunctions and 50 km for water vapor mixing ratios. Moreover, our results highlight the need to fundamentally modify currently used data assimilation algorithms for assimilating high-resolution observations into the aforementioned fine resolution models. Lastly, within the framework of four-dimensional variational data assimilation, a multiscale methodology based on scale decomposition is suggested and challenges are discussed.
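A minimal numerical sketch of the scale-decomposition idea suggested above: an innovation (observation-minus-background) field is split into large- and small-scale bands with a spectral low-pass filter, so that each band could be assimilated with its own background-error correlation length. The grid spacing, cutoff wavelength and random field are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def scale_split(field, dx_km, cutoff_km):
    """Split a 2-D field into (large, small) scales at cutoff_km."""
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx, d=dx_km)            # cycles per km
    ky = np.fft.fftfreq(ny, d=dx_km)
    kk = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    mask = kk <= 1.0 / cutoff_km                # keep wavelengths > cutoff
    large = np.real(np.fft.ifft2(np.fft.fft2(field) * mask))
    return large, field - large

rng = np.random.default_rng(0)
innovation = rng.standard_normal((128, 128))    # toy innovation field
large, small = scale_split(innovation, dx_km=2.0, cutoff_km=150.0)
```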
NASA Astrophysics Data System (ADS)
Ruiz-Baier, Ricardo; Lunati, Ivan
2016-10-01
We present a novel discretization scheme tailored to a class of multiphase models that regard the physical system as consisting of multiple interacting continua. In the framework of mixture theory, we consider a general mathematical model that entails solving a system of mass and momentum equations for both the mixture and one of the phases. The model results in a strongly coupled and nonlinear system of partial differential equations that are written in terms of phase and mixture (barycentric) velocities, phase pressure, and saturation. We construct an accurate, robust and reliable hybrid method that combines a mixed finite element discretization of the momentum equations with a primal discontinuous finite volume-element discretization of the mass (or transport) equations. The scheme is devised for unstructured meshes and relies on mixed Brezzi-Douglas-Marini approximations of phase and total velocities, on piecewise constant elements for the approximation of phase or total pressures, as well as on a primal formulation that employs discontinuous finite volume elements defined on a dual diamond mesh to approximate scalar fields of interest (such as volume fraction, total density, saturation, etc.). As the discretization scheme is derived for a general formulation of multicontinuum physical systems, it can be readily applied to a large class of simplified multiphase models; on the other hand, the approach can be seen as a generalization of these models, which are commonly encountered in the literature and employed when the latter are not sufficiently accurate. An extensive set of numerical test cases involving two- and three-dimensional porous media are presented to demonstrate the accuracy of the method (displaying an optimal convergence rate), the physics-preserving properties of the mixed-primal scheme, as well as the robustness of the method (which is successfully used to simulate diverse physical phenomena such as density fingering, Terzaghi's consolidation, deformation of a cantilever bracket, and Boycott effects). The applicability of the method is not limited to flow in porous media, but can also be employed to describe many other physical systems governed by a similar set of equations, including e.g. multi-component materials.
Regional transport modelling for nitrate trend assessment and forecasting in a chalk aquifer.
Orban, Philippe; Brouyère, Serge; Batlle-Aguilar, Jordi; Couturier, Julie; Goderniaux, Pascal; Leroy, Mathieu; Maloszewski, Piotr; Dassargues, Alain
2010-10-21
Regional degradation of groundwater resources by nitrate has become one of the main challenges for water managers worldwide. Regulations have been defined to reverse observed nitrate trends in groundwater bodies, such as the Water Framework Directive and the Groundwater Daughter Directive in the European Union. In such a context, one of the main challenges remains to develop efficient approaches for groundwater quality assessment at regional scale, including quantitative numerical modelling, as a decision support for groundwater management. A new approach combining the use of environmental tracers and the innovative 'Hybrid Finite Element Mixing Cell' (HFEMC) modelling technique is developed to study and forecast the groundwater quality at the regional scale, with an application to a regional chalk aquifer in the Geer basin in Belgium. Tritium data and nitrate time series are used to produce a conceptual model for regional groundwater flow and contaminant transport in the combined unsaturated and saturated zones of the chalk aquifer. This shows that the spatial distribution of the contamination in the Geer basin is essentially linked to the hydrodynamic conditions prevailing in the basin, more precisely to groundwater age and mixing and not to the spatial patterns of land use or local hydrodispersive processes. A three-dimensional regional scale groundwater flow and solute transport model is developed. It is able to reproduce the spatial patterns of tritium and nitrate and the observed nitrate trends in the chalk aquifer and it is used to predict the evolution of nitrate concentrations in the basin. The modelling application shows that the global inertia of groundwater quality is strong in the basin and trend reversal is not expected to occur before the 2015 deadline fixed by the European Water Framework Directive. The expected time required for trend reversal ranges between 5 and more than 50 years, depending on the location in the basin and the expected reduction in nitrate application. To reach a good chemical status, nitrate concentrations in the infiltrating water should be reduced as soon as possible below 50 mg/l; however, even in that case, more than 50 years is needed to fully reverse upward trends. Copyright © 2010 Elsevier B.V. All rights reserved.
Joint modelling of repeated measurement and time-to-event data: an introductory tutorial.
Asar, Özgür; Ritchie, James; Kalra, Philip A; Diggle, Peter J
2015-02-01
The term 'joint modelling' is used in the statistical literature to refer to methods for simultaneously analysing longitudinal measurement outcomes, also called repeated measurement data, and time-to-event outcomes, also called survival data. A typical example from nephrology is a study in which the data from each participant consist of repeated estimated glomerular filtration rate (eGFR) measurements and time to initiation of renal replacement therapy (RRT). Joint models typically combine linear mixed effects models for repeated measurements and Cox models for censored survival outcomes. Our aim in this paper is to present an introductory tutorial on joint modelling methods, with a case study in nephrology. We describe the development of the joint modelling framework and compare the results with those obtained by the more widely used approaches of conducting separate analyses of the repeated measurements and survival times based on a linear mixed effects model and a Cox model, respectively. Our case study concerns a data set from the Chronic Renal Insufficiency Standards Implementation Study (CRISIS). We also provide details of our open-source software implementation to allow others to replicate and/or modify our analysis. The results for the conventional linear mixed effects model and the longitudinal component of the joint models were found to be similar. However, there were considerable differences between the results for the Cox model with time-varying covariate and the time-to-event component of the joint model. For example, the relationship between kidney function as measured by eGFR and the hazard for initiation of RRT was significantly underestimated by the Cox model that treats eGFR as a time-varying covariate, because the Cox model does not take measurement error in eGFR into account. Joint models should be preferred for simultaneous analyses of repeated measurement and survival data, especially when the former is measured with error and the association between the underlying error-free measurement process and the hazard for survival is of scientific interest. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
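As a point of reference, here is a minimal sketch of the 'separate analyses' baseline that the tutorial contrasts with joint modelling: a linear mixed effects model for the repeated eGFR measurements and a Cox model for time to RRT. The data file and column names (id, time, egfr, rrt_time, rrt_event, age) are hypothetical; fitting the actual joint model requires dedicated software, such as the authors' open-source implementation.

```python
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

df = pd.read_csv("crisis_like_data.csv")        # hypothetical file

# Longitudinal component: random intercept and slope per patient.
lmm = smf.mixedlm("egfr ~ time", df, groups=df["id"],
                  re_formula="~time").fit()

# Survival component: one row per patient, baseline covariates only.
surv = df.groupby("id").first().reset_index()
cph = CoxPHFitter()
cph.fit(surv[["rrt_time", "rrt_event", "age"]],
        duration_col="rrt_time", event_col="rrt_event")
```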
SLMRACE: a noise-free RACE implementation with reduced computational time
NASA Astrophysics Data System (ADS)
Chauvin, Juliet; Provenzi, Edoardo
2017-05-01
We present a faster and noise-free implementation of the RACE algorithm. RACE has mixed characteristics between the famous Retinex model of Land and McCann and the automatic color equalization (ACE) color-correction algorithm. The original random spray-based RACE implementation suffers from two main problems: its computational time and the presence of noise. Here, we will show that it is possible to adapt two techniques recently proposed by Banić et al. to the RACE framework in order to drastically decrease the computational time and noise generation. The implementation will be called smart-light-memory-RACE (SLMRACE).
Study of the low energy spectrum of titanium by using QMC methods
NASA Astrophysics Data System (ADS)
Buendía, E.; Caballero, M. A.; Gálvez, F. J.
2018-02-01
We study the ground state and the low-energy excited states of Ti. Each variational wave function is a product of a Jastrow correlation factor and a model function obtained within the parameterized optimized effective potential (POEP) framework by using configuration mixing. Near-degeneracy effects between the 4s and 4p orbitals, as well as excitations to the 3d orbital due to the strong competition between the 4s and 3d orbitals in transition metal atoms, are taken into account. All-electron calculations have been carried out using both variational and diffusion quantum Monte Carlo techniques.
Charged mediators in dark matter scattering
NASA Astrophysics Data System (ADS)
Stengel, Patrick
2017-11-01
We consider a scenario, within the framework of the MSSM, in which dark matter is bino-like and dark matter-nucleon spin-independent scattering occurs via the exchange of light squarks which exhibit left-right mixing. We show that direct detection experiments such as LUX and SuperCDMS will be sensitive to a wide class of such models through spin-independent scattering. The dominant nuclear physics uncertainty is the quark content of the nucleon, particularly the strangeness content. We also investigate parameter space with nearly degenerate neutralino and squark masses, thus enhancing dark matter annihilation and nucleon scattering event rates.
NASA Astrophysics Data System (ADS)
Corvo, Arthur Francis
Given the reality that active and competitive participation in the 21st century requires American students to deepen their scientific and mathematical knowledge base, the National Research Council (NRC) proposed a new conceptual framework for K-12 science education. The framework consists of an integration of what the NRC report refers to as the three dimensions: scientific and engineering practices, crosscutting concepts, and core ideas in four disciplinary areas (physical, life and earth/space sciences, and engineering/technology). The Next Generation Science Standards (NGSS), which are derived from this new framework, were released in April 2013 and have implications for teacher learning and development in Science, Technology, Engineering, and Mathematics (STEM). Given the NGSS's recent introduction, there is little research on how teachers can prepare for its release. To meet this research need, I implemented a self-study aimed at examining my teaching practices and classroom outcomes through the lens of the NRC's conceptual framework and the NGSS. The self-study employed design-based research (DBR) methods to investigate what happened in my secondary classroom when I designed, enacted, and reflected on units of study for my science, engineering, and mathematics classes. I utilized various best practices including Learning for Use (LfU) and Understanding by Design (UbD) models for instructional design, talk moves as a tool for promoting discourse, and modeling instruction for these designed units of study. The DBR strategy was chosen to promote reflective cycles, which are consistent with and in support of the self-study framework. A multiple case, mixed-methods approach was used for data collection and analysis. The findings in the study are reported by study phase in terms of unit planning, unit enactment, and unit reflection. The findings have implications for science teaching, teacher professional development, and teacher education.
Schlette, Sophia; Lisac, Melanie; Wagner, Ed; Gensichen, Jochen
2009-01-01
The Bellagio Model for Population-oriented Primary Care is an evidence-informed framework to assess accessible care for sick, vulnerable, and healthy people. The model was developed in spring 2008 by a multidisciplinary group of 24 experts from nine countries. The purpose of their gathering was to determine success factors for effective 21st century primary care based on state-of-the-art research findings, models, and empirical experience, and to assist with its implementation in practice, management, and health policy. Against the backdrop of "partialization", fragmentation in open health care systems, and the growing numbers of chronically ill or fragile people or those in need of any other kind of care, today's health care systems do not provide the much needed anchor point for continuing coordination and assistance prior to, during, and following an episode of illness. The Bellagio Model consists of ten key elements, which can make a substantial contribution to identifying and overcoming current gaps in primary care by using a synergetic approach. These elements are Shared Leadership, Public Trust, Horizontal and Vertical Integration, Networking of Professionals, Standardized Measurement, Research and Development, Payment Mix, Infrastructure, Programmes for Practice Improvement, and Population-oriented Management. All of these elements, which have been identified as being equally necessary, are also alike in that they involve all those responsible for health care: providers, managers, and policymakers.
NASA Astrophysics Data System (ADS)
Gavrishchaka, Valeriy V.; Kovbasinskaya, Maria; Monina, Maria
2008-11-01
Novelty detection is a very desirable additional feature of any practical classification or forecasting system. Novelty and rare-pattern detection is the main objective in applications such as fault/abnormality discovery in complex technical and biological systems, and fraud detection and risk management in the financial and insurance industries. Although many interdisciplinary approaches for rare event modeling and novelty detection have been proposed, significant data incompleteness due to the nature of the problem makes it difficult to find a universal solution. An even more challenging and much less formalized problem is novelty detection in complex strategies and models, where practical performance criteria are usually multi-objective and the best state-of-the-art solution is often not known due to the complexity of the task and/or the proprietary nature of the application area. For example, it is much more difficult to detect a series of small insider trading or other illegal transactions mixed with valid operations and distributed over a long time period according to a well-designed strategy than a single, large fraudulent transaction. The recently proposed boosting-based optimization was shown to be an effective generic tool for the discovery of stable multi-component strategies/models from existing parsimonious base strategies/models in financial and other applications. Here we outline how the same framework can be used for novelty and fraud detection in complex strategies and models.
Scobbie, Lesley; Duncan, Edward A; Brady, Marian C; Wyke, Sally
2015-01-01
We investigated the nature of services providing community-based stroke rehabilitation across the UK, and the goal setting practice used within them, to inform evaluation of a goal setting and action planning (G-AP) framework. We designed, piloted and electronically distributed a survey to health professionals working in community-based stroke rehabilitation settings across the UK. We optimised recruitment using a multi-faceted strategy. Responses were analysed from 437 services. Service size, composition and input were highly variable; however, most services were multi-disciplinary (82%; n = 335/407) and provided input to a mixed diagnostic group of patients (71%; n = 312/437). Ninety-one percent of services (n = 358/395) reported setting goals with "all" or "most" stroke survivors. Seventeen percent (n = 65/380) reported that no methods were used to guide goal setting practice; 47% (n = 148/315) reported use of informal methods only. Goal setting practice varied, e.g. 98% of services (n = 362/369) reported routinely asking patients about goal priorities; 39% (n = 141/360) reported routinely providing patients with a copy of their goals. Goal setting is embedded within community-based stroke rehabilitation; however, practice varies and is potentially sub-optimal. Further evaluation of the G-AP framework is warranted to inform optimal practice. Evaluation design will take account of the diverse service models that exist. Implications for Rehabilitation: Community-based stroke rehabilitation services across the UK are diverse and tend to see a mixed diagnostic group of patients. Goal setting is implemented routinely within community-based stroke rehabilitation services; however, practice is variable and potentially sub-optimal. Further evaluation of the G-AP framework is warranted to assess its effectiveness in practice.
Eastwood, John G; Kemp, Lynn A; Jalaludin, Bin B
2016-01-01
We have recently described a protocol for a study that aims to build a theory of neighbourhood context and postnatal depression. That protocol proposed a critical realist Explanatory Theory Building Method comprising (1) an emergent phase, (2) a construction phase, and (3) a confirmatory phase. A concurrent triangulated mixed method multilevel cross-sectional study design was described. The protocol also described in detail the Theory Construction Phase, which will be presented here. The Theory Construction Phase will include: (1) defining stratified levels; (2) analytic resolution; (3) abductive reasoning; (4) comparative analysis (triangulation); (5) retroduction; (6) postulate and proposition development; (7) comparison and assessment of theories; and (8) conceptual framework and model development. The stratified levels of analysis in this study were predominantly social and psychological. The abductive analysis used the theoretical frames of: Stress Process; Social Isolation; Social Exclusion; Social Services; Social Capital; Acculturation Theory; and global-economic level mechanisms. Realist propositions are presented for each analysis of triangulated data. Inference to best explanation is used to assess and compare theories. A conceptual framework of maternal depression, stress and context is presented that includes examples of mechanisms at psychological, social, cultural and global-economic levels. Stress was identified as a necessary mechanism that has the tendency to cause several outcomes including depression, anxiety, and health-harming behaviours. The conceptual framework subsequently included conditional mechanisms identified through the retroduction, including the stressors of isolation and expectations and the buffers of social support and trust. The meta-theory of critical realism is used here to generate and construct social epidemiological theory using stratified ontology and both abductive and retroductive analysis. The findings will be applied to the development of a middle range theory and subsequent programme theory for local perinatal child and family interventions.
Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel
2017-05-01
Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
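A minimal sketch of the underlying idea, assuming simulated data rather than the authors' long-term mammal datasets: estimate an individual-level growth slope, then compare Gaussian mixtures with different numbers of clusters by BIC, one of the selection criteria evaluated in the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
t = np.arange(10.0)
# Two latent tactics: slow vs fast growth, 60 individuals each.
slopes_true = np.r_[rng.normal(0.5, 0.1, 60), rng.normal(1.5, 0.1, 60)]
slopes_hat = np.array([
    np.polyfit(t, b * t + rng.normal(0, 0.5, t.size), 1)[0]
    for b in slopes_true
])

X = slopes_hat.reshape(-1, 1)
bic = {k: GaussianMixture(k, random_state=0).fit(X).bic(X)
       for k in (1, 2, 3)}
best_k = min(bic, key=bic.get)                  # expect 2 clusters
```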
Frye, Victoria; Blaney, Shannon; Cerdá, Magdalena; Vlahov, David; Galea, Sandro; Ompad, Danielle C
2014-07-01
We assessed relations among neighborhood characteristics and sexual intimate partner violence against women (SIPVAW) among low-income, drug-involved women (n = 360) and men (n = 670) in New York City between 2005 and 2009. Six percent of women (n = 22) and 5% of men (n = 33) reported experiencing and perpetrating SIPVAW in the past year with a main partner. In adjusted mixed models among women, neighborhood ethnic heterogeneity was significantly negatively associated with SIPVAW victimization. In adjusted logistic models among men, neighborhood collective efficacy was significantly positively associated with SIPVAW perpetration. Novel theoretical frameworks are needed to guide research on neighborhoods and partner violence. © The Author(s) 2014.
Analyzing the Discovery Potential for Light Dark Matter.
Izaguirre, Eder; Krnjaic, Gordan; Schuster, Philip; Toro, Natalia
2015-12-18
In this Letter, we determine the present status of sub-GeV thermal dark matter annihilating through standard model mixing, with special emphasis on interactions through the vector portal. Within representative simple models, we carry out a complete and precise calculation of the dark matter abundance and of all available constraints. We also introduce a concise framework for comparing different experimental approaches, and use this comparison to identify important ranges of dark matter mass and couplings to better explore in future experiments. The requirement that dark matter be a thermal relic sets a sharp sensitivity target for terrestrial experiments, and so we highlight complementary experimental approaches that can decisively reach this milestone sensitivity over the entire sub-GeV mass range.
Understanding large SEP events with the PATH code: Modeling of the 13 December 2006 SEP event
NASA Astrophysics Data System (ADS)
Verkhoglyadova, O. P.; Li, G.; Zank, G. P.; Hu, Q.; Cohen, C. M. S.; Mewaldt, R. A.; Mason, G. M.; Haggerty, D. K.; von Rosenvinge, T. T.; Looper, M. D.
2010-12-01
The Particle Acceleration and Transport in the Heliosphere (PATH) numerical code was developed to understand solar energetic particle (SEP) events in the near-Earth environment. We discuss simulation results for the 13 December 2006 SEP event. The PATH code includes modeling a background solar wind through which a CME-driven oblique shock propagates. The code incorporates a mixed population of both flare and shock-accelerated solar wind suprathermal particles. The shock parameters derived from ACE measurements at 1 AU and observational flare characteristics are used as input into the numerical model. We assume that the diffusive shock acceleration mechanism is responsible for particle energization. We model the subsequent transport of particles originated at the flare site and particles escaping from the shock and propagating in the equatorial plane through the interplanetary medium. We derive spectra for protons, oxygen, and iron ions, together with their time-intensity profiles at 1 AU. Our modeling results show reasonable agreement with in situ measurements by ACE, STEREO, GOES, and SAMPEX for this event. We numerically estimate the Fe/O abundance ratio and discuss the physics underlying a mixed SEP event. We point out that the flare population is as important as shock geometry changes during shock propagation for modeling time-intensity profiles and spectra at 1 AU. The combined effects of seed population and shock geometry will be examined in the framework of an extended PATH code in future modeling efforts.
Becker, Sara J
2015-02-10
Fewer than one in 10 adolescents with substance use disorders (ASUDs) will receive specialty treatment, and even fewer will receive treatment designated as evidence-based practice (EBP). Traditional efforts to increase the utilization of EBP by ASUDs typically focus on practitioners-either in substance use clinics or allied health settings. Direct-to-consumer (DTC) marketing that directly targets parents of ASUDs represents a potentially complementary paradigm that has yet to be evaluated. The current study is the first to evaluate the relevance of a well-established marketing framework (the Marketing Mix) and measurement approach (measurement of perceived service quality [PSQ]) with parents of ASUDs in need of treatment. A mixed-methods design is employed across three study phases, consistent with well-established methods used in the field of marketing science. Phase 1 consists of formative qualitative research with parents (and a supplementary sample of adolescents) in order to evaluate and potentially adapt a conceptual framework (Marketing Mix) and measure of PSQ. Phase 2 is a targeted survey of ASUD parents to elucidate their marketing preferences, using the adapted Marketing Mix framework, and to establish the psychometric properties of the PSQ measure. The survey will also gather data on parents' preferences for different targeted marketing messages. Phase 3 is a two-group randomized controlled trial comparing the effectiveness of targeted marketing messages versus standard clinical information. Key outcomes will include parents' ratings of PSQ (using the new measure), behavioral intentions to seek out information about EBP, and actual information-seeking behavior. The current study will inform the field whether a well-established marketing framework and measurement approach can be used to increase demand for EBP among parents of ASUDs. Results of this study will have the potential to immediately inform DTC marketing efforts by professional organizations, federal agencies, clinicians, and clinical researchers.
A Categorical Framework for Model Classification in the Geosciences
NASA Astrophysics Data System (ADS)
Hauhs, Michael; Trancón y Widemann, Baltasar; Lange, Holger
2016-04-01
Models have a mixed record of success in the geosciences. In meteorology, model development and implementation have been among the first and most successful examples of triggering computer technology in science. On the other hand, notorious problems such as the 'equifinality issue' in hydrology lead to a rather mixed reputation of models in other areas. The most successful models in geosciences are applications of dynamic systems theory to non-living systems or phenomena. Thus, we start from the hypothesis that the success of model applications relates to the influence of life on the phenomenon under study. We thus focus on the (formal) representation of life in models. The aim is to investigate whether disappointment in model performance is due to system properties such as heterogeneity and historicity of ecosystems, or rather reflects an abstraction and formalisation problem at a fundamental level. As a formal framework for this investigation, we use category theory as applied in computer science to specify behaviour at an interface. Its methods have been developed for translating and comparing formal structures among different application areas and seem highly suited for a classification of the current "model zoo" in the geosciences. The approach is rather abstract, with a high degree of generality but a low level of expressibility. Here, category theory will be employed to check the consistency of assumptions about life in different models. It will be shown that it is sufficient to distinguish just four logical cases to check for consistency of model content. All four cases can be formalised as variants of coalgebra-algebra homomorphisms. It can be demonstrated that transitions between the four variants affect the relevant observations (time series or spatial maps), the formalisms used (equations, decision trees) and the test criteria of success (prediction, classification) of the resulting model types. We will present examples from hydrology and ecology in which a transport problem is combined with the strategic behaviour of living agents. The living and the non-living aspects of the model belong to two different model types. If a model is built to combine strategic behaviour with the constraint of mass conservation, some critical assumptions appear as inevitable, or models may become logically inconsistent. The categorical assessment and the examples demonstrate that many models at the ecosystem level, where living and non-living aspects inevitably meet, pose so far unsolved, fundamental problems. Today, these are often pragmatically resolved at the level of software engineering. Some suggestions will be given as to how model documentation and benchmarking may help clarify and resolve some of these issues.
Velocity Resolved - Scalar Modeled Simulations of High Schmidt Number Turbulent Transport
NASA Astrophysics Data System (ADS)
Verma, Siddhartha
The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc ≫ 1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets; the results suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the scalar being transported is effected by applying derivative limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm results in significant performance gain in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence, by examining vector orientation in the strain-rate eigenframe. The results indicate no discernible dependence on the modeled scalar field, and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is ideal for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc ≫ 1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars; however, it is determined that prediction of subfilter quantities would entail additional modeling intended specifically for this purpose. The VR-SM simulations presented in this thesis provide us with the opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
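The derivative-limiting idea for cubic Hermite convection stencils can be illustrated with the classic Fritsch-Carlson-style limiter below; the thesis's scheme additionally admits single sub-cell extrema to reduce numerical dissipation, which this minimal sketch does not reproduce.

```python
import numpy as np

def limited_hermite(x0, x1, f0, f1, d0, d1, x):
    """Cubic Hermite on [x0, x1] with limited endpoint derivatives."""
    h = x1 - x0
    delta = (f1 - f0) / h                       # secant slope
    if delta == 0.0:
        d0 = d1 = 0.0
    else:
        # Fritsch-Carlson: limit derivative/secant ratios to [0, 3].
        d0 = np.clip(d0 / delta, 0.0, 3.0) * delta
        d1 = np.clip(d1 / delta, 0.0, 3.0) * delta
    s = (x - x0) / h
    h00 = (1 + 2 * s) * (1 - s) ** 2            # Hermite basis functions
    h10 = s * (1 - s) ** 2
    h01 = s ** 2 * (3 - 2 * s)
    h11 = s ** 2 * (s - 1)
    return h00 * f0 + h10 * d0 * h + h01 * f1 + h11 * d1 * h
```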
Bayesian function-on-function regression for multilevel functional data.
Meyer, Mark J; Coull, Brent A; Versace, Francesco; Cinciripini, Paul; Morris, Jeffrey S
2015-09-01
Medical and public health research increasingly involves the collection of complex and high dimensional data. In particular, functional data, where the unit of observation is a curve or set of curves finely sampled over a grid, is frequently obtained. Moreover, researchers often sample multiple curves per person, resulting in repeated functional measures. A common question is how to analyze the relationship between two functional variables. We propose a general function-on-function regression model for repeatedly sampled functional data on a fine grid, presenting a simple model as well as a more extensive mixed model framework, and introducing various functional Bayesian inferential procedures that account for multiple testing. We examine these models via simulation and a data analysis with data from a study that used event-related potentials to examine how the brain processes various types of images. © 2015, The International Biometric Society.
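A minimal sketch of the discretized function-on-function regression that underlies such models, with the coefficient surface beta(s, t) estimated by ridge-penalized least squares on a grid; the Bayesian inference and multiple-testing machinery of the paper is not reproduced, and the grids, penalty and simulated curves are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, ns, nt = 50, 40, 30                          # subjects, s-grid, t-grid
X = rng.standard_normal((n, ns))                # predictor curves X_i(s)
beta = np.outer(np.sin(np.linspace(0, 3, ns)),
                np.cos(np.linspace(0, 2, nt)))  # true coefficient surface

Xs = X / ns                                     # quadrature weight 1/ns
Y = Xs @ beta + 0.1 * rng.standard_normal((n, nt))

lam = 1e-2                                      # ridge penalty
beta_hat = np.linalg.solve(Xs.T @ Xs + lam * np.eye(ns), Xs.T @ Y)
```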
Anshel, Mark H; Brinthaupt, Thomas M; Kang, Minsoo
2010-01-01
This study examined the effect of a 10-week wellness program on changes in physical fitness and mental well-being. The conceptual framework for this study was the Disconnected Values Model (DVM). According to the DVM, detecting the inconsistencies between negative habits and values (e.g., health, family, faith, character) and concluding that these "disconnects" are unacceptable promotes the need for health behavior change. Participants were 164 full-time employees at a university in the southeastern U.S. The program included fitness coaching and a 90-minute orientation based on the DVM. Multivariate Mixed Model analyses indicated significantly improved scores from pre- to post-intervention on selected measures of physical fitness and mental well-being. The results suggest that the Disconnected Values Model provides an effective cognitive-behavioral approach to generating health behavior change in a 10-week workplace wellness program.
Evaristo, Jaivime; McDonnell, Jeffrey J.; Scholl, Martha A.; Bruijnzeel, L. Adrian; Chun, Kwok P.
2016-01-01
Water transpired by trees has long been assumed to be sourced from the same subsurface water stocks that contribute to groundwater recharge and streamflow. However, recent investigations using dual water stable isotopes have shown an apparent ecohydrological separation between tree-transpired water and stream water. Here we present evidence for such ecohydrological separation in two tropical environments in Puerto Rico where precipitation seasonality is relatively low and where precipitation is positively correlated with primary productivity. We determined the stable isotope signature of xylem water of 30 mahogany (Swietenia spp.) trees sampled during two periods with contrasting moisture status. Our results suggest that the separation between transpiration water and groundwater recharge/streamflow water might be related less to the temporal phasing of hydrologic inputs and primary productivity, and more to the fundamental processes that drive evaporative isotopic enrichment of residual soil water within the soil matrix. The lack of an evaporative signature of both groundwater and streams in the study area suggests that these water balance components have a water source that is transported quickly to deeper subsurface storage compared to waters that trees use. A Bayesian mixing model used to partition source water proportions of xylem water showed that groundwater contribution was greater for valley-bottom, riparian trees than for ridge-top trees. Groundwater contribution was also greater at the xeric site than at the mesic–hydric site. These model results (1) underline the utility of a simple linear mixing model, implemented in a Bayesian inference framework, in quantifying source water contributions at sites with contrasting physiographic characteristics, and (2) highlight the informed judgement that should be made in interpreting mixing model results, which is particularly important when surveying groundwater use patterns by vegetation from regional to global scales.
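A minimal sketch of a two-end-member Bayesian mixing model of the kind used to partition xylem-water sources: a grid posterior over the groundwater fraction under a flat prior. The end-member and xylem isotope values and the noise level are illustrative assumptions, not the study's data.

```python
import numpy as np

d_gw, d_soil = -20.0, -45.0                     # end-member signatures (per mil)
xylem = np.array([-38.0, -35.5, -40.2, -33.9])  # sampled tree xylem water
sigma = 3.0                                     # measurement + model noise

f = np.linspace(0.0, 1.0, 1001)                 # groundwater fraction grid
pred = f[:, None] * d_gw + (1 - f[:, None]) * d_soil
loglik = -0.5 * np.sum((xylem[None, :] - pred) ** 2, axis=1) / sigma ** 2
post = np.exp(loglik - loglik.max())
post /= post.sum() * (f[1] - f[0])              # normalize to a density
f_mean = np.trapz(f * post, f)                  # posterior mean fraction
```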
ERIC Educational Resources Information Center
Fordham, Sabrina R.
2015-01-01
The purpose of this study was to investigate the relationship between preceptor mentorship to athletic training students and first-attempt success on the Board of Certification (BOC) exam. Adult learning theory provides the theoretical framework. The study followed a mixed-method approach, using a focus-group discussion to gain a qualitative…
ERIC Educational Resources Information Center
Henderson, Joyce Herod
2013-01-01
Our schools are considered a place of safety for learning, however, the unfortunate reality is that schools may face crises and violence. Leadership styles vary among school leaders and provide the framework for handling daily challenges. This mixed-methods research design was used to investigate the individual leadership styles of public school…
ERIC Educational Resources Information Center
Metcalf, Heather
2016-01-01
This research methods Essay details the usefulness of critical theoretical frameworks and critical mixed-methodological approaches for life sciences education research on broadening participation in the life sciences. First, I draw on multidisciplinary research to discuss critical theory and methodologies. Then, I demonstrate the benefits of these…
Giant ferrimagnetism and polarization in a mixed metal perovskite metal-organic framework
NASA Astrophysics Data System (ADS)
Rout, Paresh C.; Srinivasan, Varadharajan
2018-01-01
Perovskite metal-organic frameworks (MOFs) have recently emerged as potential candidates for multiferroicity. However, the compounds synthesized so far possess only weak ferromagnetism and low polarization. Additionally, the very low magnetic transition temperatures (Tc) also pose a challenge to the application of the materials. We have computationally designed a mixed metal perovskite MOF—[C(NH2)3][(Cu0.5Mn0.5)(HCOO)3]—that is predicted to have magnetization two orders of magnitude larger than its parent ([C(NH2)3][Cu(HCOO)3]), a significantly larger polarization (9.9 μC/cm2), and an enhanced Tc of up to 56 K, unprecedented in perovskite MOFs. A detailed study of the magnetic interactions revealed a mechanism leading to the large moments as well as the increase in the Tc. Mixing a non-Jahn-Teller ion (Mn2+) into a Jahn-Teller host (Cu2+) leads to competing lattice distortions which are directly responsible for the enhanced polarization. The MOF is thermodynamically stable as evidenced by the computed enthalpy of formation and can likely be synthesized. Our work represents a first step towards rational design of multiferroic perovskite MOFs through the largely unexplored mixed metal approach.
Understanding Design Tradeoffs for Health Technologies: A Mixed-Methods Approach
O’Leary, Katie; Eschler, Jordan; Kendall, Logan; Vizer, Lisa M.; Ralston, James D.; Pratt, Wanda
2017-01-01
We introduce a mixed-methods approach for determining how people weigh tradeoffs in values related to health and technologies for health self-management. Our approach combines interviews with Q-methodology, a method from psychology uniquely suited to quantifying opinions. We derive the framework for structured data collection and analysis for the Q-methodology from theories of self-management of chronic illness and technology adoption. To illustrate the power of this new approach, we used it in a field study of nine older adults with type 2 diabetes, and nine mothers of children with asthma. Our mixed-methods approach provides three key advantages for health design science in HCI: (1) it provides a structured health sciences theoretical framework to guide data collection and analysis; (2) it enhances the coding of unstructured data with statistical patterns of polarizing and consensus views; and (3) it empowers participants to actively weigh competing values that are most personally significant to them.
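A minimal sketch of the quantitative core of Q-methodology, assuming a simulated sort matrix rather than the study's data: persons (not items) are correlated, and factors extracted from the person-by-person correlation matrix separate polarizing from consensus viewpoints.

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, n_statements = 18, 40
sorts = rng.standard_normal((n_people, n_statements))  # Q-sort scores

R = np.corrcoef(sorts)                          # person-by-person correlation
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])
# Strong loadings of opposite sign on a factor indicate polarized
# views; shared-sign loadings indicate consensus.
```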
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Lee, Charles H.
2012-01-01
We developed the framework and mathematical formulation for optimizing communication networks using mixed integer programming. The design yields a search space that is much smaller than that of the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer programming into the more manageable problem of searching in a continuous space. We proposed solving the constrained optimization problem in two stages: first using the heuristic Particle Swarm Optimization algorithm to get a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so problems with larger numbers of constraints and larger networks can be easily adapted and solved.
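A minimal sketch of the penalty idea described above, assuming a toy link-selection problem rather than the Deep Space Network scenario: binary schedule variables are relaxed to [0, 1], a penalty term pushes the continuous search back toward integer solutions, and an SQP solver polishes a heuristic starting point (which PSO would supply in the full method).

```python
import numpy as np
from scipy.optimize import minimize

value = np.array([3.0, 1.0, 4.0, 2.0, 5.0])     # per-link utility (toy data)
cost = np.array([2.0, 1.0, 3.0, 1.0, 4.0])      # per-link resource use
capacity = 6.0
mu = 10.0                                       # integrality penalty weight

def objective(x):
    # Maximize value => minimize its negative, plus x*(1-x) penalty,
    # which is zero exactly at integer points.
    return -value @ x + mu * np.sum(x * (1 - x))

cons = [{"type": "ineq", "fun": lambda x: capacity - cost @ x}]
x0 = np.full(5, 0.5)                            # PSO would supply this start
res = minimize(objective, x0, method="SLSQP",
               bounds=[(0, 1)] * 5, constraints=cons)
schedule = np.round(res.x)                      # near-integer solution
```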
NASA Astrophysics Data System (ADS)
Hua, Xiu-Ni; Qin, Lan; Yan, Xiao-Zhi; Yu, Lei; Xie, Yi-Xin; Han, Lei
2015-12-01
Hydrothermal reactions of the N-auxiliary flexible exo-bidentate ligand 1,3-bis(4-pyridyl)propane (bpp) and the carboxylate ligands naphthalene-2,6-dicarboxylic acid (2,6-H2ndc) or 4,4′-(hydroxymethylene)dibenzoic acid (H2hmdb), in the presence of cadmium(II) salts, have given rise to two novel metal-organic frameworks based on flexible ligands (FL-MOFs), namely [Cd2(2,6-ndc)2(bpp)(DMF)]·2DMF (1) and [Cd3(hmdb)3(bpp)]·2DMF·2EtOH (2) (DMF=N,N-dimethylformamide). Single-crystal X-ray diffraction analyses revealed that compound 1 exhibits a three-dimensional self-penetrating 6-connected framework based on a dinuclear cluster secondary building unit. Compound 2 displays an infinite three-dimensional 'Lucky Clover' shaped (2,10)-connected network based on the trinuclear cluster and V-shaped organic linkers. The flexible bpp ligand displays different conformations in 1 and 2, which are successfully controlled by size-matching mixed ligands during the self-assembly process.
NASA Astrophysics Data System (ADS)
Machetel, P.; Yuen, D. A.
2012-12-01
In this work, we propose to use the Open Thermodynamic System (OTS) framework to assess temperatures and discharges of underground flows in fluviokarstic systems. The theoretical formulation is built on the first and second laws of thermodynamics. However, such assumptions would require steady states in the Control Volume (CV) to cancel the heat exchanges between underground water and embedding rocks. This situation is obviously never perfectly reached in Nature, where flow discharges and temperatures vary with rainfalls, recessions and seasonal or diurnal fluctuations. First, we will briefly show that the results of a pumping test campaign on the Cent-Fonts (Hérault, France) fluviokarst during summer 2005 are consistent with this theoretical approach. Second, we will present the theoretical formalism of the OTS framework, which leads to equation systems involving the temperatures and/or the discharges of the underground and surface flows. Third, this approach will be applied to the White (2003) conceptual model of fluviokarst, and we will present the numerical model built to assess the applicability of these assumptions. The first-order hydrologic properties observed at the Cent-Fonts resurgence are well described by the calculations based on this OTS framework. While this agreement is necessary, it is not sufficient to validate the method. In order to test its applicability, the mixing process has been modelled as a cooling reaction in a Continuous Stirred Tank Reactor (CSTR), for which matrix and intrusive flows are introduced continuously while effluent water is recovered at the output. The enthalpy of the various flows is conserved except for the part that exchanges heat with the embedding rocks. The numerical model shows that, in the water-saturated part of the Conduit System (CS), the matrix flow sweeps heat away by convective-advective processes while temporal heat fluctuations from intrusive flows cross the CV walls. The numerical model also shows that the convective flow from the matrix damps the diurnal fluctuations on very short space and time scales. The fate of seasonal temperature fluctuations depends on the ratio between the global space and time scales of fluviokarst transport and those of the fluctuations. This work shows that, within this framework, temperature can be considered a conservative tracer, because most of the heat exchanged with the embedding rocks during non-steady periods is brought back by the convergence of matrix flows toward the CV. This mechanism cancels the effects of the heat exchanges for the diurnal fluctuations and also reduces those due to seasonal variations of temperature. The OTS approach may therefore bring new tools for assessing underground fluid temperatures and discharges, and probably also offers potential applications for geothermal studies.
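A minimal worked example of the steady-state enthalpy balance underlying the OTS approach, with illustrative numbers rather than Cent-Fonts data: if heat exchange with the embedding rocks cancels, the resurgence temperature is the discharge-weighted mean of the component flow temperatures, so an unknown discharge can be back-calculated.

```python
T_matrix, T_intrusive, T_out = 13.5, 18.0, 14.4   # degrees C (illustrative)
Q_out = 1.2                                       # m^3/s, measured at resurgence

# Mixing: Q_out*T_out = Q_m*T_matrix + Q_i*T_intrusive, with Q_m + Q_i = Q_out
Q_intrusive = Q_out * (T_out - T_matrix) / (T_intrusive - T_matrix)
Q_matrix = Q_out - Q_intrusive
print(f"matrix {Q_matrix:.2f} m3/s, intrusive {Q_intrusive:.2f} m3/s")
```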
A4 flavour model for Dirac neutrinos: Type I and inverse seesaw
NASA Astrophysics Data System (ADS)
Borah, Debasish; Karmakar, Biswajit
2018-05-01
We propose two different seesaw models, namely type I and inverse seesaw, to realise light Dirac neutrinos within the framework of A4 discrete flavour symmetry. The additional fields and their transformations under the flavour symmetries are chosen in such a way as to naturally predict the hierarchies of the different elements of the seesaw mass matrices in these two types of seesaw mechanisms. For generic choices of flavon alignments, both models predict normal hierarchical light neutrino masses with the atmospheric mixing angle in the lower octant. Apart from predicting interesting correlations between different neutrino parameters as well as between neutrino and model parameters, the model also predicts the leptonic Dirac CP phase to lie in a specific range, −π/3 to π/3. While the type I seesaw model predicts smaller values of the absolute neutrino mass, the inverse seesaw predictions for the absolute neutrino masses can saturate the cosmological upper bound on the sum of absolute neutrino masses for certain choices of model parameters.
NASA Astrophysics Data System (ADS)
Zhu, Wei; Timmermans, Harry
2011-06-01
Models of geographical choice behavior have been based predominantly on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments, in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (the conjunctive, disjunctive and lexicographic rules) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, in the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are the best for all the decisions modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of the heuristic models are slightly better than those of the multinomial logit models.
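The three classical heuristics have simple operational forms. The sketch below is illustrative only: the attribute names, thresholds, and scores are hypothetical, and the paper's extension makes the thresholds heterogeneous across pedestrians rather than fixed as here.

```python
import numpy as np

def conjunctive(x, thresholds):
    """Accept an alternative only if EVERY attribute clears its threshold."""
    return np.all(x >= thresholds)

def disjunctive(x, thresholds):
    """Accept an alternative if AT LEAST ONE attribute clears its threshold."""
    return np.any(x >= thresholds)

def lexicographic(alternatives, priority):
    """Pick by the most important attribute; break ties with the next one."""
    alts = list(alternatives)
    for attr in priority:
        best = max(a[attr] for a in alts)
        alts = [a for a in alts if a[attr] == best]
        if len(alts) == 1:
            break
    return alts[0]

# Hypothetical stores scored on (variety, price attractiveness, proximity)
stores = [{"variety": 3, "price": 2, "proximity": 1},
          {"variety": 3, "price": 1, "proximity": 3}]
x = np.array([3, 2, 1])
tau = np.array([2, 2, 2])
print(conjunctive(x, tau), disjunctive(x, tau))    # False True
print(lexicographic(stores, ["variety", "price"]))  # first store wins on price
```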
Developing multifunctional nanoparticles in a 1-D coordination polymer of Cd(II)
NASA Astrophysics Data System (ADS)
Agarwal, Rashmi A.; Gupta, Neeraj K.
2017-11-01
A simple synthesis for the integration of different nanoparticles (NPs), including Ag, Au, Pd, Cr and mixed (Cu/Fe), within the nanopores of a non-activated one-dimensional porous coordination polymer (PCP) of Cd(II) has been demonstrated, enabled by its highly flexible structure. Two different mechanisms (acid formation (HCl/HNO3) and redox activity of the framework) were elucidated by electron paramagnetic resonance (EPR). FTIR shows that the -NO2 groups of the ligand act as anchoring sites for the metal ions of the metal precursors, leading to NP growth within the PCP. High-resolution transmission electron microscopy (HRTEM) images provided insight into the chemical and physical characteristics of the NPs within the framework. Ag/AgO NPs exhibit excellent antibacterial properties at extremely low concentrations. The polymer shows potential for sequestration and reduction of hexavalent Cr (highly toxic) to elemental, trivalent and tetravalent Cr (non-toxic). This framework is also an excellent template for the fabrication and dry storage of nanoparticles synthesized from mixed metal precursors. Ferromagnetic properties are shown by the Ag- and Au-NP-integrated frameworks, while Cu/Fe@Cd-PCP behaves as a paramagnetic material at room temperature.
DasPy – Open Source Multivariate Land Data Assimilation Framework with High Performance Computing
NASA Astrophysics Data System (ADS)
Han, Xujun; Li, Xin; Montzka, Carsten; Kollet, Stefan; Vereecken, Harry; Hendricks Franssen, Harrie-Jan
2015-04-01
Data assimilation has become a popular method to integrate observations from multiple sources with land surface models to improve predictions of the water and energy cycles of the soil-vegetation-atmosphere continuum. In recent years, several land data assimilation systems have been developed at different research agencies. Because of limited software availability or adaptability, these systems are not easy to apply for the purpose of multivariate land data assimilation research. Multivariate data assimilation refers to the simultaneous assimilation of observation data for multiple model state variables into a simulation model. Our main motivation was therefore to develop an open-source multivariate land data assimilation framework (DasPy), implemented in the Python scripting language mixed with C++ and Fortran. The system has been evaluated in several soil moisture, L-band brightness temperature and land surface temperature assimilation studies. The implementation also allows parameter estimation (soil properties and/or leaf area index) on the basis of the joint state and parameter estimation approach. The LETKF (Local Ensemble Transform Kalman Filter) is implemented as the main data assimilation algorithm, and uncertainties in the data assimilation can be represented by perturbed atmospheric forcings, perturbed soil and vegetation properties, and perturbed model initial conditions. CLM4.5 (the Community Land Model) was integrated as the model operator. CMEM (the Community Microwave Emission Modelling Platform), COSMIC (the COsmic-ray Soil Moisture Interaction Code) and the two-source formulation were integrated as observation operators for the assimilation of L-band passive microwave, cosmic-ray soil moisture probe and land surface temperature measurements, respectively. DasPy is parallelized using hybrid MPI (Message Passing Interface) and OpenMP (Open Multi-Processing) techniques. All input and output data flow is organized efficiently using the commonly used NetCDF file format. Online 1D and 2D visualization of data assimilation results is also implemented to facilitate post-simulation analysis. In summary, DasPy is a ready-to-use, open-source, parallel, multivariate land data assimilation framework.
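The core of the LETKF that DasPy uses as its main assimilation algorithm can be written compactly. The sketch below is a generic single-region analysis step following the standard Hunt et al. (2007) formulation, not DasPy's actual interface (which the abstract does not specify); the state and observation dimensions in the toy example are hypothetical.

```python
import numpy as np
from scipy.linalg import sqrtm

def letkf_analysis(X, y, H, R, rho=1.1):
    """Generic LETKF analysis step for one local region (Hunt et al. 2007).

    X   : (n, m) background ensemble (n state variables, m members)
    y   : (p,)   local observations
    H   : (p, n) linearized observation operator
    R   : (p, p) observation-error covariance
    rho : multiplicative covariance inflation factor
    """
    m = X.shape[1]
    xb = X.mean(axis=1)
    Xp = X - xb[:, None]                      # background perturbations
    Yp = H @ Xp                               # perturbations mapped to obs space
    d = y - H @ xb                            # innovation
    Rinv = np.linalg.inv(R)
    # Analysis error covariance in ensemble space
    Pa = np.linalg.inv((m - 1) / rho * np.eye(m) + Yp.T @ Rinv @ Yp)
    w = Pa @ Yp.T @ Rinv @ d                  # weights updating the ensemble mean
    W = np.real(sqrtm((m - 1) * Pa))          # weights updating the perturbations
    return xb[:, None] + Xp @ (w[:, None] + W)

# Hypothetical toy problem: 4 state variables, 20 members, 2 observations.
rng = np.random.default_rng(0)
Xb = rng.normal(280.0, 2.0, size=(4, 20))
H = np.eye(2, 4)                              # observe the first two variables
Xa = letkf_analysis(Xb, np.array([281.0, 279.5]), H, 0.5 * np.eye(2))
print(Xa.mean(axis=1))                        # analysis mean pulled toward the obs
```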
Kraak, V I; Englund, T; Misyak, S; Serrano, E L
2017-08-01
This review identified and adapted choice architecture frameworks to develop a novel framework that restaurant owners could use to promote healthy food environments for customers who currently overconsume products high in fat, sugar and sodium that increase their risk of obesity and diet-related non-communicable diseases. This review was conducted in three steps and presented as a narrative summary to demonstrate a proof of concept. Step 1 was a systematic review of nudge or choice architecture frameworks used to categorize strategies that cue healthy behaviours in microenvironments. We searched nine electronic databases between January 2000 and December 2016 and identified 1,244 records. Inclusion criteria led to the selection of five choice architecture frameworks, of which three were adapted and combined with marketing mix principles to highlight eight strategies (i.e. place, profile, portion, pricing, promotion, healthy default picks, prompting or priming and proximity). Step 2 involved conducting a comprehensive evidence review between January 2006 and December 2016 to identify U.S. recommendations for the restaurant sector organized by strategy. Step 3 entailed developing 12 performance metrics for the eight strategies. This framework should be tested to determine its value to assist restaurant owners to promote and socially normalize healthy food environments to reduce obesity and non-communicable diseases. © 2017 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of World Obesity Federation.
Peterson, Gregory W; Lu, Annie X; Hall, Morgan G; Browe, Matthew A; Tovar, Trenton; Epps, Thomas H
2018-02-28
This work describes a new strategy for fabricating mixed matrix composites containing layered metal-organic framework (MOF)/polymer films as functional barriers for chemical warfare agent protection. Through the use of mechanically robust polymers as the top and bottom encasing layers, a high-MOF-loading, high-performance-core layer can be sandwiched within. We term this multifunctional composite "MOFwich". We found that the use of elastomeric encasing layers enabled core layer reformation after breakage, an important feature for composites and membranes alike. The incorporation of MOFs into the core layer led to enhanced removal of chemical warfare agents while simultaneously promoting moisture vapor transport through the composite, showcasing the promise of these composites for protection applications.
NASA Astrophysics Data System (ADS)
Di Crescenzo, A.; OPERA Collaboration
2016-05-01
The OPERA experiment observed ν_μ → ν_τ oscillations in the atmospheric sector. To this purpose, the hybrid OPERA detector was exposed to the CERN Neutrinos to Gran Sasso beam from 2008 to 2012, at a distance of 730 km from the neutrino source. Charged-current interactions of ν_τ were searched for through the identification of τ lepton decay topologies. The five observed ν_τ interactions are consistent with the expected number of events in the standard three-neutrino framework. Based on this result, new limits on the mixing parameters of a massive sterile neutrino may be set. Preliminary results of the analysis performed in the 3+1 neutrino framework are presented here.
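For reference, 3+1 analyses of this kind are usually framed with the effective two-flavour appearance probability; this is the standard textbook form, not necessarily the exact parametrization used by OPERA:

```latex
P_{\nu_\mu \to \nu_\tau} \;\simeq\; \sin^2 2\theta_{\mu\tau}\,
  \sin^2\!\left(\frac{1.27\,\Delta m^2_{41}\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
\qquad
\sin^2 2\theta_{\mu\tau} \;=\; 4\,|U_{\mu 4}|^2\,|U_{\tau 4}|^2 ,
```

so a ν_τ appearance search of this kind bounds the product |U_μ4|²|U_τ4|² as a function of Δm²₄₁.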
Active mixing of complex fluids at the microscale
Ober, Thomas J.; Foresti, Daniele; Lewis, Jennifer A.
2015-09-22
Mixing of complex fluids at low Reynolds number is fundamental for a broad range of applications, including materials assembly, microfluidics, and biomedical devices. Of these materials, yield stress fluids (and gels) pose the most significant challenges, especially when they must be mixed in low volumes over short timescales. New scaling relationships between mixer dimensions and operating conditions are derived and experimentally verified to create a framework for designing active microfluidic mixers that can efficiently homogenize a wide range of complex fluids. As a result, active mixing printheads are then designed and implemented for multimaterial 3D printing of viscoelastic inks with programmable control of local composition.
Importance of fishing as a segmentation variable in the application of a social worlds model
Gigliotti, Larry M.; Chase, Loren
2017-01-01
Market segmentation is useful for understanding and classifying the diverse range of outdoor recreation experiences sought by different recreationists. Although many different segmentation methodologies exist, many are complex and difficult to measure accurately during in-person intercepts, such as creel surveys. To address that gap in the literature, we propose a single-item measure of the importance of fishing as a surrogate for often overly or needlessly complex segmentation techniques. The importance-of-fishing item is a measure of the value anglers place on the activity, or a coarse quantification of how central the activity is to the respondent's lifestyle (scale: 0 = not important, 1 = slightly, 2 = moderately, 3 = very, and 4 = fishing is my most important recreational activity). We suggest the importance scale may serve as a proxy measurement for segmenting anglers using the social worlds model as a theoretical framework. Vaske (1980) suggested that commitment to recreational activities may be best understood in relation to social group participation, and the social worlds model provides a rich theoretical framework for understanding social group segments. Unruh (1983) identified four types of actor involvement in social worlds: strangers, tourists, regulars, and insiders, differentiated by four characteristics (orientation, experiences, relationships, and commitment). We evaluated the importance of fishing as a segmentation variable using data collected by a mixed-mode survey of South Dakota anglers fishing in 2010. We contend that this straightforward measurement may be useful for segmenting outdoor recreation activities when more complicated segmentation schemes are not suitable. Further, this index, when coupled with the social worlds model, provides a valuable framework for understanding the segments and making management decisions.
Fundamental Studies of Crystal Growth of Microporous Materials
NASA Technical Reports Server (NTRS)
Dutta, P.; George, M.; Ramachandran, N.; Schoeman, B.; Curreri, Peter A. (Technical Monitor)
2002-01-01
Microporous materials are framework structures with well-defined porosity, often of molecular dimensions. Zeolites contain aluminum and silicon atoms in their framework and are the most extensively studied among all microporous materials. Framework structures with P, Ga, Fe, Co, Zn, B, Ti and a host of other elements have also been made. Typical syntheses of microporous materials involve mixing the framework elements (or compounds thereof) in a basic solution, followed by aging in some cases and then heating at elevated temperatures. This process is termed hydrothermal synthesis and involves complex chemical and physical changes. Because of a limited understanding of this process, most synthesis advancements happen by a trial-and-error approach. There is considerable interest in understanding the synthesis process at a molecular level, with the expectation that eventually new framework structures will be built by design. The basic issues in the microporous materials crystallization process include: (1) the nature of the molecular units responsible for crystal nuclei formation; (2) the nature of the nuclei and the nucleation process; (3) the growth process of the nuclei into crystals; (4) morphological control and size of the resulting crystals; (5) the surface structure of the resulting crystals; (6) transformation of frameworks into other frameworks or condensed structures. The NASA-funded research described in this report focuses to varying degrees on all of the above issues and has been described in several publications. The highlights of our current research program are presented below. The report is divided into five sections: (1) fundamental aspects of the crystal growth process; (2) morphological and surface properties of crystals; (3) crystal dissolution and transformations; (4) modeling of crystal growth; (5) relevant microgravity experiments.
Basson, Jacob; Sung, Yun Ju; de Las Fuentes, Lisa; Schwander, Karen L; Vazquez, Ana; Rao, Dabeeru C
2016-01-01
Blood pressure (BP) has been shown to be substantially heritable, yet identified genetic variants explain only a small fraction of the heritability. Gene-smoking interactions have detected novel BP loci in cross-sectional family data. Longitudinal family data are available and have additional promise to identify BP loci. However, this type of data presents unique analysis challenges. Although several methods for analyzing longitudinal family data are available, which method is the most appropriate and under what conditions has not been fully studied. Using data from three clinic visits from the Framingham Heart Study, we performed association analysis accounting for gene-smoking interactions in BP at 31,203 markers on chromosome 22. We evaluated three different modeling frameworks: generalized estimating equations (GEE), hierarchical linear modeling, and pedigree-based mixed modeling. The three models performed somewhat comparably, with multiple overlaps in the most strongly associated loci from each model. Loci with the greatest significance were more strongly supported in the longitudinal analyses than in any of the component single-visit analyses. The pedigree-based mixed model was more conservative, with less inflation in the variant main effect and greater deflation in the gene-smoking interactions. The GEE, but not the other two models, resulted in substantial inflation in the tail of the distribution when variants with minor allele frequency <1% were included in the analysis. The choice of analysis method should depend on the model and the structure and complexity of the familial and longitudinal data. © 2015 WILEY PERIODICALS, INC.
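As a concrete illustration of the first of the three frameworks compared, a gene-smoking interaction model for repeated BP measurements can be fit with GEE. The sketch below uses statsmodels on synthetic data; the column names and the simplified covariate set are hypothetical, since the study's exact specification is not given in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per subject-visit, clustered by family.
rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "family_id": rng.integers(0, 80, n),   # cluster identifier
    "visit": rng.integers(1, 4, n),        # clinic visit 1-3
    "age": rng.normal(50, 10, n),
    "sex": rng.integers(0, 2, n),
    "snp": rng.integers(0, 3, n),          # minor-allele count 0/1/2
    "smoking": rng.integers(0, 2, n),      # current smoker yes/no
})
df["sbp"] = (120 + 0.3 * df["age"] + 2.0 * df["snp"] * df["smoking"]
             + rng.normal(0, 10, n))       # planted gene-smoking interaction

# GEE with an exchangeable working correlation within families;
# the snp:smoking coefficient is the gene-smoking interaction of interest.
model = smf.gee("sbp ~ snp * smoking + age + sex + C(visit)",
                groups="family_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```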
Numerical simulation of wave-current interaction under strong wind conditions
NASA Astrophysics Data System (ADS)
Larrañaga, Marco; Osuna, Pedro; Ocampo-Torres, Francisco Javier
2017-04-01
Although ocean surface waves are known to play an important role in the transfer of momentum and other scalars between the atmosphere and the ocean, most operational numerical models do not explicitly include wave-current interaction terms. In this work, a numerical analysis of the relative importance of the processes associated with wave-current interaction under strong offshore wind conditions in the Gulf of Tehuantepec (southern Mexican Pacific) was carried out. The numerical system includes the spectral wave model WAM and the 3D hydrodynamic model POLCOMS, with vertical turbulent mixing parametrized by the kappa-epsilon closure model. The coupling methodology is based on the vortex-force formalism. The hydrodynamic model was forced at the open boundaries using the HYCOM database, and the wave model was forced at the open boundaries by remote waves from the southern Pacific. The atmospheric forcing for both models was provided by a local implementation of the WRF model, forced at the open boundaries using the CFSR database. Preliminary analysis of the model results indicates an effect of currents on the propagation of swell throughout the study area. The Stokes-Coriolis term has an impact on the transient Ekman transport by modifying the Ekman spiral, while the Stokes drift affects the momentum advection and the production of TKE, the latter inducing a deepening of the mixing layer. This study is carried out in the framework of the project CONACYT CB-2015-01 255377 and the RugDiSMar Project (CONACYT 155793).
Bauer, Mark S; Krawczyk, Lois; Tuozzo, Kathy; Frigand, Cara; Holmes, Sally; Miller, Christopher J; Abel, Erica; Osser, David N; Franz, Aleda; Brandt, Cynthia; Rooney, Meghan; Fleming, Jerry; Smith, Eric; Godleski, Linda
2018-01-01
Telemental health interventions have empirical support from clinical trials and structured demonstration projects. However, their implementation and sustainability under less structured clinical conditions are not well demonstrated. We conducted a follow-up analysis of the implementation and sustainability of a clinical video teleconference-based collaborative care model for individuals with bipolar disorder treated in the Department of Veterans Affairs to (a) characterize the extent of implementation and sustainability of the program after its establishment and (b) identify barriers and facilitators to implementation and sustainability. We conducted a mixed methods program evaluation, assessing quantitative aspects of implementation according to the Reach, Efficacy, Adoption, Implementation, and Maintenance implementation framework. We conducted qualitative analysis of semistructured interviews with 16 of the providers who submitted consults, utilizing the Integrated Promoting Action on Research Implementation in the Health Services implementation framework. The program demonstrated linear growth in sites (n = 35) and consults (n = 915) from late 2011 through mid-2016. Site-based analysis indicated statistically significant sustainability beyond the first year of operation. Qualitative analysis identified key facilitators, including consult content, ease of use via electronic health record, and national infrastructure. Barriers included availability of telehealth space, equipment, and staff at the sites, as well as the labor-intensive nature of scheduling. The program achieved continuous growth over almost 5 years due to (1) successfully filling a need perceived by providers, (2) developing in a supportive context, and (3) receiving effective facilitation by national and local infrastructure. Clinical video teleconference-based interventions, even multicomponent collaborative care interventions for individuals with complex mental health conditions, can grow vigorously under appropriate conditions.
Age-specific contacts and travel patterns in the spatial spread of 2009 H1N1 influenza pandemic
2013-01-01
Background Confirmed H1N1 cases during late spring and summer 2009 in various countries showed a substantial age shift between importations and local transmission cases, with adults mainly responsible for seeding unaffected regions and children most frequently driving community outbreaks. Methods We introduce a multi-host stochastic metapopulation model with two age classes to analytically investigate the role of a heterogeneously mixing population, and of its associated non-homogeneous travel behaviors, in the risk of a major epidemic. We inform the model with demographic data, contact data and travel statistics for Europe and Mexico, and calibrate it to the early outbreak of the 2009 H1N1 pandemic. We allow the model parameters to vary in order to explore the conditions for invasion under different scenarios. Results We derive an expression for the potential for global invasion of the epidemic that depends on the transmissibility of the pathogen, the transportation network and mobility features, the demographic profile and the mixing pattern. Higher assortativity in the contact pattern greatly increases the probability of spatial containment of the epidemic, an effect counteracted by an increase in the social activity of adults relative to children. Heterogeneous features of the mobility network characterizing its topology and traffic flows strongly favor the invasion of the pathogen at the spatial level, as does a larger fraction of children traveling. Variations in the demographic profile and mixing habits across countries lead to heterogeneous outbreak situations. Model results are compatible with the observed H1N1 spatial transmission dynamics. Conclusions This work illustrates the importance of considering age-dependent mixing profiles and mobility features coupled together when studying the conditions for the spatial invasion of an emerging influenza pandemic. Its results allow the immediate assessment of the risk of a major epidemic for a specific scenario upon availability of data, and the evaluation of the potential effectiveness of public health interventions targeting specific age groups, their interactions and mobility behaviors. The approach provides a general modeling framework that can be used for other types of partitions of the host population and applied to different settings. PMID:23587010
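The role of age-dependent mixing can be illustrated with a next-generation-matrix sketch: the local transmission potential is the leading eigenvalue of the age-structured contact matrix scaled by transmissibility. This is a generic construction for two groups with tunable assortativity, not the paper's full metapopulation invasion expression (which also involves the travel network); all numbers are hypothetical.

```python
import numpy as np

def ngm_r0(beta, contacts, fractions, assortativity):
    """Leading eigenvalue of a 2x2 next-generation matrix for two age classes.

    contacts      : mean daily contacts of (children, adults)
    fractions     : population fractions of the two classes
    assortativity : 0 = proportionate mixing, 1 = fully within-group mixing
    """
    c = np.asarray(contacts, float)
    f = np.asarray(fractions, float)
    prop = np.outer(c, c * f) / np.dot(c, f)   # proportionate-mixing contact matrix
    C = assortativity * np.diag(c) + (1 - assortativity) * prop
    K = beta * C                               # recovery rate absorbed into beta
    return np.max(np.linalg.eigvals(K).real)

# Children more socially active than adults (hypothetical values)
for a in (0.0, 0.5, 0.9):
    print(f"assortativity={a}: R0={ngm_r0(0.15, [18, 9], [0.3, 0.7], a):.2f}")
```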
NASA Astrophysics Data System (ADS)
Beck, V.; Gerbig, C.; Koch, T.; Bela, M. M.; Longo, K. M.; Freitas, S. R.; Kaplan, J. O.; Prigent, C.; Bergamaschi, P.; Heimann, M.
2013-08-01
The Amazon region, being a large source of methane (CH4), contributes significantly to the global annual CH4 budget. For the first time, a regional-scale forward and inverse modelling framework for assessing the CH4 budget of the Amazon region is implemented. Here, we present forward simulations of CH4, as part of this framework, based on a modified version of the Weather Research and Forecasting model with chemistry that allows for passive tracer transport of CH4, carbon monoxide, and carbon dioxide (WRF-GHG), in combination with two different process-based bottom-up models of CH4 emissions from anaerobic microbial production in wetlands and additional datasets prescribing CH4 emissions from other sources such as biomass burning, termites, or other anthropogenic emissions. We compare WRF-GHG simulations at 10 km horizontal resolution to flask and continuous CH4 observations obtained during two airborne measurement campaigns within the Balanço Atmosférico Regional de Carbono na Amazônia (BARCA) project in November 2008 and May 2009. In addition, three different wetland inundation maps, prescribing the fraction of inundated area per grid cell, are evaluated. Our results indicate that the wetland inundation maps based on remote-sensing data represent the observations best, except for the northern part of the Amazon basin and the Manaus area. WRF-GHG represented the observed CH4 mixing ratios best on days with little convective activity. After adjusting wetland emissions to match the averaged observed mixing ratios of flights with little convective activity, the monthly CH4 budget for the Amazon basin obtained from four different simulations ranges from 1.5 to 4.8 Tg for November 2008 and from 1.3 to 5.5 Tg for May 2009. This corresponds to an average CH4 flux of 9-31 mg m⁻² d⁻¹ for November 2008 and 8-36 mg m⁻² d⁻¹ for May 2009.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jolos, R. V.; Shirikova, N. Yu.; Voronov, V. V.
A schematic microscopic method is developed to calculate the M1 transition probabilities between the mixed-symmetry and the fully symmetric states in γ-soft nuclei. The method is based on the random-phase approximation-interacting boson model (RPA-IBM) boson mapping of the most collective isoscalar boson. All other boson modes with higher excitation energies, including the mixed-symmetry boson, are described in the framework of RPA. As an example, the M1 transition probabilities are calculated for the ¹²⁴⁻¹³⁴Xe isotopes and compared with the experimental data. The results agree well with the data for the ratio B(M1; 1⁺ₘₛ → 2⁺₂)/B(M1; 1⁺ₘₛ → 0⁺₁). However, the calculated ratio B(M1; 2⁺ₘₛ → 2⁺₁)/B(M1; 1⁺ₘₛ → 0⁺₁) shows a significantly weaker dependence on the mass number than the experimental data.
NASA Astrophysics Data System (ADS)
Stenzel, J.; Hudiburg, T. W.; Berardi, D.; McNellis, B.; Walsh, E.
2017-12-01
In forests vulnerable to drought and fire, there is a critical need for in situ carbon and water balance measurements that can be integrated with earth system modeling to predict climate feedbacks. Model development can be improved by measurements that inform a mechanistic understanding of the component fluxes of net carbon uptake (i.e., NPP, autotrophic and heterotrophic respiration) and water use, with specific focus on responses to climate and disturbance. By integrating novel field-based instrumental technology, existing datasets, and state-of-the-art earth system modeling, we are attempting to (1) quantify the spatial and temporal impacts of forest thinning on regional biogeochemical cycling and climate, and (2) evaluate the impact of forest thinning on forest resilience to drought and disturbance in the Northern Rockies ecoregion. The combined model-experimental framework enables hypothesis testing that would otherwise be impossible, because the new in situ high-temporal-resolution field technology allows for research in remote and mountainous terrain that has been excluded from eddy-covariance techniques. Our preliminary work has revealed some underlying difficulties with the new instrumentation, which has led to new ideas and modified methods to correctly measure the component fluxes. Our observations of C balance following the thinning operations indicate that the recovery period (source to sink) is longer than hypothesized. Finally, we have incorporated a new plant functional type parameterization for Northern Rocky mixed-conifer forests into our simulation modeling using regional and site observations.
Salles, Tristan; Ding, Xuesong; Webster, Jody M; Vila-Concejo, Ana; Brocard, Gilles; Pall, Jodie
2018-03-27
Understanding the effects of climatic variability on sediment dynamics is hindered by the limited ability of current models to simulate the long-term evolution of sediment transfer from source to sink and the associated morphological changes. We present a new approach based on a reduced-complexity model which computes, over geological time: sediment transport from landmasses to coasts, reworking of marine sediments by longshore currents, and development of coral reef systems. Our framework links together the main sedimentary processes driving mixed siliciclastic-carbonate system dynamics. It offers a methodology for objective and quantitative estimation of sediment fate over regional and millennial time-scales. A simulation of the Holocene evolution of the Great Barrier Reef shows: (1) how high sediment loads from catchment erosion prevented coral growth during the early transgression phase and favoured sediment gravity-flows in the deepest parts of the northern region basin floor (prior to 8 ka before present (BP)); (2) how the fine balance between climate, sea level, and margin physiography enabled coral reefs to thrive under limited shelf sedimentation rates after ~6 ka BP; and (3) how, since 3 ka BP, with decreasing accommodation space, reduced vertical growth led to the lateral extension of reefs, consistent with available observational data.
Retrospective Binary-Trait Association Test Elucidates Genetic Architecture of Crohn Disease
Jiang, Duo; Zhong, Sheng; McPeek, Mary Sara
2016-01-01
In genetic association testing, failure to properly control for population structure can lead to severely inflated type 1 error and power loss. Meanwhile, adjustment for relevant covariates is often desirable and sometimes necessary to protect against spurious association and to improve power. Many recent methods to account for population structure and covariates are based on linear mixed models (LMMs), which are primarily designed for quantitative traits. For binary traits, however, LMM is a misspecified model and can lead to deteriorated performance. We propose CARAT, a binary-trait association testing approach based on a mixed-effects quasi-likelihood framework, which exploits the dichotomous nature of the trait and achieves computational efficiency through estimating equations. We show in simulation studies that CARAT consistently outperforms existing methods and maintains high power in a wide range of population structure settings and trait models. Furthermore, CARAT is based on a retrospective approach, which is robust to misspecification of the phenotype model. We apply our approach to a genome-wide analysis of Crohn disease, in which we replicate association with 17 previously identified regions. Moreover, our analysis on 5p13.1, an extensively reported region of association, shows evidence for the presence of multiple independent association signals in the region. This example shows how CARAT can leverage known disease risk factors to shed light on the genetic architecture of complex traits. PMID:26833331
CP4 miracle: shaping Yukawa sector with CP symmetry of order four
NASA Astrophysics Data System (ADS)
Ferreira, P. M.; Ivanov, Igor P.; Jiménez, Enrique; Pasechnik, Roman; Serôdio, Hugo
2018-01-01
We explore the phenomenology of a unique three-Higgs-doublet model based on the single CP symmetry of order 4 (CP4) without any accidental symmetries. The CP4 symmetry is imposed on the scalar potential and Yukawa interactions, strongly shaping both sectors of the model and leading to a very characteristic phenomenology. The scalar sector is analyzed in detail, and in the Yukawa sector we list all possible CP4-symmetric structures which do not run into immediate conflict with experiment, namely, do not lead to massless or mass-degenerate quarks nor to insufficient mixing or CP violation in the CKM matrix. We show that the parameter space of the model, although very constrained by CP4, is large enough to comply with the electroweak precision data and the LHC results for the 125 GeV Higgs boson phenomenology, as well as to perfectly reproduce all fermion masses, mixing, and CP violation. Despite the presence of flavor-changing neutral currents mediated by heavy Higgs scalars, we find through a parameter space scan many points which accurately reproduce the kaon CP-violating parameter ε_K as well as oscillation parameters in the K and B(s) meson systems. Thus, CP4 offers a novel minimalistic framework for building models with very few assumptions, sufficient predictive power, and rich phenomenology yet to be explored.
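For readers unfamiliar with CP4, one common representation of an order-4 generalized CP transformation on three doublets (used in earlier CP4 3HDM work; the paper's basis conventions may differ) is:

```latex
% Generalized CP of order 4: \Phi_a \mapsto X_{ab}\,\Phi_b^*
X \;=\;
\begin{pmatrix}
 1 & 0 & 0\\
 0 & 0 & 1\\
 0 & -1 & 0
\end{pmatrix},
\qquad
(CP4)^2 : \Phi \mapsto X X^{*} \Phi = \mathrm{diag}(1,-1,-1)\,\Phi,
\qquad
(CP4)^4 = \mathbb{1}.
```

Because (CP4)² is a nontrivial Higgs-family transformation rather than the identity, the symmetry is genuinely of order 4, which is what so strongly constrains both the potential and the admissible Yukawa textures.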
Pan-London tuberculosis services: a service evaluation
2012-01-01
Background London has the largest proportion of tuberculosis (TB) cases of any western European capital, with almost half of new cases drug-resistant. Prevalence varies considerably between and within boroughs with research suggesting inadequate control of TB transmission in London. Economic pressures may exacerbate the already considerable challenges for service organisation and delivery within this context. This paper presents selected findings from an evaluation of London’s TB services’ organisation, delivery, professional workforce and skill mix, intended to support development of a strategic framework for a pan-London TB service. These may also interest health service professionals and managers in TB services in the UK, other European cities and countries and in services currently delivered by multiple providers operating independently. Methods Objectives were: 1) To establish how London’s TB services are structured and delivered in relation to leadership, management, organisation and delivery, coordination, staffing and support; 2) To identify tools/models for calculating skill mix as a basis for identifying skill mix requirements in delivering TB services across London; 3) To inform a strategic framework for the delivery of a pan-London TB service, which may be applicable to other European cities. The multi-method service audit evaluation comprised documentary analysis, semi-structured interviews with TB service users (n = 10), lead TB health professionals and managers (n = 13) representing London’s five sectors and focus groups with TB nurses (n = 8) and non-London network professionals (n = 2). Results Findings showed TB services to be mainly hospital-based, with fewer community-based services. Documentary analysis and professionals’ interviews suggested difficulties with early access to services, low suspicion index amongst some GPs and restricted referral routes. Interviews indicated lack of managed accommodation for difficult to treat patients, professional workforce shortages, a need for strategic leadership, nurse-led clinics and structured career paths for TB nurses and few social care/outreach workers to support patients with complex needs. Conclusions This paper has identified key issues relating to London’s TB services’ organisation, delivery, professional workforce and skill mix. The majority of these present challenges which need to be addressed as part of the future development of a strategic framework for a pan-London TB service. More consistent strategic planning/co-ordination and sharing of best practice is needed, together with a review of pan-London TB workforce development strategy, encompassing changing professional roles, skills development needs and patient pathways. These findings may be relevant with the development of TB services in other European cities. PMID:22805234