NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Blower, Sally; Go, Myong-Hyun
2011-07-19
Mathematical models are useful tools for understanding and predicting epidemics. A recent innovative modeling study by Stehle and colleagues addressed the issue of how complex models need to be to ensure accuracy. The authors collected data on face-to-face contacts during a two-day conference. They then constructed a series of dynamic social contact networks, each of which was used to model an epidemic generated by a fast-spreading airborne pathogen. Intriguingly, Stehle and colleagues found that increasing model complexity did not always increase accuracy. Specifically, the most detailed contact network and a simplified version of this network generated very similar results. These results are extremely interesting and require further exploration to determine their generalizability.
Jia, Xiuqin; Liang, Peipeng; Shi, Lin; Wang, Defeng; Li, Kuncheng
2015-01-01
In neuroimaging studies, increased task complexity can lead to increased activation in task-specific regions or to activation of additional regions. How the brain adapts to increased rule complexity during inductive reasoning remains unclear. In the current study, three types of problems were created: simple rule induction (i.e., SI, with rule complexity of 1), complex rule induction (i.e., CI, with rule complexity of 2), and perceptual control. Our findings revealed that increased activations accompany increased rule complexity in the right dorsal lateral prefrontal cortex (DLPFC) and medial posterior parietal cortex (precuneus). A cognitive model predicted both the behavioral and brain imaging results. The current findings suggest that neural activity in frontal and parietal regions is modulated by rule complexity, which may shed light on the neural mechanisms of inductive reasoning. Copyright © 2014. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Gascuel-Odoux, Chantal; Savenije, Hubert
2014-05-01
Hydrological models are frequently characterized by what is often considered to be adequate calibration performances. In many cases, however, these models experience a substantial uncertainty and performance decrease in validation periods, thus resulting in poor predictive power. Besides the likely presence of data errors, this observation can point towards wrong or insufficient representations of the underlying processes and their heterogeneity. In other words, right results are generated for the wrong reasons. Thus ways are sought to increase model consistency and to thereby satisfy the contrasting priorities of the need a) to increase model complexity and b) to limit model equifinality. In this study a stepwise model development approach is chosen to test the value of an exhaustive and systematic combined use of hydrological signatures, expert knowledge and readily available, yet anecdotal and rarely exploited, hydrological information for increasing model consistency towards generating the right answer for the right reasons. A simple 3-box, 7 parameter, conceptual HBV-type model, constrained by 4 calibration objective functions was able to adequately reproduce the hydrograph with comparatively high values for the 4 objective functions in the 5-year calibration period. However, closer inspection of the results showed a dramatic decrease of model performance in the 5-year validation period. In addition, assessing the model's skill to reproduce a range of 20 hydrological signatures including, amongst others, the flow duration curve, the autocorrelation function and the rising limb density, showed that it could not adequately reproduce the vast majority of these signatures, indicating a lack of model consistency. Subsequently model complexity was increased in a stepwise way to allow for more process heterogeneity. To limit model equifinality, increase in complexity was counter-balanced by a stepwise application of "realism constraints", inferred from expert knowledge (e.g. unsaturated storage capacity of hillslopes should exceed the one of wetlands) and anecdotal hydrological information (e.g. long-term estimates of actual evaporation obtained from the Budyko framework and long-term estimates of baseflow contribution) to ensure that the model is well behaved with respect to the modeller's perception of the system. A total of 11 model set-ups with increased complexity and an increased number of realism constraints was tested. It could be shown that in spite of largely unchanged calibration performance, compared to the simplest set-up, the most complex model set-up (12 parameters, 8 constraints) exhibited significantly increased performance in the validation period while uncertainty did not increase. In addition, the most complex model was characterized by a substantially increased skill to reproduce all 20 signatures, indicating a more suitable representation of the system. The results suggest that a model, "well" constrained by 4 calibration objective functions may still be an inadequate representation of the system and that increasing model complexity, if counter-balanced by realism constraints, can indeed increase predictive performance of a model and its skill to reproduce a range of hydrological signatures, but that it does not necessarily result in increased uncertainty. The results also strongly illustrate the need to move away from automated model calibration towards a more general expert-knowledge driven strategy of constraining models if a certain level of model consistency is to be achieved.
On the dangers of model complexity without ecological justification in species distribution modeling
David M. Bell; Daniel R. Schlaepfer
2016-01-01
Although biogeographic patterns are the product of complex ecological processes, the increasing com-plexity of correlative species distribution models (SDMs) is not always motivated by ecological theory,but by model fit. The validity of model projections, such as shifts in a speciesâ climatic niche, becomesquestionable particularly during extrapolations, such as for...
Rise and fall of political complexity in island South-East Asia and the Pacific.
Currie, Thomas E; Greenhill, Simon J; Gray, Russell D; Hasegawa, Toshikazu; Mace, Ruth
2010-10-14
There is disagreement about whether human political evolution has proceeded through a sequence of incremental increases in complexity, or whether larger, non-sequential increases have occurred. The extent to which societies have decreased in complexity is also unclear. These debates have continued largely in the absence of rigorous, quantitative tests. We evaluated six competing models of political evolution in Austronesian-speaking societies using phylogenetic methods. Here we show that in the best-fitting model political complexity rises and falls in a sequence of small steps. This is closely followed by another model in which increases are sequential but decreases can be either sequential or in bigger drops. The results indicate that large, non-sequential jumps in political complexity have not occurred during the evolutionary history of these societies. This suggests that, despite the numerous contingent pathways of human history, there are regularities in cultural evolution that can be detected using computational phylogenetic methods.
NASA Astrophysics Data System (ADS)
Salman Shahid, Syed; Bikson, Marom; Salman, Humaira; Wen, Peng; Ahfock, Tony
2014-06-01
Objectives. Computational methods are increasingly used to optimize transcranial direct current stimulation (tDCS) dose strategies and yet complexities of existing approaches limit their clinical access. Since predictive modelling indicates the relevance of subject/pathology based data and hence the need for subject specific modelling, the incremental clinical value of increasingly complex modelling methods must be balanced against the computational and clinical time and costs. For example, the incorporation of multiple tissue layers and measured diffusion tensor (DTI) based conductivity estimates increase model precision but at the cost of clinical and computational resources. Costs related to such complexities aggregate when considering individual optimization and the myriad of potential montages. Here, rather than considering if additional details change current-flow prediction, we consider when added complexities influence clinical decisions. Approach. Towards developing quantitative and qualitative metrics of value/cost associated with computational model complexity, we considered field distributions generated by two 4 × 1 high-definition montages (m1 = 4 × 1 HD montage with anode at C3 and m2 = 4 × 1 HD montage with anode at C1) and a single conventional (m3 = C3-Fp2) tDCS electrode montage. We evaluated statistical methods, including residual error (RE) and relative difference measure (RDM), to consider the clinical impact and utility of increased complexities, namely the influence of skull, muscle and brain anisotropic conductivities in a volume conductor model. Main results. Anisotropy modulated current-flow in a montage and region dependent manner. However, significant statistical changes, produced within montage by anisotropy, did not change qualitative peak and topographic comparisons across montages. Thus for the examples analysed, clinical decision on which dose to select would not be altered by the omission of anisotropic brain conductivity. Significance. Results illustrate the need to rationally balance the role of model complexity, such as anisotropy in detailed current flow analysis versus value in clinical dose design. However, when extending our analysis to include axonal polarization, the results provide presumably clinically meaningful information. Hence the importance of model complexity may be more relevant with cellular level predictions of neuromodulation.
Modeling OPC complexity for design for manufacturability
NASA Astrophysics Data System (ADS)
Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong
2005-11-01
Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help in preserving feature fidelity in silicon but increase mask complexity and cost. Data volume increase with rise in mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RET increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax OPC tolerances of layout features to minimize mask cost, without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of impact of OPC correction levels on mask cost and performance of the design. This work introduces mask cost characterization (MCC) that quantifies OPC complexity, measured in terms of fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present a MCC methodology that provides models of fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data from OPC and mask data preparation runs, we build models of FC as function of OPC tolerances and layout parameters.
NASA Astrophysics Data System (ADS)
Germer, S.; Bens, O.; Hüttl, R. F.
2008-12-01
The scepticism of non-scientific local stakeholders about results from complex physical based models is a major problem concerning the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black-box models, as it is impossible to gasp all underlying assumptions and mathematically formulated processes at a glance. The use of physical based models is, however, indispensible to study complex underlying processes and to predict future environmental changes. The increase of climate change adaptation efforts following the release of the latest IPCC report indicates that the communication of facts about what has already changed is an appropriate tool to trigger climate change adaptation. Therefore we suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier to comprehend for non-scientific stakeholders. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified and the dependence of trends and seasonal patters to land use, topography and soil type can be highlighted. A discussion about lag times between the hydrological parameters can increase the awareness of local stakeholders for delayed environment responses.
Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models
NASA Astrophysics Data System (ADS)
Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.
2007-01-01
Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention has been, and is generally, paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and analysis of error distributions.
ERIC Educational Resources Information Center
Northrup, Jessie Bolz
2017-01-01
The present article proposes a new developmental model of how young infants adapt and respond to complex contingencies in their environment, and how this influences development. The model proposes that typically developing infants adjust to an increasingly complex environment in ways that make it easier for them to allocate limited attentional…
Modelling the evolution of complex conductivity during calcite precipitation on glass beads
NASA Astrophysics Data System (ADS)
Leroy, Philippe; Li, Shuai; Jougnot, Damien; Revil, André; Wu, Yuxin
2017-04-01
When pH and alkalinity increase, calcite frequently precipitates and hence modifies the petrophysical properties of porous media. The complex conductivity method can be used to directly monitor calcite precipitation in porous media because it is sensitive to the evolution of the mineralogy, pore structure and its connectivity. We have developed a mechanistic grain polarization model considering the electrochemical polarization of the Stern and diffuse layers surrounding calcite particles. Our complex conductivity model depends on the surface charge density of the Stern layer and on the electrical potential at the onset of the diffuse layer, which are computed using a basic Stern model of the calcite/water interface. The complex conductivity measurements of Wu et al. on a column packed with glass beads where calcite precipitation occurs are reproduced by our surface complexation and complex conductivity models. The evolution of the size and shape of calcite particles during the calcite precipitation experiment is estimated by our complex conductivity model. At the early stage of the calcite precipitation experiment, modelled particles sizes increase and calcite particles flatten with time because calcite crystals nucleate at the surface of glass beads and grow into larger calcite grains. At the later stage of the calcite precipitation experiment, modelled sizes and cementation exponents of calcite particles decrease with time because large calcite grains aggregate over multiple glass beads and only small calcite crystals polarize.
Hydrological model parameter dimensionality is a weak measure of prediction uncertainty
NASA Astrophysics Data System (ADS)
Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.
2015-04-01
This paper shows that instability of hydrological system representation in response to different pieces of information and associated prediction uncertainty is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step by step manner. This is done measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of the two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span across the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA for certain parameter ranges. We therefore contend that magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (number of parameters) does and that parameter dimensionality is an incomplete indicator of stability of hydrological model selection and prediction problems.
[Analysis of a three-dimensional finite element model of atlas and axis complex fracture].
Tang, X M; Liu, C; Huang, K; Zhu, G T; Sun, H L; Dai, J; Tian, J W
2018-05-22
Objective: To explored the clinical application of the three-dimensional finite element model of atlantoaxial complex fracture. Methods: A three-dimensional finite element model of cervical spine (FEM/intact) was established by software of Abaqus6.12.On the basis of this model, a three-dimensional finite element model of four types of atlantoaxial complex fracture was established: C(1) fracture (Jefferson)+ C(2) fracture (type Ⅱfracture), Jefferson+ C(2) fracture(type Ⅲfracture), Jefferson+ C(2) fracture(Hangman), Jefferson+ stable C(2) fracture (FEM/fracture). The range of motion under flexion, extension, lateral bending and axial rotation were measured and compared with the model of cervical spine. Results: The three-dimensional finite element model of four types of atlantoaxial complex fracture had the same similarity and profile.The range of motion (ROM) of different segments had different changes.Compared with those in the normal model, the ROM of C(0/1) and C(1/2) in C(1) combined Ⅱ odontoid fracture model in flexion/extension, lateral bending and rotation increased by 57.45%, 29.34%, 48.09% and 95.49%, 88.52%, 36.71%, respectively.The ROM of C(0/1) and C(1/2) in C(1) combined Ⅲodontoid fracture model in flexion/extension, lateral bending and rotation increased by 47.01%, 27.30%, 45.31% and 90.38%, 27.30%, 30.0%.The ROM of C(0/1) and C(1/2) in C(1) combined Hangman fracture model in flexion/extension, lateral bending and rotation increased by 32.68%, 79.34%, 77.62% and 60.53%, 81.20%, 21.48%, respectively.The ROM of C(0/1) and C(1/2) in C(1) combined axis fracture model in flexion/extension, lateral bending and rotation increased by 15.00%, 29.30%, 8.47% and 37.87%, 75.57%, 8.30%, respectively. Conclusions: The three-dimensional finite element model can be used to simulate the biomechanics of atlantoaxial complex fracture.The ROM of atlantoaxial complex fracture is larger than nomal model, which indicates that surgical treatment should be performed.
Elementary Teachers' Selection and Use of Visual Models
ERIC Educational Resources Information Center
Lee, Tammy D.; Jones, M. Gail
2018-01-01
As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service…
Complex systems as lenses on learning and teaching
NASA Astrophysics Data System (ADS)
Hurford, Andrew C.
From metaphors to mathematized models, the complexity sciences are changing the ways disciplines view their worlds, and ideas borrowed from complexity are increasingly being used to structure conversations and guide research on teaching and learning. The purpose of this corpus of research is to further those conversations and to extend complex systems ideas, theories, and modeling to curricula and to research on learning and teaching. A review of the literatures of learning and of complexity science and a discussion of the intersections between those disciplines are provided. The work reported represents an evolving model of learning qua complex system and that evolution is the result of iterative cycles of design research. One of the signatures of complex systems is the presence of scale invariance and this line of research furnishes empirical evidence of scale invariant behaviors in the activity of learners engaged in participatory simulations. The offered discussion of possible causes for these behaviors and chaotic phase transitions in human learning favors real-time optimization of decision-making as the means for producing such behaviors. Beyond theoretical development and modeling, this work includes the development of teaching activities intended to introduce pre-service mathematics and science teachers to complex systems. While some of the learning goals for this activity focused on the introduction of complex systems as a content area, we also used complex systems to frame perspectives on learning. Results of scoring rubrics and interview responses from students illustrate attributes of the proposed model of complex systems learning and also how these pre-service teachers made sense of the ideas. Correlations between established theories of learning and a complex adaptive systems model of learning are established and made explicit, and a means for using complex systems ideas for designing instruction is offered. It is a fundamental assumption of this research and researcher that complex systems ideas and understandings can be appropriated from more complexity-developed disciplines and put to use modeling and building increasingly productive understandings of learning and teaching.
Lightweight approach to model traceability in a CASE tool
NASA Astrophysics Data System (ADS)
Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita
2017-07-01
A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.
Investigation of model-based physical design restrictions (Invited Paper)
NASA Astrophysics Data System (ADS)
Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl
2005-05-01
As lithography and other patterning processes become more complex and more non-linear with each generation, the task of physical design rules necessarily increases in complexity also. The goal of the physical design rules is to define the boundary between the physical layout structures which will yield well from those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However the rapid increase in design rule requirement complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need and applications for model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies for semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low K1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally we summarize with a proposed flow and key considerations for MBPDR implementation.
Balancing model complexity and measurements in hydrology
NASA Astrophysics Data System (ADS)
Van De Giesen, N.; Schoups, G.; Weijs, S. V.
2012-12-01
The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model outcomes, thereby preventing the most obvious results of over-fitting. Thirdly, dependence within and between time series poses an additional analytical problem. Finally, there are arguments to be made that the often discussed "equifinality" in hydrological models is simply a different manifestation of the lack of complexity control. In turn, this points toward a general idea, which is actually quite popular in sciences other than hydrology, that additional data gathering is a good way to increase the information content of our descriptions of hydrological reality.
Simulating evolution of protein complexes through gene duplication and co-option.
Haarsma, Loren; Nelesen, Serita; VanAndel, Ethan; Lamine, James; VandeHaar, Peter
2016-06-21
We present a model of the evolution of protein complexes with novel functions through gene duplication, mutation, and co-option. Under a wide variety of input parameters, digital organisms evolve complexes of 2-5 bound proteins which have novel functions but whose component proteins are not independently functional. Evolution of complexes with novel functions happens more quickly as gene duplication rates increase, point mutation rates increase, protein complex functional probability increases, protein complex functional strength increases, and protein family size decreases. Evolution of complexity is inhibited when the metabolic costs of making proteins exceeds the fitness gain of having functional proteins, or when point mutation rates get so large the functional proteins undergo deleterious mutations faster than new functional complexes can evolve. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Luo, Yan; Zhang, Lifeng; Li, Ming; Sridhar, Seetharaman
2018-06-01
A complex nitride of Al x Mg(1- x)N was observed in silicon steels. A thermodynamic model was developed to predict the ferrite/nitride equilibrium in the Fe-Al-Mg-N alloy system, using published binary solubility products for stoichiometric phases. The model was used to estimate the solubility product of nitride compound, equilibrium ferrite, and nitride compositions, and the amounts of each phase, as a function of steel composition and temperature. In the current model, the molar ratio Al/(Al + Mg) in the complex nitride was great due to the low dissolved magnesium in steel. For a steel containing 0.52 wt pct Als, 10 ppm T.Mg., and 20 ppm T.N. at 1100 K (827 °C), the complex nitride was expressed by Al0.99496Mg0.00504N and the solubility product of this complex nitride was 2.95 × 10-7. In addition, the solution temperature of the complex nitride increased with increasing the nitrogen and aluminum in steel. The good agreement between the prediction and the detected precipitate compositions validated the current model.
Epidemic modeling in complex realities.
Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro
2007-04-01
In our global world, the increasing complexity of social relations and transport infrastructures are key factors in the spread of epidemics. In recent years, the increasing availability of computer power has enabled both to obtain reliable data allowing one to quantify the complexity of the networks on which epidemics may propagate and to envision computational tools able to tackle the analysis of such propagation phenomena. These advances have put in evidence the limits of homogeneous assumptions and simple spatial diffusion approaches, and stimulated the inclusion of complex features and heterogeneities relevant in the description of epidemic diffusion. In this paper, we review recent progresses that integrate complex systems and networks analysis with epidemic modelling and focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.
Roberts, Susan L.; Van Wagtendonk, Jan W.; Miles, A. Keith; Kelt, Douglas A.; Lutz, James A.
2008-01-01
We evaluated the impact of fire severity and related spatial and vegetative parameters on small mammal populations in 2 yr- to 15 yr-old burns in Yosemite National Park, California, USA. We also developed habitat models that would predict small mammal responses to fires of differing severity. We hypothesized that fire severity would influence the abundances of small mammals through changes in vegetation composition, structure, and spatial habitat complexity. Deer mouse (Peromyscus maniculatus) abundance responded negatively to fire severity, and brush mouse (P. boylii) abundance increased with increasing oak tree (Quercus spp.) cover. Chipmunk (Neotamias spp.) abundance was best predicted through a combination of a negative response to oak tree cover and a positive response to spatial habitat complexity. California ground squirrel (Spermophilus beecheyi) abundance increased with increasing spatial habitat complexity. Our results suggest that fire severity, with subsequent changes in vegetation structure and habitat spatial complexity, can influence small mammal abundance patterns.
High Selection Pressure Promotes Increase in Cumulative Adaptive Culture
Vegvari, Carolin; Foley, Robert A.
2014-01-01
The evolution of cumulative adaptive culture has received widespread interest in recent years, especially the factors promoting its occurrence. Current evolutionary models suggest that an increase in population size may lead to an increase in cultural complexity via a higher rate of cultural transmission and innovation. However, relatively little attention has been paid to the role of natural selection in the evolution of cultural complexity. Here we use an agent-based simulation model to demonstrate that high selection pressure in the form of resource pressure promotes the accumulation of adaptive culture in spite of small population sizes and high innovation costs. We argue that the interaction of demography and selection is important, and that neither can be considered in isolation. We predict that an increase in cultural complexity is most likely to occur under conditions of population pressure relative to resource availability. Our model may help to explain why culture change can occur without major environmental change. We suggest that understanding the interaction between shifting selective pressures and demography is essential for explaining the evolution of cultural complexity. PMID:24489724
Acceleration techniques for dependability simulation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Barnette, James David
1995-01-01
As computer systems increase in complexity, the need to project system performance from the earliest design and development stages increases. We have to employ simulation for detailed dependability studies of large systems. However, as the complexity of the simulation model increases, the time required to obtain statistically significant results also increases. This paper discusses an approach that is application independent and can be readily applied to any process-based simulation model. Topics include background on classical discrete event simulation and techniques for random variate generation and statistics gathering to support simulation.
Clarity versus complexity: land-use modeling as a practical tool for decision-makers
Sohl, Terry L.; Claggett, Peter
2013-01-01
The last decade has seen a remarkable increase in the number of modeling tools available to examine future land-use and land-cover (LULC) change. Integrated modeling frameworks, agent-based models, cellular automata approaches, and other modeling techniques have substantially improved the representation of complex LULC systems, with each method using a different strategy to address complexity. However, despite the development of new and better modeling tools, the use of these tools is limited for actual planning, decision-making, or policy-making purposes. LULC modelers have become very adept at creating tools for modeling LULC change, but complicated models and lack of transparency limit their utility for decision-makers. The complicated nature of many LULC models also makes it impractical or even impossible to perform a rigorous analysis of modeling uncertainty. This paper provides a review of land-cover modeling approaches and the issues causes by the complicated nature of models, and provides suggestions to facilitate the increased use of LULC models by decision-makers and other stakeholders. The utility of LULC models themselves can be improved by 1) providing model code and documentation, 2) through the use of scenario frameworks to frame overall uncertainties, 3) improving methods for generalizing key LULC processes most important to stakeholders, and 4) adopting more rigorous standards for validating models and quantifying uncertainty. Communication with decision-makers and other stakeholders can be improved by increasing stakeholder participation in all stages of the modeling process, increasing the transparency of model structure and uncertainties, and developing user-friendly decision-support systems to bridge the link between LULC science and policy. By considering these options, LULC science will be better positioned to support decision-makers and increase real-world application of LULC modeling results.
Dense power-law networks and simplicial complexes
NASA Astrophysics Data System (ADS)
Courtney, Owen T.; Bianconi, Ginestra
2018-05-01
There is increasing evidence that dense networks occur in on-line social networks, recommendation networks and in the brain. In addition to being dense, these networks are often also scale-free, i.e., their degree distributions follow P (k ) ∝k-γ with γ ∈(1 ,2 ] . Models of growing networks have been successfully employed to produce scale-free networks using preferential attachment, however these models can only produce sparse networks as the numbers of links and nodes being added at each time step is constant. Here we present a modeling framework which produces networks that are both dense and scale-free. The mechanism by which the networks grow in this model is based on the Pitman-Yor process. Variations on the model are able to produce undirected scale-free networks with exponent γ =2 or directed networks with power-law out-degree distribution with tunable exponent γ ∈(1 ,2 ) . We also extend the model to that of directed two-dimensional simplicial complexes. Simplicial complexes are generalization of networks that can encode the many body interactions between the parts of a complex system and as such are becoming increasingly popular to characterize different data sets ranging from social interacting systems to the brain. Our model produces dense directed simplicial complexes with power-law distribution of the generalized out-degrees of the nodes.
Contrasting model complexity under a changing climate in a headwaters catchment.
NASA Astrophysics Data System (ADS)
Foster, L.; Williams, K. H.; Maxwell, R. M.
2017-12-01
Alpine, snowmelt-dominated catchments are the source of water for more than 1/6th of the world's population. These catchments are topographically complex, leading to steep weather gradients and nonlinear relationships between water and energy fluxes. Recent evidence suggests that alpine systems are more sensitive to climate warming, but these regions are vastly simplified in climate models and operational water management tools due to computational limitations. Simultaneously, point-scale observations are often extrapolated to larger regions where feedbacks can both exacerbate or mitigate locally observed changes. It is critical to determine whether projected climate impacts are robust to different methodologies, including model complexity. Using high performance computing and an integrated model of a representative headwater catchment we determined the hydrologic response from 30 projected climate changes to precipitation, temperature and vegetation for the Rocky Mountains. Simulations were run with 100m and 1km resolution, and with and without lateral subsurface flow in order to vary model complexity. We found that model complexity alters nonlinear relationships between water and energy fluxes. Higher-resolution models predicted larger changes per degree of temperature increase than lower resolution models, suggesting that reductions to snowpack, surface water, and groundwater due to warming may be underestimated in simple models. Increases in temperature were found to have a larger impact on water fluxes and stores than changes in precipitation, corroborating previous research showing that mountain systems are significantly more sensitive to temperature changes than to precipitation changes and that increases in winter precipitation are unlikely to compensate for increased evapotranspiration in a higher energy environment. These numerical experiments help to (1) bracket the range of uncertainty in published literature of climate change impacts on headwater hydrology; (2) characterize the role of precipitation and temperature changes on water supply for snowmelt-dominated downstream basins; and (3) identify which climate impacts depend on the scale of simulation.
NASA Astrophysics Data System (ADS)
Li, Chunguang; Maini, Philip K.
2005-10-01
The Penna bit-string model successfully encompasses many phenomena of population evolution, including inheritance, mutation, evolution, and aging. If we consider social interactions among individuals in the Penna model, the population will form a complex network. In this paper, we first modify the Verhulst factor to control only the birth rate, and introduce activity-based preferential reproduction of offspring in the Penna model. The social interactions among individuals are generated by both inheritance and activity-based preferential increase. Then we study the properties of the complex network generated by the modified Penna model. We find that the resulting complex network has a small-world effect and the assortative mixing property.
Leder, Helmut
2017-01-01
Visual complexity is relevant for many areas ranging from improving usability of technical displays or websites up to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct mainly consisting of two dimensions: A quantitative dimension that increases complexity through number of elements, and a structural dimension representing order negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry giving more influence to small deviations from symmetry greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
Testing the role of metal hydrolysis in the anomalous electrodeposition of Ni-Fe alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, T.M.; St. Clair, J.
1996-12-01
With the objective of testing several models of the anomalous codeposition (ACD) encountered in the electrodeposition of nickel-iron alloys, the effects of bath pH and complexing agents on the composition of deposits were examined. When the pH of the base line bath was increased from 3.0 to 5.0, the Ni/Fe mass ratio of the deposit increased (i.e., the deposition became less anomalous). The presence of tartrate ion in the bath produced a slight decrease in the Ni/Fe of the deposit. This complexing agent complexes ferric ion and thus prevents its precipitation but has little interaction with ferrous ion or nickelmore » ion under the electrodeposition conditions examined. The addition of ethylenediamine to the bath produced a significant increase in the Ni/Fe mass ratio. This complexing agent does not interact significantly with ferric ion or ferrous ion under the test conditions. None of these observations are consistent with the Dahms and Croll model of ACD. The effects of pH and tartaric acid on the deposit composition are consistent with the predictions of the Grande and Talbot model and the Matlosz model. The effect of ethylenediamine is not consistent with the Grande and Talbot model, but may be interpreted within the framework of the Matlosz model and the Hessami and Tobias model.« less
Monpays, Cécile; Deslauriers, Jessica; Sarret, Philippe; Grignon, Sylvain
2016-08-01
Schizophrenia is a chronic mental illness in which mitochondrial dysfunction has been suggested. Our laboratory recently developed a juvenile murine two-hit model (THM) of schizophrenia based on the combination of gestational inflammation, followed by juvenile restraint stress. We previously reported that relevant behaviors and neurochemical disturbances, including oxidative stress, were reversed by the antioxidant lipoic acid (LA), thereby pointing to the central role played by oxidative abnormalities and prompting us to investigate mitochondrial function. Mitochondrial activity was determined with the MitoXpress® commercial kit in two schizophrenia-relevant regions (prefrontal cortex (PFC) and striatum). Measurements were performed in state 3, with substrates for complex I- and complex II-induced respiratory activity (IRA). We observed an increase in complex I IRA in the PFC and striatum in both sexes but an increase in complex II activity only in males. LA treatment prevented this increase only in complex II IRA in males. Expression levels of the different respiratory chain complexes, as well as fission/fusion proteins and protein carbonylation, were unchanged. In conclusion, our juvenile schizophrenia THM shows an increase in mitochondrial activity reversed by LA, specifically in complex II IRA in males. Further investigations are required to determine the mechanisms of these modifications.
Complexity and demographic explanations of cumulative culture.
Querbes, Adrien; Vaesen, Krist; Houkes, Wybo
2014-01-01
Formal models have linked prehistoric and historical instances of technological change (e.g., the Upper Paleolithic transition, cultural loss in Holocene Tasmania, scientific progress since the late nineteenth century) to demographic change. According to these models, cumulation of technological complexity is inhibited by decreasing--while favoured by increasing--population levels. Here we show that these findings are contingent on how complexity is defined: demography plays a much more limited role in sustaining cumulative culture in case formal models deploy Herbert Simon's definition of complexity rather than the particular definitions of complexity hitherto assumed. Given that currently available empirical evidence doesn't afford discriminating proper from improper definitions of complexity, our robustness analyses put into question the force of recent demographic explanations of particular episodes of cultural change.
Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G
2017-08-01
Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation
NASA Astrophysics Data System (ADS)
Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter
2015-04-01
Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possible many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight in the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative based global sensitivity measures (Sobol' & Kucherenko '09) can be practically used to detect non-essential inputs of the models, thus restricting the focus of attention to a possible much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and usually increases the computation complexity linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative based sensitivities, as is the case in various other domains such as meteorology or aerodynamics, without no significant increase in the computation complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
Trends in modeling Biomedical Complex Systems
Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Petro
2009-01-01
In this paper we provide an introduction to the techniques for multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which are becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts on mathematical modeling methodologies and statistical inference, bioinformatics and standards tools to investigate complex biomedical systems are discussed and the prominent literature useful to both the practitioner and the theoretician are presented. PMID:19828068
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, J.; Lacava, W.; Austin, J.
2015-02-01
This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time, but may not provide a commensurate increase in the value of the results. Essential designparameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for the minimum model fidelities are provided.
Hypercompetitive Environments: An Agent-based model approach
NASA Astrophysics Data System (ADS)
Dias, Manuel; Araújo, Tanya
Information technology (IT) environments are characterized by complex changes and rapid evolution. Globalization and the spread of technological innovation have increased the need for new strategic information resources, both from individual firms and management environments. Improvements in multidisciplinary methods and, particularly, the availability of powerful computational tools, are giving researchers an increasing opportunity to investigate management environments in their true complex nature. The adoption of a complex systems approach allows for modeling business strategies from a bottom-up perspective — understood as resulting from repeated and local interaction of economic agents — without disregarding the consequences of the business strategies themselves to individual behavior of enterprises, emergence of interaction patterns between firms and management environments. Agent-based models are at the leading approach of this attempt.
Instrumentational complexity of music genres and why simplicity sells.
Percino, Gamaliel; Klimek, Peter; Thurner, Stefan
2014-01-01
Listening habits are strongly influenced by two opposing aspects, the desire for variety and the demand for uniformity in music. In this work we quantify these two notions in terms of instrumentation and production technologies that are typically involved in crafting popular music. We assign an 'instrumentational complexity value' to each music style. Styles of low instrumentational complexity tend to have generic instrumentations that can also be found in many other styles. Styles of high complexity, on the other hand, are characterized by a large variety of instruments that can only be found in a small number of other styles. To model these results we propose a simple stochastic model that explicitly takes the capabilities of artists into account. We find empirical evidence that individual styles show dramatic changes in their instrumentational complexity over the last fifty years. 'New wave' or 'disco' quickly climbed towards higher complexity in the 70s and fell back to low complexity levels shortly afterwards, whereas styles like 'folk rock' remained at constant high instrumentational complexity levels. We show that changes in the instrumentational complexity of a style are related to its number of sales and to the number of artists contributing to that style. As a style attracts a growing number of artists, its instrumentational variety usually increases. At the same time the instrumentational uniformity of a style decreases, i.e. a unique stylistic and increasingly complex expression pattern emerges. In contrast, album sales of a given style typically increase with decreasing instrumentational complexity. This can be interpreted as music becoming increasingly formulaic in terms of instrumentation once commercial or mainstream success sets in.
NASA Astrophysics Data System (ADS)
Molnar, I. L.; Krol, M.; Mumford, K. G.
2016-12-01
Geoenvironmental models are becoming increasingly sophisticated as they incorporate rising numbers of mechanisms and process couplings to describe environmental scenarios. When combined with advances in computing and numerical techniques, these already complicated models are experiencing large increases in code complexity and simulation time. Although, this complexity has enabled breakthroughs in the ability to describe environmental problems, it is difficult to ensure that complex models are sufficiently robust and behave as intended. Many development tools used for testing software robustness have not seen widespread use in geoenvironmental sciences despite an increasing reliance on complex numerical models, leaving many models at risk of undiscovered errors and potentially improper validations. This study explores the use of unit testing, which independently examines small code elements to ensure each unit is working as intended as well as their integrated behaviour, to test the functionality and robustness of a coupled Electrical Resistive Heating (ERH) - Macroscopic Invasion Percolation (MIP) model. ERH is a thermal remediation technique where the soil is heated until boiling and volatile contaminants are stripped from the soil. There is significant interest in improving the efficiency of ERH, including taking advantage of low-temperature co-boiling behaviour which may reduce energy consumption. However, at lower co-boiling temperatures gas bubbles can form, mobilize and collapse in cooler areas, potentially contaminating previously clean zones. The ERH-MIP model was created to simulate the behaviour of gas bubbles in the subsurface and to evaluate ERH during co-boiling1. This study demonstrates how unit testing ensures that the model behaves in an expected manner and examines the robustness of every component within the ERH-MIP model. Once unit testing is established, the MIP module (a discrete gas transport algorithm for gas expansion, mobilization and fragmentation2) was validated against a two-dimensional light transmission visualization experiment 3. 1. Krol, M. M., et al. (2011), Adv. Water Resour. 2011, 34 (4), 537-549. 2. Mumford, K. G., et al. (2010), Adv. Water Resour. 2010, 33 (4), 504-513. 3. Hegele, P. R. and Mumford, K. G. Journal of Contaminant Hydrology 2014, 165, 24-36.
Intrinsic dimensionality predicts the saliency of natural dynamic scenes.
Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt
2012-06-01
Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
The accuracy-based model performance metrics not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use the information theory-based metrics to see whether they can be used as complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight model of different complexity were applied. The information-theory based metrics were obtained after representing the time series as strings of symbols where different symbols corresponded to different quantiles of the probability distribution of streamflow. The symbol alphabet was used. Three metrics were computed for those strings - mean information gain that measures the randomness of the signal, effective measure complexity that characterizes predictability and fluctuation complexity that characterizes the presence of a pattern in the signal. The observed streamflow time series has smaller information content and larger complexity metrics than the precipitation time series. Watersheds served as information filters and and streamflow time series were less random and more complex than the ones of precipitation. This is reflected the fact that the watershed acts as the information filter in the hydrologic conversion process from precipitation to streamflow. The Nash Sutcliffe efficiency metric increased as the complexity of models increased, but in many cases several model had this efficiency values not statistically significant from each other. In such cases, ranking models by the closeness of the information-theory based parameters in simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
Reliable low precision simulations in land surface models
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.
2017-12-01
Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
Genotypic Complexity of Fisher’s Geometric Model
Hwang, Sungmin; Park, Su-Chan; Krug, Joachim
2017-01-01
Fisher’s geometric model was originally introduced to argue that complex adaptations must occur in small steps because of pleiotropic constraints. When supplemented with the assumption of additivity of mutational effects on phenotypic traits, it provides a simple mechanism for the emergence of genotypic epistasis from the nonlinear mapping of phenotypes to fitness. Of particular interest is the occurrence of reciprocal sign epistasis, which is a necessary condition for multipeaked genotypic fitness landscapes. Here we compute the probability that a pair of randomly chosen mutations interacts sign epistatically, which is found to decrease with increasing phenotypic dimension n, and varies nonmonotonically with the distance from the phenotypic optimum. We then derive expressions for the mean number of fitness maxima in genotypic landscapes comprised of all combinations of L random mutations. This number increases exponentially with L, and the corresponding growth rate is used as a measure of the complexity of the landscape. The dependence of the complexity on the model parameters is found to be surprisingly rich, and three distinct phases characterized by different landscape structures are identified. Our analysis shows that the phenotypic dimension, which is often referred to as phenotypic complexity, does not generally correlate with the complexity of fitness landscapes and that even organisms with a single phenotypic trait can have complex landscapes. Our results further inform the interpretation of experiments where the parameters of Fisher’s model have been inferred from data, and help to elucidate which features of empirical fitness landscapes can be described by this model. PMID:28450460
EMILiO: a fast algorithm for genome-scale strain design.
Yang, Laurence; Cluett, William R; Mahadevan, Radhakrishnan
2011-05-01
Systems-level design of cell metabolism is becoming increasingly important for renewable production of fuels, chemicals, and drugs. Computational models are improving in the accuracy and scope of predictions, but are also growing in complexity. Consequently, efficient and scalable algorithms are increasingly important for strain design. Previous algorithms helped to consolidate the utility of computational modeling in this field. To meet intensifying demands for high-performance strains, both the number and variety of genetic manipulations involved in strain construction are increasing. Existing algorithms have experienced combinatorial increases in computational complexity when applied toward the design of such complex strains. Here, we present EMILiO, a new algorithm that increases the scope of strain design to include reactions with individually optimized fluxes. Unlike existing approaches that would experience an explosion in complexity to solve this problem, we efficiently generated numerous alternate strain designs producing succinate, l-glutamate and l-serine. This was enabled by successive linear programming, a technique new to the area of computational strain design. Copyright © 2011 Elsevier Inc. All rights reserved.
Dahlin, Joakim S.; Ivarsson, Martin A.; Heyman, Birgitta; Hallgren, Jenny
2011-01-01
Mast cell numbers and allergen specific IgE are increased in the lungs of patients with allergic asthma and this can be reproduced in mouse models. The increased number of mast cells is likely due to recruitment of mast cell progenitors that mature in situ. We hypothesized that formation of IgE immune complexes in the lungs of sensitized mice increase the migration of mast cell progenitors to this organ. To study this, a model of allergic airway inflammation where mice were immunized with ovalbumin (OVA) in alum twice followed by three daily intranasal challenges of either OVA coupled to trinitrophenyl (TNP) alone or as immune complexes with IgE-anti-TNP, was used. Mast cell progenitors were quantified by a limiting dilution assay. IgE immune complex challenge of sensitized mice elicited three times more mast cell progenitors per lung than challenge with the same dose of antigen alone. This dose of antigen challenge alone did not increase the levels of mast cell progenitors compared to unchallenged mice. IgE immune complex challenge of sensitized mice also enhanced the frequency of mast cell progenitors per 106 mononuclear cells by 2.1-fold. The enhancement of lung mast cell progenitors by IgE immune complex challenge was lost in FcRγ deficient mice but not in CD23 deficient mice. Our data show that IgE immune complex challenge enhances the number of mast cell progenitors in the lung through activation of an Fc receptor associated with the FcRγ chain. This most likely takes place via activation of FcεRI, although activation via FcγRIV or a combination of the two receptors cannot be excluded. IgE immune complex-mediated enhancement of lung MCp numbers is a new reason to target IgE in therapies against allergic asthma. PMID:21625525
Food-web complexity emerging from ecological dynamics on adaptive networks.
Garcia-Domingo, Josep L; Saldaña, Joan
2007-08-21
Food webs are complex networks describing trophic interactions in ecological communities. Since Robert May's seminal work on random structured food webs, the complexity-stability debate is a central issue in ecology: does network complexity increase or decrease food-web persistence? A multi-species predator-prey model incorporating adaptive predation shows that the action of ecological dynamics on the topology of a food web (whose initial configuration is generated either by the cascade model or by the niche model) render, when a significant fraction of adaptive predators is present, similar hyperbolic complexity-persistence relationships as those observed in empirical food webs. It is also shown that the apparent positive relation between complexity and persistence in food webs generated under the cascade model, which has been pointed out in previous papers, disappears when the final connection is used instead of the initial one to explain species persistence.
Lucea, Marguerite B; Hindin, Michelle J; Kub, Joan; Campbell, Jacquelyn C
2012-01-01
A person's ability to minimize HIV risk is embedded in a complex, multidimensional context. In this study, we tested a model of how relationship power impacts IPV victimization, which in turn impacts HIV risk behaviors. We analyzed data from 474 young adult women (aged 15-31) in Cebu Province, Philippines, using structural equation modeling, and demonstrated good fit for the models. High relationship power is directly associated with increased IPV victimization, and IPV victimization is positively associated with increased HIV risk. We highlight in this article the complex dynamics to consider in HIV risk prevention among these young women.
LUCEA, MARGUERITE B.; HINDIN, MICHELLE J.; KUB, JOAN; CAMPBELL, JACQUELYN C.
2012-01-01
A person’s ability to minimize HIV risk is embedded in a complex, multidimensional context. In this study, we tested a model of how relationship power impacts IPV victimization, which in turn impacts HIV risk behaviors. We analyzed data from 474 young adult women (aged 15–31) in Cebu Province, Philippines, using structural equation modeling, and demonstrated good fit for the models. High relationship power is directly associated with increased IPV victimization, and IPV victimization is positively associated with increased HIV risk. We highlight in this article the complex dynamics to consider in HIV risk prevention among these young women. PMID:22420674
Bustamante, Carlos D.; Valero-Cuevas, Francisco J.
2010-01-01
The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility and, what to our knowledge is, the first use of an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a “truth model” of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian “measurement noise.” Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
NASA Astrophysics Data System (ADS)
Marconi, S.; Collalti, A.; Santini, M.; Valentini, R.
2013-12-01
3D-CMCC-Forest Ecosystem Model is a process based model formerly developed for complex forest ecosystems to estimate growth, water and carbon cycles, phenology and competition processes on a daily/monthly time scale. The Model integrates some characteristics of the functional-structural tree models with the robustness of the light use efficiency approach. It treats different heights, ages and species as discrete classes, in competition for light (vertical structure) and space (horizontal structure). The present work evaluates the results of the recently developed daily version of 3D-CMCC-FEM for two neighboring different even aged and mono specific study cases. The former is a heterogeneous Pedunculate oak forest (Quercus robur L. ), the latter a more homogeneous Scot pine forest (Pinus sylvestris L.). The multi-layer approach has been evaluated against a series of simplified versions to determine whether the improved model complexity in canopy structure definition increases its predictive ability. Results show that a more complex structure (three height layers) should be preferable to simulate heterogeneous scenarios (Pedunculate oak stand), where heights distribution within the canopy justify the distinction in dominant, dominated and sub-dominated layers. On the contrary, it seems that using a multi-layer approach for more homogeneous stands (Scot pine stand) may be disadvantageous. Forcing the structure of an homogeneous stand to a multi-layer approach may in fact increase sources of uncertainty. On the other hand forcing complex forests to a mono layer simplified model, may cause an increase in mortality and a reduction in average DBH and Height. Compared with measured CO2 flux data, model results show good ability in estimating carbon sequestration trends, on both a monthly/seasonal and daily time scales. Moreover the model simulates quite well leaf phenology and the combined effects of the two different forest stands on CO2 fluxes.
NASA Astrophysics Data System (ADS)
Padhi, S.; Tokunaga, T.
2017-12-01
Adsorption of fluoride (F) on soil can control the mobility of F and subsequent contamination of groundwater. Hence, accurate evaluation of adsorption equilibrium is a prerequisite for understanding transport and fate of F in the subsurface. While there have been studies for the adsorption behavior of F with respect to single mineral constituents based on surface complexation models (SCM), F adsorption to natural soil in the presence of complexing agents needs much investigation. We evaluated the adsorption processes of F on a natural granitic soil from Tsukuba, Japan, as a function of initial F concentration, ionic strength, and initial pH. A SCM was developed to model F adsorption behavior. Four possible surface complexation reactions were postulated with and without including dissolved aluminum (Al) and Al-F complex sorption. Decrease in F adsorption with the increase in initial pH was observed in between the initial pH range of 4 to 9, and a decrease in the rate of the reduction of adsorbed F with respect to the increase in the initial pH was observed in the initial pH range of 5 to 7. Ionic strength variation in the range of 0 to 100mM had insignificant effect on F removal. Changes in solution pH were observed by comparing the solution before and after F adsorption experiments. At acidic pH, the solution pH increased, whereas at alkaline pH, the solution pH decreased after equilibrium. The SCM including dissolved Al and the adsorption of Al-F complex can simulate the experimental results quite successfully. Also, including dissolved Al and the adsorption of Al-F complex to the model explained the change in solution pH after F adsorption.
Governing Education in a Complex World. Educational Research and Innovation
ERIC Educational Resources Information Center
Burns, Tracey, Ed.; Köster, Florian, Ed.
2016-01-01
What models of governance are effective in complex education systems? In all systems an increasing number of stakeholders are involved in designing, delivering, and monitoring education. Like our societies, education systems are increasingly diverse regarding students, teachers, and communities, as well as the values and identities we expect…
Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias
2011-01-01
Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI which is in contrast to our results using complex physiological models. Thus, with regards to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase as well as the HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster than real time multiscale cardiac simulations on these systems using hybrid programming models.
Modeling and complexity of stochastic interacting Lévy type financial price dynamics
NASA Astrophysics Data System (ADS)
Wang, Yiduan; Zheng, Shenzhou; Zhang, Wei; Wang, Jun; Wang, Guochao
2018-06-01
In attempt to reproduce and investigate nonlinear dynamics of security markets, a novel nonlinear random interacting price dynamics, which is considered as a Lévy type process, is developed and investigated by the combination of lattice oriented percolation and Potts dynamics, which concerns with the instinctive random fluctuation and the fluctuation caused by the spread of the investors' trading attitudes, respectively. To better understand the fluctuation complexity properties of the proposed model, the complexity analyses of random logarithmic price return and corresponding volatility series are preformed, including power-law distribution, Lempel-Ziv complexity and fractional sample entropy. In order to verify the rationality of the proposed model, the corresponding studies of actual security market datasets are also implemented for comparison. The empirical results reveal that this financial price model can reproduce some important complexity features of actual security markets to some extent. The complexity of returns decreases with the increase of parameters γ1 and β respectively, furthermore, the volatility series exhibit lower complexity than the return series
Instrumentational Complexity of Music Genres and Why Simplicity Sells
Percino, Gamaliel; Klimek, Peter; Thurner, Stefan
2014-01-01
Listening habits are strongly influenced by two opposing aspects, the desire for variety and the demand for uniformity in music. In this work we quantify these two notions in terms of instrumentation and production technologies that are typically involved in crafting popular music. We assign an ‘instrumentational complexity value’ to each music style. Styles of low instrumentational complexity tend to have generic instrumentations that can also be found in many other styles. Styles of high complexity, on the other hand, are characterized by a large variety of instruments that can only be found in a small number of other styles. To model these results we propose a simple stochastic model that explicitly takes the capabilities of artists into account. We find empirical evidence that individual styles show dramatic changes in their instrumentational complexity over the last fifty years. ‘New wave’ or ‘disco’ quickly climbed towards higher complexity in the 70s and fell back to low complexity levels shortly afterwards, whereas styles like ‘folk rock’ remained at constant high instrumentational complexity levels. We show that changes in the instrumentational complexity of a style are related to its number of sales and to the number of artists contributing to that style. As a style attracts a growing number of artists, its instrumentational variety usually increases. At the same time the instrumentational uniformity of a style decreases, i.e. a unique stylistic and increasingly complex expression pattern emerges. In contrast, album sales of a given style typically increase with decreasing instrumentational complexity. This can be interpreted as music becoming increasingly formulaic in terms of instrumentation once commercial or mainstream success sets in. PMID:25551631
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
Stock, Kristin; Estrada, Marta F; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph
2016-07-01
Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard cell carcinoma lines: MCF7, LNCaP, NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models.
Stock, Kristin; Estrada, Marta F.; Vidic, Suzana; Gjerde, Kjersti; Rudisch, Albin; Santo, Vítor E.; Barbier, Michaël; Blom, Sami; Arundkar, Sharath C.; Selvam, Irwin; Osswald, Annika; Stein, Yan; Gruenewald, Sylvia; Brito, Catarina; van Weerden, Wytske; Rotter, Varda; Boghaert, Erwin; Oren, Moshe; Sommergruber, Wolfgang; Chong, Yolanda; de Hoogt, Ronald; Graeser, Ralph
2016-01-01
Two-dimensional (2D) cell cultures growing on plastic do not recapitulate the three dimensional (3D) architecture and complexity of human tumors. More representative models are required for drug discovery and validation. Here, 2D culture and 3D mono- and stromal co-culture models of increasing complexity have been established and cross-comparisons made using three standard cell carcinoma lines: MCF7, LNCaP, NCI-H1437. Fluorescence-based growth curves, 3D image analysis, immunohistochemistry and treatment responses showed that end points differed according to cell type, stromal co-culture and culture format. The adaptable methodologies described here should guide the choice of appropriate simple and complex in vitro models. PMID:27364600
Cooperative Support a Model to Increase Minority Participation in Science
ERIC Educational Resources Information Center
Smith, Melvin O.
1978-01-01
A model is described that can be used to increase minority participation in the sciences and involves the cooperation of the business-industrial complex, higher education in the historically Black colleges and the government. (MN)
Systems for Teaching Complex Texts: A Proof-of-Concept Investigation
ERIC Educational Resources Information Center
Fisher, Douglas; Frey, Nancy
2016-01-01
In this article we investigate the systems that need to be in place for students to learn from increasingly complex texts. Our concept, drawn from past research, includes clear learning targets, teacher modeling, collaborative conversations, close reading, small group reading, and wide reading. Using a "proof of concept" model, we follow…
Modeling ultrasound propagation through material of increasing geometrical complexity.
Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen
2018-06-01
Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
A Model of Ethnoviolence and Public Policy on College Campuses.
ERIC Educational Resources Information Center
Tryman, Mfanya D.
1992-01-01
Examines a model and provides possible causal explanations for the increasing number of acts of racial violence, the rise of racism on college campuses, and the attendant implications for public policy. Causes for increased racial violence are complex and can be outlined in the Holistic Model of Ethnoviolence. (JB)
The value of information for woodland management: Updating a state–transition model
Morris, William K.; Runge, Michael C.; Vesk, Peter A.
2017-01-01
Value of information (VOI) analyses reveal the expected benefit of reducing uncertainty to a decision maker. Most ecological VOI analyses have focused on population models rarely addressing more complex community models. We performed a VOI analysis for a complex state–transition model of Box-Ironbark Forest and Woodland management. With three management alternatives (limited harvest/firewood removal (HF), ecological thinning (ET), and no management), managing the system optimally (for 150 yr) with the original information would, on average, increase the amount of forest in a desirable state from 19% to 35% (a 16-percentage point increase). Resolving all uncertainty would, on average, increase the final percentage to 42% (a 19-percentage point increase). However, only resolving the uncertainty for a single parameter was worth almost two-thirds the value of resolving all uncertainty. We found the VOI to depend on the number of management options, increasing as the management flexibility increased. Our analyses show it is more cost-effective to monitor low-density regrowth forest than other states and more cost-effective to experiment with the no-management alternative than the other management alternatives. Importantly, the most cost-effective strategies did not include either the most desired forest states or the least understood management strategy, ET. This implies that managers cannot just rely on intuition to tell them where the most VOI will lie, as critical uncertainties in a complex system are sometimes cryptic.
Modular modelling with Physiome standards
Nickerson, David P.; Nielsen, Poul M. F.; Hunter, Peter J.
2016-01-01
Key points The complexity of computational models is increasing, supported by research in modelling tools and frameworks. But relatively little thought has gone into design principles for complex models.We propose a set of design principles for complex model construction with the Physiome standard modelling protocol CellML.By following the principles, models are generated that are extensible and are themselves suitable for reuse in larger models of increasing complexity.We illustrate these principles with examples including an architectural prototype linking, for the first time, electrophysiology, thermodynamically compliant metabolism, signal transduction, gene regulation and synthetic biology.The design principles complement other Physiome research projects, facilitating the application of virtual experiment protocols and model analysis techniques to assist the modelling community in creating libraries of composable, characterised and simulatable quantitative descriptions of physiology. Abstract The ability to produce and customise complex computational models has great potential to have a positive impact on human health. As the field develops towards whole‐cell models and linking such models in multi‐scale frameworks to encompass tissue, organ, or organism levels, reuse of previous modelling efforts will become increasingly necessary. Any modelling group wishing to reuse existing computational models as modules for their own work faces many challenges in the context of construction, storage, retrieval, documentation and analysis of such modules. Physiome standards, frameworks and tools seek to address several of these challenges, especially for models expressed in the modular protocol CellML. Aside from providing a general ability to produce modules, there has been relatively little research work on architectural principles of CellML models that will enable reuse at larger scales. To complement and support the existing tools and frameworks, we develop a set of principles to address this consideration. The principles are illustrated with examples that couple electrophysiology, signalling, metabolism, gene regulation and synthetic biology, together forming an architectural prototype for whole‐cell modelling (including human intervention) in CellML. Such models illustrate how testable units of quantitative biophysical simulation can be constructed. Finally, future relationships between modular models so constructed and Physiome frameworks and tools are discussed, with particular reference to how such frameworks and tools can in turn be extended to complement and gain more benefit from the results of applying the principles. PMID:27353233
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Banerjee, Debjani; Bellesia, Giovanni; Daigle, Bernie J.; Douglas, Geoffrey; Gu, Mengyuan; Gupta, Anand; Hellander, Stefan; Horuk, Chris; Nath, Dibyendu; Takkar, Aviral; Lötstedt, Per; Petzold, Linda R.
2016-01-01
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources and exchange models via a public model repository. We demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity. PMID:27930676
Stochastic Simulation Service: Bridging the Gap between the Computational Expert and the Biologist
Drawert, Brian; Hellander, Andreas; Bales, Ben; ...
2016-12-08
We present StochSS: Stochastic Simulation as a Service, an integrated development environment for modeling and simulation of both deterministic and discrete stochastic biochemical systems in up to three dimensions. An easy to use graphical user interface enables researchers to quickly develop and simulate a biological model on a desktop or laptop, which can then be expanded to incorporate increasing levels of complexity. StochSS features state-of-the-art simulation engines. As the demand for computational power increases, StochSS can seamlessly scale computing resources in the cloud. In addition, StochSS can be deployed as a multi-user software environment where collaborators share computational resources andmore » exchange models via a public model repository. We also demonstrate the capabilities and ease of use of StochSS with an example of model development and simulation at increasing levels of complexity.« less
Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.
2011-01-01
Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844
Wavelet analysis of head acceleration response under dirac excitation for early oedema detection.
Kostopoulos, V; Loutas, T H; Derdas, C; Douzinas, E
2008-04-01
The present work deals with the application of an innovative in-house developed wavelet-based methodology for the analysis of the acceleration responses of a human head complex model as a simulated diffused oedema progresses. The human head complex has been modeled as a structure consisting of three confocal prolate spheroids, whereas the three defined regions by the system of spheroids, from the outside to the inside, represent the scull, the region of cerebrospinal fluid, and the brain tissue. A Dirac-like pulse has been used to excite the human head complex model and the acceleration response of the system has been calculated and analyzed via the wavelet-based methodology. For the purpose of the present analysis, a wave propagation commercial finite element code, LS-DYNA 3D, has been used. The progressive diffused oedema was modeled via consecutive increases in brain volume accompanied by a decrease in brain density. It was shown that even a small increase in brain volume (at the level of 0.5%) can be identified by the effect it has on the vibration characteristics of the human head complex. More precisely, it was found that for some of the wavelet decomposition levels, the energy content changes monotonically as the brain volume increases, thus providing a useful index of monitoring an oncoming brain oedema before any brain damage appears due to uncontrolled intracranial hypertension. For the purpose of the present work and for the levels of brain volume increase considered in the present analysis, no pressure increase was assumed into the cranial vault and, associatively, no brain compliance variation.
Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U
2017-11-01
Oncological treatment is being increasingly complex, and therefore, decision making in multidisciplinary teams is becoming the key activity in the clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks, can support complex decision-making processes by providing a mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting relevant observations, and incorrect model. Finally, the four problems were solved by modifying the data and the model. The presented validation effort is related to the model complexity. For simpler models, the validation workflow is the same, although it may require fewer validation methods. The validation success is related to the model's well-founded knowledge base. The remaining laryngeal cancer model may disclose additional sources of imprecise predictions.
ERIC Educational Resources Information Center
VanLehn, Kurt; Chung, Greg; Grover, Sachin; Madni, Ayesha; Wetzel, Jon
2016-01-01
A common hypothesis is that students will more deeply understand dynamic systems and other complex phenomena if they construct computational models of them. Attempts to demonstrate the advantages of model construction have been stymied by the long time required for students to acquire skill in model construction. In order to make model…
Marquet, Pablo A.; Santoro, Calogero M.; Latorre, Claudio; Standen, Vivien G.; Abades, Sebastián R.; Rivadeneira, Marcelo M.; Arriaza, Bernardo; Hochberg, Michael E.
2012-01-01
The emergence of complex cultural practices in simple hunter-gatherer groups poses interesting questions on what drives social complexity and what causes the emergence and disappearance of cultural innovations. Here we analyze the conditions that underlie the emergence of artificial mummification in the Chinchorro culture in the coastal Atacama Desert in northern Chile and southern Peru. We provide empirical and theoretical evidence that artificial mummification appeared during a period of increased coastal freshwater availability and marine productivity, which caused an increase in human population size and accelerated the emergence of cultural innovations, as predicted by recent models of cultural and technological evolution. Under a scenario of increasing population size and extreme aridity (with little or no decomposition of corpses) a simple demographic model shows that dead individuals may have become a significant part of the landscape, creating the conditions for the manipulation of the dead that led to the emergence of complex mortuary practices. PMID:22891345
Making the little things count: modeling the development of understory trees in complex stands
Peter J. Gould; Connie. Harrington
2013-01-01
Forest growth models are useful for asking âWhat if?â questions when evaluating silvicultural treatments intended to increase the complexity of future stands. What if we thinned to level A or B? How would it aff ect the growth rates of understory trees? How many trees would survive? To answer these types of questions, a growth model needs to...
Some Observations on the Current Status of Performing Finite Element Analyses
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Knight, Norman F., Jr; Shivakumar, Kunigal N.
2015-01-01
Aerospace structures are complex high-performance structures. Advances in reliable and efficient computing and modeling tools are enabling analysts to consider complex configurations, build complex finite element models, and perform analysis rapidly. Many of the early career engineers of today are very proficient in the usage of modern computers, computing engines, complex software systems, and visualization tools. These young engineers are becoming increasingly efficient in building complex 3D models of complicated aerospace components. However, the current trends demonstrate blind acceptance of the results of the finite element analysis results. This paper is aimed at raising an awareness of this situation. Examples of the common encounters are presented. To overcome the current trends, some guidelines and suggestions for analysts, senior engineers, and educators are offered.
ERIC Educational Resources Information Center
Nelson, Tenneisha; Squires, Vicki
2017-01-01
Organizations are faced with solving increasingly complex problems. Addressing these issues requires effective leadership that can facilitate a collaborative problem solving approach where multiple perspectives are leveraged. In this conceptual paper, we critique the effectiveness of earlier leadership models in tackling complex organizational…
Model-Based Engineering for Supply Chain Risk Management
2015-09-30
Privacy, 2009 [19] Julien Delange Wheel Brake System Example using AADL; Feiler, Peter; Hansson, Jörgen; de Niz, Dionisio; & Wrage, Lutz. System ...University Software Engineering Institute Abstract—Expanded use of commercial components has increased the complexity of system assurance...verification. Model- based engineering (MBE) offers a means to design, develop, analyze, and maintain a complex system architecture. Architecture Analysis
Hill, Renee J.; Chopra, Pradeep; Richardi, Toni
2012-01-01
Abstract Explaining the etiology of Complex Regional Pain Syndrome (CRPS) from the psychogenic model is exceedingly unsophisticated, because neurocognitive deficits, neuroanatomical abnormalities, and distortions in cognitive mapping are features of CRPS pathology. More importantly, many people who have developed CRPS have no history of mental illness. The psychogenic model offers comfort to physicians and mental health practitioners (MHPs) who have difficulty understanding pain maintained by newly uncovered neuro inflammatory processes. With increased education about CRPS through a biopsychosocial perspective, both physicians and MHPs can better diagnose, treat, and manage CRPS symptomatology. PMID:24223338
Reduced complexity modeling of Arctic delta dynamics
NASA Astrophysics Data System (ADS)
Piliouras, A.; Lauzon, R.; Rowland, J. C.
2017-12-01
How water and sediment are routed through deltas has important implications for our understanding of nutrient and sediment fluxes to the coastal ocean. These fluxes may be especially important in Arctic environments, because the Arctic ocean receives a disproportionately large amount of river discharge and high latitude regions are expected to be particularly vulnerable to climate change. The Arctic has some of the world's largest but least studied deltas. This lack of data is due to remote and hazardous conditions, sparse human populations, and limited remote sensing resources. In the absence of data, complex models may be of limited scientific utility in understanding Arctic delta dynamics. To overcome this challenge, we adapt the reduced complexity delta-building model DeltaRCM for Arctic environments to explore the influence of sea ice and permafrost on delta morphology and dynamics. We represent permafrost by increasing the threshold for sediment erosion, as permafrost has been found to increase cohesion and reduce channel migration rates. The presence of permafrost in the model results in the creation of more elongate channels, fewer active channels, and a rougher shoreline. We consider several effects of sea ice, including introducing friction which increases flow resistance, constriction of flow by landfast ice, and changes in effective water surface elevation. Flow constriction and increased friction from ice results in a rougher shoreline, more frequent channel switching, decreased channel migration rates, and enhanced deposition offshore of channel mouths. The reduced complexity nature of the model is ideal for generating a basic understanding of which processes unique to Arctic environments may have important effects on delta evolution, and it allows us to explore a variety of rules for incorporating those processes into the model to inform future Arctic delta modelling efforts. Finally, we plan to use the modeling results to determine how the presence of permafrost and sea ice may influence delta morphology and the resulting large-scale patterns of water and sediment fluxes at the coast.
Cyclodextrin controlled release of poorly water-soluble drugs from hydrogels.
Woldum, Henriette Sie; Larsen, Kim Lambertsen; Madsen, Flemming
2008-01-01
The effect of 2-hydroxypropyl-beta-cyclodextrin and gamma-cyclodextrin on the release of ibuprofen, ketoprofen and prednisolone was studied. Stability constants calculated for inclusion complexes show size dependence for complexes with both cyclodextrins. Hydrogels were prepared by ultraviolet irradiation and release of each model drug was studied. For drugs formulated using cyclodextrins an increase in the achievable concentration and in the release from hydrogels was obtained due to increased solubility, although the solubility of all gamma-cyclodextrin complexes was limited. The load also was increased by adjusting pH for the acidic drugs and this exceeds the increase obtained with gamma-cyclodextrin addition.
Samara, Ziyad; Fiamma, Marie-Noëlle; Bautin, Nathalie; Ranohavimparany, Anja; Le Coz, Patrick; Golmard, Jean-Louis; Darré, Pierre; Zelter, Marc; Poon, Chi-Sang; Similowski, Thomas
2011-01-01
Human ventilation at rest exhibits mathematical chaos-like complexity that can be described as long-term unpredictability mediated (in whole or in part) by some low-dimensional nonlinear deterministic process. Although various physiological and pathological situations can affect respiratory complexity, the underlying mechanisms remain incompletely elucidated. If such chaos-like complexity is an intrinsic property of central respiratory generators, it should appear or increase when these structures mature or are stimulated. To test this hypothesis, we employed the isolated tadpole brainstem model [Rana (Pelophylax) esculenta] and recorded the neural respiratory output (buccal and lung rhythms) of pre- (n = 8) and postmetamorphic tadpoles (n = 8), at physiologic (7.8) and acidic pH (7.4). We analyzed the root mean square of the cranial nerve V or VII neurograms. Development and acidosis had no effect on buccal period. Lung frequency increased with development (P < 0.0001). It also increased with acidosis, but in postmetamorphic tadpoles only (P < 0.05). The noise-titration technique evidenced low-dimensional nonlinearities in all the postmetamorphic brainstems, at both pH. Chaos-like complexity, assessed through the noise limit, increased from pH 7.8 to pH 7.4 (P < 0.01). In contrast, linear models best fitted the ventilatory rhythm in all but one of the premetamorphic preparations at pH 7.8 (P < 0.005 vs. postmetamorphic) and in four at pH 7.4 (not significant vs. postmetamorphic). Therefore, in a lower vertebrate model, the brainstem respiratory central rhythm generator accounts for ventilatory chaos-like complexity, especially in the postmetamorphic stage and at low pH. According to the ventilatory generators homology theory, this may also be the case in mammals. PMID:21325645
Straus, Christian; Samara, Ziyad; Fiamma, Marie-Noëlle; Bautin, Nathalie; Ranohavimparany, Anja; Le Coz, Patrick; Golmard, Jean-Louis; Darré, Pierre; Zelter, Marc; Poon, Chi-Sang; Similowski, Thomas
2011-05-01
Human ventilation at rest exhibits mathematical chaos-like complexity that can be described as long-term unpredictability mediated (in whole or in part) by some low-dimensional nonlinear deterministic process. Although various physiological and pathological situations can affect respiratory complexity, the underlying mechanisms remain incompletely elucidated. If such chaos-like complexity is an intrinsic property of central respiratory generators, it should appear or increase when these structures mature or are stimulated. To test this hypothesis, we employed the isolated tadpole brainstem model [Rana (Pelophylax) esculenta] and recorded the neural respiratory output (buccal and lung rhythms) of pre- (n = 8) and postmetamorphic tadpoles (n = 8), at physiologic (7.8) and acidic pH (7.4). We analyzed the root mean square of the cranial nerve V or VII neurograms. Development and acidosis had no effect on buccal period. Lung frequency increased with development (P < 0.0001). It also increased with acidosis, but in postmetamorphic tadpoles only (P < 0.05). The noise-titration technique evidenced low-dimensional nonlinearities in all the postmetamorphic brainstems, at both pH. Chaos-like complexity, assessed through the noise limit, increased from pH 7.8 to pH 7.4 (P < 0.01). In contrast, linear models best fitted the ventilatory rhythm in all but one of the premetamorphic preparations at pH 7.8 (P < 0.005 vs. postmetamorphic) and in four at pH 7.4 (not significant vs. postmetamorphic). Therefore, in a lower vertebrate model, the brainstem respiratory central rhythm generator accounts for ventilatory chaos-like complexity, especially in the postmetamorphic stage and at low pH. According to the ventilatory generators homology theory, this may also be the case in mammals.
MolPrint3D: Enhanced 3D Printing of Ball-and-Stick Molecular Models
ERIC Educational Resources Information Center
Paukstelis, Paul J.
2018-01-01
The increased availability of noncommercial 3D printers has provided instructors and students improved access to printing technology. However, printing complex ball-and-stick molecular structures faces distinct challenges, including the need for support structures that increase with molecular complexity. MolPrint3D is a software add-on for the…
NASA Technical Reports Server (NTRS)
Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John
2011-01-01
A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.
Refiners Switch to RFG Complex Model
1998-01-01
On January 1, 1998, domestic and foreign refineries and importers must stop using the "simple" model and begin using the "complex" model to calculate emissions of volatile organic compounds (VOC), toxic air pollutants (TAP), and nitrogen oxides (NOx) from motor gasoline. The primary differences between application of the two models is that some refineries may have to meet stricter standards for the sulfur and olefin content of the reformulated gasoline (RFG) they produce and all refineries will now be held accountable for NOx emissions. Requirements for calculating emissions from conventional gasoline under the anti-dumping rule similarly change for exhaust TAP and NOx. However, the change to the complex model is not expected to result in an increase in the price premium for RFG or constrain supplies.
Attempt to generalize fractional-order electric elements to complex-order ones
NASA Astrophysics Data System (ADS)
Si, Gangquan; Diao, Lijie; Zhu, Jianwei; Lei, Yuhang; Zhang, Yanbin
2017-06-01
The complex derivative {D}α +/- {{j}β }, with α, β \\in R+ is a generalization of the concept of integer derivative, where α=1, β=0. Fractional-order electric elements and circuits are becoming more and more attractive. In this paper, the complex-order electric elements concept is proposed for the first time, and the complex-order elements are modeled and analyzed. Some interesting phenomena are found that the real part of the order affects the phase of output signal, and the imaginary part affects the amplitude for both the complex-order capacitor and complex-order memristor. More interesting is that the complex-order capacitor can do well at the time of fitting electrochemistry impedance spectra. The complex-order memristor is also analyzed. The area inside the hysteresis loops increases with the increasing of the imaginary part of the order and decreases with the increasing of the real part. Some complex case of complex-order memristors hysteresis loops are analyzed at last, whose loop has touching points beyond the origin of the coordinate system.
A Framework for Reliability and Safety Analysis of Complex Space Missions
NASA Technical Reports Server (NTRS)
Evans, John W.; Groen, Frank; Wang, Lui; Austin, Rebekah; Witulski, Art; Mahadevan, Nagabhushan; Cornford, Steven L.; Feather, Martin S.; Lindsey, Nancy
2017-01-01
Long duration and complex mission scenarios are characteristics of NASA's human exploration of Mars, and will provide unprecedented challenges. Systems reliability and safety will become increasingly demanding and management of uncertainty will be increasingly important. NASA's current pioneering strategy recognizes and relies upon assurance of crew and asset safety. In this regard, flexibility to develop and innovate in the emergence of new design environments and methodologies, encompassing modeling of complex systems, is essential to meet the challenges.
NASA Technical Reports Server (NTRS)
Leith, Andrew C.; Mckinnon, William B.
1991-01-01
The effective cohesion of the cratered region during crater collapse is determined via the widths of slump terraces of complex craters. Terrace widths are measured for complex craters on Mercury; these generally increase outward toward the rim for a given crater, and the width of the outermost major terrace is generally an increasing function of crater diameter. The terrace widths on Mercury and a gravity-driven slump model are used to estimate the strength of the cratered region immediately after impact (about 1-2 MPa). A comparison with the previous study of lunar complex craters by Pearce and Melosh (1986) indicates that the transient strength of cratered Mercurian crust is no greater than that of the moon. The strength estimates vary only slightly with the geometric model used to restore the outermost major terrace to its precollapse configuration and are consistent with independent strength estimates from the simple-to-complex crater depth/diameter transition.
STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python
Wils, Stefan; Schutter, Erik De
2008-01-01
We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code. PMID:19623245
Managing complexity in simulations of land surface and near-surface processes
Coon, Ethan T.; Moulton, J. David; Painter, Scott L.
2016-01-12
Increasing computing power and the growing role of simulation in Earth systems science have led to an increase in the number and complexity of processes in modern simulators. We present a multiphysics framework that specifies interfaces for coupled processes and automates weak and strong coupling strategies to manage this complexity. Process management is enabled by viewing the system of equations as a tree, where individual equations are associated with leaf nodes and coupling strategies with internal nodes. A dynamically generated dependency graph connects a variable to its dependencies, streamlining and automating model evaluation, easing model development, and ensuring models aremore » modular and flexible. Additionally, the dependency graph is used to ensure that data requirements are consistent between all processes in a given simulation. Here we discuss the design and implementation of these concepts within the Arcos framework, and demonstrate their use for verification testing and hypothesis evaluation in numerical experiments.« less
Simulation Study of CO2-EOR in Tight Oil Reservoirs with Complex Fracture Geometries
Zuloaga-Molero, Pavel; Yu, Wei; Xu, Yifei; Sepehrnoori, Kamy; Li, Baozhen
2016-01-01
The recent development of tight oil reservoirs has led to an increase in oil production in the past several years due to the progress in horizontal drilling and hydraulic fracturing. However, the expected oil recovery factor from these reservoirs is still very low. CO2-based enhanced oil recovery is a suitable solution to improve the recovery. One challenge of the estimation of the recovery is to properly model complex hydraulic fracture geometries which are often assumed to be planar due to the limitation of local grid refinement approach. More flexible methods like the use of unstructured grids can significantly increase the computational demand. In this study, we introduce an efficient methodology of the embedded discrete fracture model to explicitly model complex fracture geometries. We build a compositional reservoir model to investigate the effects of complex fracture geometries on performance of CO2 Huff-n-Puff and CO2 continuous injection. The results confirm that the appropriate modelling of the fracture geometry plays a critical role in the estimation of the incremental oil recovery. This study also provides new insights into the understanding of the impacts of CO2 molecular diffusion, reservoir permeability, and natural fractures on the performance of CO2-EOR processes in tight oil reservoirs. PMID:27628131
A Spatially Continuous Model of Carbohydrate Digestion and Transport Processes in the Colon
Moorthy, Arun S.; Brooks, Stephen P. J.; Kalmokoff, Martin; Eberl, Hermann J.
2015-01-01
A spatially continuous mathematical model of transport processes, anaerobic digestion and microbial complexity as would be expected in the human colon is presented. The model is a system of first-order partial differential equations with context determined number of dependent variables, and stiff, non-linear source terms. Numerical simulation of the model is used to elucidate information about the colon-microbiota complex. It is found that the composition of materials on outflow of the model does not well-describe the composition of material in other model locations, and inferences using outflow data varies according to model reactor representation. Additionally, increased microbial complexity allows the total microbial community to withstand major system perturbations in diet and community structure. However, distribution of strains and functional groups within the microbial community can be modified depending on perturbation length and microbial kinetic parameters. Preliminary model extensions and potential investigative opportunities using the computational model are discussed. PMID:26680208
Advanced Techniques for Ultrasonic Imaging in the Presence of Material and Geometrical Complexity
NASA Astrophysics Data System (ADS)
Brath, Alexander Joseph
The complexity of modern engineering systems is increasing in several ways: advances in materials science are leading to the design of materials which are optimized for material strength, conductivity, temperature resistance etc., leading to complex material microstructure; the combination of additive manufacturing and shape optimization algorithms are leading to components with incredibly intricate geometrical complexity; and engineering systems are being designed to operate at larger scales in ever harsher environments. As a result, at the same time that there is an increasing need for reliable and accurate defect detection and monitoring capabilities, many of the currently available non-destructive evaluation techniques are rendered ineffective by this increasing material and geometrical complexity. This thesis addresses the challenges posed by inspection and monitoring problems in complex engineering systems with a three-part approach. In order to address material complexities, a model of wavefront propagation in anisotropic materials is developed, along with efficient numerical techniques to solve for the wavefront propagation in inhomogeneous, anisotropic material. Since material and geometrical complexities significantly affect the ability of ultrasonic energy to penetrate into the specimen, measurement configurations are tailored to specific applications which utilize arrays of either piezoelectric (PZT) or electromagnetic acoustic transducers (EMAT). These measurement configurations include novel array architectures as well as the exploration of ice as an acoustic coupling medium. Imaging algorithms which were previously developed for isotropic materials with simple geometry are adapted to utilize the more powerful wavefront propagation model and novel measurement configurations.
USE OF MODELS FOR GAMMA SHIELDING STUDIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clifford, C.E.
1962-02-01
The use of models for shielding studies of buildings exposed to gamma radiation was evaluated by comparing the dose distributions produced in a blockhouse with movable inside walls exposed to 0.66 Mev gamma radiation with corresponding distributions in an iron 1 to 10 scale model. The effects of air and ground scaling on the readings in the model were also investigated. Iron appeared to be a suitable model material for simple closed buildings but for more complex structures it appeared that the use of iron models would progressively overestimite the gamms shielding protection as the complexity increased. (auth)
NASA Astrophysics Data System (ADS)
Fedorova, I. V.; Khatuntseva, E. A.; Krest'yaninov, M. A.; Safonova, L. P.
2016-02-01
Proton transfer along the hydrogen bond in complexes of DMF with H3PO4, H3PO3, CH3H2PO3, and their dimers has been investigated by the B3LYP/6-31++G** method in combination with the C-PCM model. When the Oacid···ODMF distance ( R) in the scanning procedure is not fixed, the energy profile in all cases has a single well. When this distance is fixed, there can be a proton transfer in all of the complexes in the gas phase at R > 2.6 Å; if solvation is taken into account, proton transfer can take place at R > 2.4 Å ( R > 2.5 Å for DMF complexes with CH3H2PO3 and its dimer). The height of the energy barrier to proton transfer increases with increasing R. Proton transfer is energetically most favorable in the DMF-phosphoric acid complexes. The structural and energetic characteristics of the hydrogen-bonded complexes calculated on the basis of the solvation model are compared with the same parameters for the complexes in the gas phase.
A systems-based approach for integrated design of materials, products and design process chains
NASA Astrophysics Data System (ADS)
Panchal, Jitesh H.; Choi, Hae-Jin; Allen, Janet K.; McDowell, David L.; Mistree, Farrokh
2007-12-01
The concurrent design of materials and products provides designers with flexibility to achieve design objectives that were not previously accessible. However, the improved flexibility comes at a cost of increased complexity of the design process chains and the materials simulation models used for executing the design chains. Efforts to reduce the complexity generally result in increased uncertainty. We contend that a systems based approach is essential for managing both the complexity and the uncertainty in design process chains and simulation models in concurrent material and product design. Our approach is based on simplifying the design process chains systematically such that the resulting uncertainty does not significantly affect the overall system performance. Similarly, instead of striving for accurate models for multiscale systems (that are inherently complex), we rely on making design decisions that are robust to uncertainties in the models. Accordingly, we pursue hierarchical modeling in the context of design of multiscale systems. In this paper our focus is on design process chains. We present a systems based approach, premised on the assumption that complex systems can be designed efficiently by managing the complexity of design process chains. The approach relies on (a) the use of reusable interaction patterns to model design process chains, and (b) consideration of design process decisions using value-of-information based metrics. The approach is illustrated using a Multifunctional Energetic Structural Material (MESM) design example. Energetic materials store considerable energy which can be released through shock-induced detonation; conventionally, they are not engineered for strength properties. The design objectives for the MESM in this paper include both sufficient strength and energy release characteristics. The design is carried out by using models at different length and time scales that simulate different aspects of the system. Finally, by applying the method to the MESM design problem, we show that the integrated design of materials and products can be carried out more efficiently by explicitly accounting for design process decisions with the hierarchy of models.
NASA Astrophysics Data System (ADS)
Dirnbeck, Matthew R.
Biological systems pose a challenge both for learners and teachers because they are complex systems mediated by feedback loops; networks of cause-effect relationships; and non-linear, hierarchical, and emergent properties. Teachers and scientists routinely use models to communicate ideas about complex systems. Model-based pedagogies engage students in model construction as a means of practicing higher-order reasoning skills. One such modeling paradigm describes systems in terms of their structures, behaviors, and functions (SBF). The SBF framework is a simple modeling language that has been used to teach about complex biological systems. Here, we used student-generated SBF models to assess students' causal reasoning in the context of a novel biological problem on an exam. We compared students' performance on the modeling problem, their performance on a set of knowledge/comprehension questions, and their performance on a set of scientific reasoning questions. We found that students who performed well on knowledge and understanding questions also constructed more networked, higher quality models. Previous studies have shown that learners' mental maps increase in complexity with increased expertise. We wanted to investigate if biology students with varying levels of training in biology showed a similar pattern when constructing system models. In a pilot study, we administered the same modeling problem to two additional groups of students: 1) an animal physiology course for students pursuing a major in biology (n=37) and 2) an exercise physiology course for non-majors (n=27). We found that there was no significant difference in model organization across the three student populations, but there was a significant difference in the ability to represent function between the three populations. Between the three groups the non-majors had the lowest function scores, the introductory majors had the middle function scores, and the upper division majors had the highest function scores.
Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.
Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E
2016-01-01
Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.
Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems
Timmis, Jon; Qwarnstrom, Eva E.
2016-01-01
Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414
Impact of gastrectomy procedural complexity on surgical outcomes and hospital comparisons.
Mohanty, Sanjay; Paruch, Jennifer; Bilimoria, Karl Y; Cohen, Mark; Strong, Vivian E; Weber, Sharon M
2015-08-01
Most risk adjustment approaches adjust for patient comorbidities and the primary procedure. However, procedures done at the same time as the index case may increase operative risk and merit inclusion in adjustment models for fair hospital comparisons. Our objectives were to evaluate the impact of surgical complexity on postoperative outcomes and hospital comparisons in gastric cancer surgery. Patients who underwent gastric resection for cancer were identified from a large clinical dataset. Procedure complexity was characterized using secondary procedure CPT codes and work relative value units (RVUs). Regression models were developed to evaluate the association between complexity variables and outcomes. The impact of complexity adjustment on model performance and hospital comparisons was examined. Among 3,467 patients who underwent gastrectomy for adenocarcinoma, 2,171 operations were distal and 1,296 total. A secondary procedure was reported for 33% of distal gastrectomies and 59% of total gastrectomies. Six of 10 secondary procedures were associated with adverse outcomes. For example, patients who underwent a synchronous bowel resection had a higher risk of mortality (odds ratio [OR], 2.14; 95% CI, 1.07-4.29) and reoperation (OR, 2.09; 95% CI, 1.26-3.47). Model performance was slightly better for nearly all outcomes with complexity adjustment (mortality c-statistics: standard model, 0.853; secondary procedure model, 0.858; RVU model, 0.855). Hospital ranking did not change substantially after complexity adjustment. Surgical complexity variables are associated with adverse outcomes in gastrectomy, but complexity adjustment does not affect hospital rankings appreciably. Copyright © 2015 Elsevier Inc. All rights reserved.
Impact of Increased Corn Production on Ground Water Quality and Human Health
In this study, we use a complex coupled modeling system to assess the impacts of increased corn production on groundwater. In particular, we show how the models provide new information on the drivers of contamination in groundwater, and then relate pollutant concentration change...
Moving alcohol prevention research forward-Part I: introducing a complex systems paradigm.
Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller
2018-02-01
The drinking environment is a complex system consisting of a number of heterogeneous, evolving and interacting components, which exhibit circular causality and emergent properties. These characteristics reduce the efficacy of commonly used research approaches, which typically do not account for the underlying dynamic complexity of alcohol consumption and the interdependent nature of diverse factors influencing misuse over time. We use alcohol misuse among college students in the United States as an example for framing our argument for a complex systems paradigm. A complex systems paradigm, grounded in socio-ecological and complex systems theories and computational modeling and simulation, is introduced. Theoretical, conceptual, methodological and analytical underpinnings of this paradigm are described in the context of college drinking prevention research. The proposed complex systems paradigm can transcend limitations of traditional approaches, thereby fostering new directions in alcohol prevention research. By conceptualizing student alcohol misuse as a complex adaptive system, computational modeling and simulation methodologies and analytical techniques can be used. Moreover, use of participatory model-building approaches to generate simulation models can further increase stakeholder buy-in, understanding and policymaking. A complex systems paradigm for research into alcohol misuse can provide a holistic understanding of the underlying drinking environment and its long-term trajectory, which can elucidate high-leverage preventive interventions. © 2017 Society for the Study of Addiction.
Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
Naveros, Francisco; Garrido, Jesus A.; Carrillo, Richard R.; Ros, Eduardo; Luque, Niceto R.
2017-01-01
Modeling and simulating the neural structures which make up our central neural system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranged from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity together with a better handling of the synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running in CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications which constitute the main contribution of this study systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. PMID:28223930
Stochastic simulation of multiscale complex systems with PISKaS: A rule-based approach.
Perez-Acle, Tomas; Fuenzalida, Ignacio; Martin, Alberto J M; Santibañez, Rodrigo; Avaria, Rodrigo; Bernardin, Alejandro; Bustos, Alvaro M; Garrido, Daniel; Dushoff, Jonathan; Liu, James H
2018-03-29
Computational simulation is a widely employed methodology to study the dynamic behavior of complex systems. Although common approaches are based either on ordinary differential equations or stochastic differential equations, these techniques make several assumptions which, when it comes to biological processes, could often lead to unrealistic models. Among others, model approaches based on differential equations entangle kinetics and causality, failing when complexity increases, separating knowledge from models, and assuming that the average behavior of the population encompasses any individual deviation. To overcome these limitations, simulations based on the Stochastic Simulation Algorithm (SSA) appear as a suitable approach to model complex biological systems. In this work, we review three different models executed in PISKaS: a rule-based framework to produce multiscale stochastic simulations of complex systems. These models span multiple time and spatial scales ranging from gene regulation up to Game Theory. In the first example, we describe a model of the core regulatory network of gene expression in Escherichia coli highlighting the continuous model improvement capacities of PISKaS. The second example describes a hypothetical outbreak of the Ebola virus occurring in a compartmentalized environment resembling cities and highways. Finally, in the last example, we illustrate a stochastic model for the prisoner's dilemma; a common approach from social sciences describing complex interactions involving trust within human populations. As whole, these models demonstrate the capabilities of PISKaS providing fertile scenarios where to explore the dynamics of complex systems. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridley, Mora K.; Hiemstra, T; Machesky, Michael L.
2012-01-01
The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3 11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m), and temperatures (25 and 50 C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and Basic Sternmore » layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes were constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamic (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (110) rutile surface (Zhang et al., 2004b). TheMDsimulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impacts Y3+ surface speciation. The approach taken of incorporating molecular-scale information into surface complexation models (SCMs) should aid in elucidating a fundamental understating of ion-adsorption reactions.« less
NASA Astrophysics Data System (ADS)
Ridley, Moira K.; Hiemstra, Tjisse; Machesky, Michael L.; Wesolowski, David J.; van Riemsdijk, Willem H.
2012-10-01
The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m), and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and Basic Stern layer description of the electric double layer (EDL). The coordination geometry of possible surface complexes were constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamic (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (1 1 0) rutile surface (Zhang et al., 2004b). The MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impacts Y3+ surface speciation. The approach taken of incorporating molecular-scale information into surface complexation models (SCMs) should aid in elucidating a fundamental understating of ion-adsorption reactions.
NASA Astrophysics Data System (ADS)
Toropov, Andrey A.; Toropova, Alla P.
2018-06-01
Predictive model of logP for Pt(II) and Pt(IV) complexes built up with the Monte Carlo method using the CORAL software has been validated with six different splits into the training and validation sets. The improving of the predictive potential of models for six different splits has been obtained using so-called index of ideality of correlation. The suggested models give possibility to extract molecular features, which cause the increase or vice versa decrease of the logP.
U.S. Geological Survey Groundwater Modeling Software: Making Sense of a Complex Natural Resource
Provost, Alden M.; Reilly, Thomas E.; Harbaugh, Arlen W.; Pollock, David W.
2009-01-01
Computer models of groundwater systems simulate the flow of groundwater, including water levels, and the transport of chemical constituents and thermal energy. Groundwater models afford hydrologists a framework on which to organize their knowledge and understanding of groundwater systems, and they provide insights water-resources managers need to plan effectively for future water demands. Building on decades of experience, the U.S. Geological Survey (USGS) continues to lead in the development and application of computer software that allows groundwater models to address scientific and management questions of increasing complexity.
Computer modeling and simulation of human movement. Applications in sport and rehabilitation.
Neptune, R R
2000-05-01
Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult but computer modeling and simulation allows for the identification of these complex interactions and causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model Model complexity resulting from the need to integrate environmental system components cannot be understated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represent for the decision process, it should not be avoided or the value and science behind the models will be undermined. These two issues; i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of the model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but the effect that these new components have over existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping points) in the face of environmental and anthropogenic change (Perz, Muñoz-Carpena, Kiker and Holt, 2013), and through MonteCarlo mapping potential management activities over the most important factors or processes to influence the system towards behavioral (desirable) outcomes (Chu-Agor, Muñoz-Carpena et al., 2012).
Complexity reduction of biochemical rate expressions.
Schmidt, Henning; Madsen, Mads F; Danø, Sune; Cedersund, Gunnar
2008-03-15
The current trend in dynamical modelling of biochemical systems is to construct more and more mechanistically detailed and thus complex models. The complexity is reflected in the number of dynamic state variables and parameters, as well as in the complexity of the kinetic rate expressions. However, a greater level of complexity, or level of detail, does not necessarily imply better models, or a better understanding of the underlying processes. Data often does not contain enough information to discriminate between different model hypotheses, and such overparameterization makes it hard to establish the validity of the various parts of the model. Consequently, there is an increasing demand for model reduction methods. We present a new reduction method that reduces complex rational rate expressions, such as those often used to describe enzymatic reactions. The method is a novel term-based identifiability analysis, which is easy to use and allows for user-specified reductions of individual rate expressions in complete models. The method is one of the first methods to meet the classical engineering objective of improved parameter identifiability without losing the systems biology demand of preserved biochemical interpretation. The method has been implemented in the Systems Biology Toolbox 2 for MATLAB, which is freely available from http://www.sbtoolbox2.org. The Supplementary Material contains scripts that show how to use it by applying the method to the example models, discussed in this article.
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
Surface complexation modeling of americium sorption onto volcanic tuff.
Ding, M; Kelkar, S; Meijer, A
2014-10-01
Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. It was assumed in applying the model to volcanic rocks from Yucca Mountain, that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/Kg) values were calculated as function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH in the pH range from 7 to 9. The surface complexation constants, derived in this study, allow prediction of sorption of americium in a natural complex system, taking into account the inherent uncertainty associated with geochemical conditions that occur along transport pathways. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
McCormack, Kimberly A.; Hesse, Marc A.
2018-04-01
We model the subsurface hydrologic response to the 7.6 Mw subduction zone earthquake that occurred on the plate interface beneath the Nicoya peninsula in Costa Rica on September 5, 2012. The regional-scale poroelastic model of the overlying plate integrates seismologic, geodetic and hydrologic data sets to predict the post-seismic poroelastic response. A representative two-dimensional model shows that thrust earthquakes with a slip width less than a third of their depth produce complex multi-lobed pressure perturbations in the shallow subsurface. This leads to multiple poroelastic relaxation timescales that may overlap with the longer viscoelastic timescales. In the three-dimensional model, the complex slip distribution of 2012 Nicoya event and its small width to depth ratio lead to a pore pressure distribution comprising multiple trench parallel ridges of high and low pressure. This leads to complex groundwater flow patterns, non-monotonic variations in predicted well water levels, and poroelastic relaxation on multiple time scales. The model also predicts significant tectonically driven submarine groundwater discharge off-shore. In the weeks following the earthquake, the predicted net submarine groundwater discharge in the study area increases, creating a 100 fold increase in net discharge relative to topography-driven flow over the first 30 days. Our model suggests the hydrological response on land is more complex than typically acknowledged in tectonic studies. This may complicate the interpretation of transient post-seismic surface deformations. Combined tectonic-hydrological observation networks have the potential to reduce such ambiguities.
The University in the Knowledge Economy: The Triple Helix Model and Its Implications
ERIC Educational Resources Information Center
Zheng, Peijun; Harris, Michael
2007-01-01
In the context of the global knowledge economy, the three major players--university, industry, and government--are becoming increasingly interdependent. As more intensified interactions and relationships of increasing complexity among the institutions evolve, the Triple Helix model attempts to describe not only interactions among university,…
A New Funding Model for Extension
ERIC Educational Resources Information Center
Brown, Paul W.; Otto, Daniel M.; Ouart, Michael D.
2006-01-01
The traditional funding model of the Cooperative Extension System has been stretched to its limits by increasing demand for information and programs without concurrent increases in funding by the public sector. As the social, economic, and political environments have evolved and become more complex, extension is often asked to apply the expertise…
NASA Astrophysics Data System (ADS)
Hoepfer, Matthias
Over the last two decades, computer modeling and simulation have evolved as the tools of choice for the design and engineering of dynamic systems. With increased system complexities, modeling and simulation become essential enablers for the design of new systems. Some of the advantages that modeling and simulation-based system design allows for are the replacement of physical tests to ensure product performance, reliability and quality, the shortening of design cycles due to the reduced need for physical prototyping, the design for mission scenarios, the invoking of currently nonexisting technologies, and the reduction of technological and financial risks. Traditionally, dynamic systems are modeled in a monolithic way. Such monolithic models include all the data, relations and equations necessary to represent the underlying system. With increased complexity of these models, the monolithic model approach reaches certain limits regarding for example, model handling and maintenance. Furthermore, while the available computer power has been steadily increasing according to Moore's Law (a doubling in computational power every 10 years), the ever-increasing complexities of new models have negated the increased resources available. Lastly, modern systems and design processes are interdisciplinary, enforcing the necessity to make models more flexible to be able to incorporate different modeling and design approaches. The solution to bypassing the shortcomings of monolithic models is cosimulation. In a very general sense, co-simulation addresses the issue of linking together different dynamic sub-models to a model which represents the overall, integrated dynamic system. It is therefore an important enabler for the design of interdisciplinary, interconnected, highly complex dynamic systems. While a basic co-simulation setup can be very easy, complications can arise when sub-models display behaviors such as algebraic loops, singularities, or constraints. This work frames the co-simulation approach to modeling and simulation. It lays out the general approach to dynamic system co-simulation, and gives a comprehensive overview of what co-simulation is and what it is not. It creates a taxonomy of the requirements and limits of co-simulation, and the issues arising with co-simulating sub-models. Possible solutions towards resolving the stated problems are investigated to a certain depth. A particular focus is given to the issue of time stepping. It will be shown that for dynamic models, the selection of the simulation time step is a crucial issue with respect to computational expense, simulation accuracy, and error control. The reasons for this are discussed in depth, and a time stepping algorithm for co-simulation with unknown dynamic sub-models is proposed. Motivations and suggestions for the further treatment of selected issues are presented.
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Preheim, Larry E.
1990-01-01
Data systems requirements in the Earth Observing System (EOS) Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data Information System, conceived to be a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory and collaboration services.
Liotta, Flavia; d'Antonio, Giuseppe; Esposito, Giovanni; Fabbricino, Massimiliano; Frunzo, Luigi; van Hullebusch, Eric D; Lens, Piet N L; Pirozzi, Francesco
2014-01-01
The role of the moisture content and particle size (PS) on the disintegration of complex organic matter during the wet anaerobic digestion (AD) process was investigated. A range of total solids (TS) from 5% to 11.3% and PS from 0.25 to 15 mm was evaluated using carrot waste as model complex organic matter. The experimental results showed that the methane production rate decreased with higher TS and PS. A modified version of the AD model no.1 for complex organic substrates was used to model the experimental data. The simulations showed a decrease of the disintegration rate constants with increasing TS and PS. The results of the biomethanation tests were used to calibrate and validate the applied model. In particular, the values of the disintegration constant for various TS and PS were determined. The simulations showed good agreement between the numerical and observed data.
Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops
NASA Astrophysics Data System (ADS)
Rahman, Aminur; Jordan, Ian; Blackmore, Denis
2018-01-01
It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.
Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops.
Rahman, Aminur; Jordan, Ian; Blackmore, Denis
2018-01-01
It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.
Petri net modelling of biological networks.
Chaouiya, Claudine
2007-07-01
Mathematical modelling is increasingly used to get insights into the functioning of complex biological networks. In this context, Petri nets (PNs) have recently emerged as a promising tool among the various methods employed for the modelling and analysis of molecular networks. PNs come with a series of extensions, which allow different abstraction levels, from purely qualitative to more complex quantitative models. Noteworthily, each of these models preserves the underlying graph, which depicts the interactions between the biological components. This article intends to present the basics of the approach and to foster the potential role PNs could play in the development of the computational systems biology.
Multiscale agent-based cancer modeling.
Zhang, Le; Wang, Zhihui; Sagotsky, Jonathan A; Deisboeck, Thomas S
2009-04-01
Agent-based modeling (ABM) is an in silico technique that is being used in a variety of research areas such as in social sciences, economics and increasingly in biomedicine as an interdisciplinary tool to study the dynamics of complex systems. Here, we describe its applicability to integrative tumor biology research by introducing a multi-scale tumor modeling platform that understands brain cancer as a complex dynamic biosystem. We summarize significant findings of this work, and discuss both challenges and future directions for ABM in the field of cancer research.
Tests of high-resolution simulations over a region of complex terrain in Southeast coast of Brazil
NASA Astrophysics Data System (ADS)
Chou, Sin Chan; Luís Gomes, Jorge; Ristic, Ivan; Mesinger, Fedor; Sueiro, Gustavo; Andrade, Diego; Lima-e-Silva, Pedro Paulo
2013-04-01
The Eta Model is used operationally by INPE at the Centre for Weather Forecasts and Climate Studies (CPTEC) to produce weather forecasts over South America since 1997. The model has gone through upgrades along these years. In order to prepare the model for operational higher resolution forecasts, the model is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The model domain includes the two Brazilians cities, Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain where it can rise from sea level up to about 1000 m. Accurate near-surface wind direction and magnitude are needed for the power plant emergency plan. Besides, the region suffers from frequent events of floods and landslides, therefore accurate local forecasts are required for disaster warnings. The objective of this work is to carry out a series of numerical experiments to test and evaluate high resolution simulations in this complex area. Verification of model runs uses observations taken from the nuclear power plant and higher resolution reanalyses data. The runs were tested in a period when flow was predominately forced by local conditions and in a period forced by frontal passage. The Eta Model was configured initially with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nesting, it is driven by Eta-15km, which in its turn is driven by Era-Interim reanalyses. The series of experiments consists of replacing surface layer stability function, adjusting cloud microphysics scheme parameters, further increasing vertical and horizontal resolutions. By replacing the stability function for the stable conditions substantially increased the katabatic winds and verified better against the tower wind data. Precipitation produced by the model was excessive in the region. Increasing vertical resolution to 60 layers caused a further increase in precipitation production. This excessive precipitation was reduced by adjusting some parameters in the cloud microphysics scheme. Precipitation overestimate still occurs and further tests are still necessary. The increase of horizontal resolution to 1 km required adjusting model diffusion parameters and refining divergence calculations. Available observations in the region for a thorough evaluation is a major constraint.
Dobson, Ian; Carreras, Benjamin A; Lynch, Vickie E; Newman, David E
2007-06-01
We give an overview of a complex systems approach to large blackouts of electric power transmission systems caused by cascading failure. Instead of looking at the details of particular blackouts, we study the statistics and dynamics of series of blackouts with approximate global models. Blackout data from several countries suggest that the frequency of large blackouts is governed by a power law. The power law makes the risk of large blackouts consequential and is consistent with the power system being a complex system designed and operated near a critical point. Power system overall loading or stress relative to operating limits is a key factor affecting the risk of cascading failure. Power system blackout models and abstract models of cascading failure show critical points with power law behavior as load is increased. To explain why the power system is operated near these critical points and inspired by concepts from self-organized criticality, we suggest that power system operating margins evolve slowly to near a critical point and confirm this idea using a power system model. The slow evolution of the power system is driven by a steady increase in electric loading, economic pressures to maximize the use of the grid, and the engineering responses to blackouts that upgrade the system. Mitigation of blackout risk should account for dynamical effects in complex self-organized critical systems. For example, some methods of suppressing small blackouts could ultimately increase the risk of large blackouts.
Adsorption of selenium by amorphous iron oxyhydroxide and manganese dioxide
Balistrieri, L.S.; Chao, T.T.
1990-01-01
This work compares and models the adsorption of selenium and other anions on a neutral to alkaline surface (amorphous iron oxyhydroxide) and an acidic surface (manganese dioxide). Selenium adsorption on these oxides is examined as a function of pH, particle concentration, oxidation state, and competing anion concentration in order to assess how these factors might influence the mobility of selenium in the environment. The data indicate that 1. 1) amorphous iron oxyhydroxide has a greater affinity for selenium than manganese dioxide, 2. 2) selenite [Se(IV)] adsorption increases with decreasing pH and increasing particle concentration and is stronger than selenate [Se(VI)] adsorption on both oxides, and 3. 3) selenate does not adsorb on manganese dioxide. The relative affinity of selenate and selenite for the oxides and the lack of adsorption of selenate on a strongly acidic surface suggests that selenate forms outer-sphere complexes while selenite forms inner-sphere complexes with the surfaces. The data also indicate that the competition sequence of other anions with respect to selenite adsorption at pH 7.0 is phosphate > silicate > molybdate > fluoride > sulfate on amorphous iron oxyhydroxide and molybdate ??? phosphate > silicate > fluoride > sulfate on manganese dioxide. The adsorption of phosphate, molybdate, and silicate on these oxides as a function of pH indicates that the competition sequences reflect the relative affinities of these anions for the surfaces. The Triple Layer surface complexation model is used to provide a quantitative description of these observations and to assess the importance of surface site heterogeneity on anion adsorption. The modeling results suggest that selenite forms binuclear, innersphere complexes with amorphous iron oxyhydroxide and monodentate, inner-sphere complexes with manganese dioxide and that selenate forms outer-sphere, monodentate complexes with amorphous iron oxyhydroxide. The heterogeneity of the oxide surface sites is reflected in decreasing equilibrium constants for selenite with increasing adsorption density and both experimental observations and modeling results suggest that manganese dioxide has fewer sites of higher energy for selenite adsorption than amorphous iron oxyhydroxide. Modeling and interpreting the adsorption of phosphate, molybdate, and silicate on the oxides are made difficult by the lack of constraint in choosing surface species and the fact that equally good fits can be obtained with different surface species. Finally, predictions of anion competition using the model results from single adsorbate systems are not very successful because the model does not account for surface site heterogeneity. Selenite adsorption data from a multi-adsorbate system could be fit if the equilibrium constant for selenite is decreased with increasing anion adsorption density. ?? 1990.
NASA Astrophysics Data System (ADS)
Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K.
2018-01-01
Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is < ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.
Xu, Jingjiang; Song, Shaozhen; Li, Yuandong; Wang, Ruikang K
2017-12-19
Optical coherence tomography angiography (OCTA) is increasingly becoming a popular inspection tool for biomedical imaging applications. By exploring the amplitude, phase and complex information available in OCT signals, numerous algorithms have been proposed that contrast functional vessel networks within microcirculatory tissue beds. However, it is not clear which algorithm delivers optimal imaging performance. Here, we investigate systematically how amplitude and phase information have an impact on the OCTA imaging performance, to establish the relationship of amplitude and phase stability with OCT signal-to-noise ratio (SNR), time interval and particle dynamics. With either repeated A-scan or repeated B-scan imaging protocols, the amplitude noise increases with the increase of OCT SNR; however, the phase noise does the opposite, i.e. it increases with the decrease of OCT SNR. Coupled with experimental measurements, we utilize a simple Monte Carlo (MC) model to simulate the performance of amplitude-, phase- and complex-based algorithms for OCTA imaging, the results of which suggest that complex-based algorithms deliver the best performance when the phase noise is < ~40 mrad. We also conduct a series of in vivo vascular imaging in animal models and human retina to verify the findings from the MC model through assessing the OCTA performance metrics of vessel connectivity, image SNR and contrast-to-noise ratio. We show that for all the metrics assessed, the complex-based algorithm delivers better performance than either the amplitude- or phase-based algorithms for both the repeated A-scan and the B-scan imaging protocols, which agrees well with the conclusion drawn from the MC simulations.
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high dimensional data, which often contain correlated and noisy predictors. As sample sizes, to train and validate empirical models, seem not to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also through fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, by creating latent variables or by shrinking parameter coefficients. This paper is proposing a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra, to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used, and too complex to make inferences about the underlying process.
Westö, Johan; May, Patrick J C
2018-05-02
Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multi-filter linear-nonlinear (LN) models and context models. Models are, however, never correct and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: First, we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions. Second, we evaluate context models and multi-filter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multi-filter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multi-filter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior.
NASA Astrophysics Data System (ADS)
Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.
2017-07-01
Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, less than six SimplyP parameters must be determined through calibration, the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process-representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. Results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models could be evaluated.
Huber, Evelyn; Kleinknecht-Dolf, Michael; Müller, Marianne; Kugler, Christiane; Spirig, Rebecca
2017-06-01
To define the concept of patient-related complexity of nursing care in acute care hospitals and to operationalize it in a questionnaire. The concept of patient-related complexity of nursing care in acute care hospitals has not been conclusively defined in the literature. The operationalization in a corresponding questionnaire is necessary, given the increased significance of the topic, due to shortened lengths of stay and increased patient morbidity. Hybrid model of concept development and embedded mixed-methods design. The theoretical phase of the hybrid model involved a literature review and the development of a working definition. In the fieldwork phase of 2015 and 2016, an embedded mixed-methods design was applied with complexity assessments of all patients at five Swiss hospitals using our newly operationalized questionnaire 'Complexity of Nursing Care' over 1 month. These data will be analysed with structural equation modelling. Twelve qualitative case studies will be embedded. They will be analysed using a structured process of constructing case studies and content analysis. In the final analytic phase, the quantitative and qualitative data will be merged and added to the results of the theoretical phase for a common interpretation. Cantonal Ethics Committee Zurich judged the research programme as unproblematic in December 2014 and May 2015. Following the phases of the hybrid model and using an embedded mixed-methods design can reach an in-depth understanding of patient-related complexity of nursing care in acute care hospitals, a final version of the questionnaire and an acknowledged definition of the concept. © 2016 John Wiley & Sons Ltd.
Kulikova, Olga I; Berezhnoy, Daniil S; Stvolinsky, Sergey L; Lopachev, Alexander V; Orlova, Valentina S; Fedorova, Tatiana N
2018-06-01
In a model of early-stage Parkinson's disease induced by a single intranasal administration of 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) to Wistar rats, a neuroprotective effect of a new derivative of carnosine and α-lipoic acid (C/LA nanomicellar complex) was demonstrated. Acute intraperitoneal administration of carnosine, α-lipoic acid and C/LA complex following MPTP administration normalized the total antioxidant activity in the brain tissue. Of all the compounds tested only C/LA complex normalized the metabolism of dopamine (DA) and serotonin (5-HT), while its components did not show similar effects when used separately. C/LA complex effectively restored the level of DA metabolites: the level of DOPAC was increased by 24.7 ± 5.6% compared to the animals that had received MPTP only, and the level of HVA was restored to the values observed in the intact animals. Integral metabolic indices of DA (DOPAC/DA and HVA/DA ratios) and 5-HT turnover (5-HIAA/5-HT ratio) in the striatum tended to increase in case of C/LA complex administration. Copyright © 2018 Elsevier Inc. All rights reserved.
Allometric scaling enhances stability in complex food webs.
Brose, Ulrich; Williams, Richard J; Martinez, Neo D
2006-11-01
Classic local stability theory predicts that complex ecological networks are unstable and are unlikely to persist despite empiricists' abundant documentation of such complexity in nature. This contradiction has puzzled biologists for decades. While some have explored how stability may be achieved in small modules of a few interacting species, rigorous demonstrations of how large complex and ecologically realistic networks dynamically persist remain scarce and inadequately understood. Here, we help fill this void by combining structural models of complex food webs with nonlinear bioenergetic models of population dynamics parameterized by biological rates that are allometrically scaled to populations' average body masses. Increasing predator-prey body mass ratios increase population persistence up to a saturation level that is reached by invertebrate and ectotherm vertebrate predators when being 10 or 100 times larger than their prey respectively. These values are corroborated by empirical predator-prey body mass ratios from a global data base. Moreover, negative effects of diversity (i.e. species richness) on stability (i.e. population persistence) become neutral or positive relationships at these empirical ratios. These results demonstrate that the predator-prey body mass ratios found in nature may be key to enabling persistence of populations in complex food webs and stabilizing the diversity of natural ecosystems.
NASA Technical Reports Server (NTRS)
Kleb, William L.; Wood, William A.
2004-01-01
The computational simulation community is not routinely publishing independently verifiable tests to accompany new models or algorithms. A survey reveals that only 22% of new models published are accompanied by tests suitable for independently verifying the new model. As the community develops larger codes with increased functionality, and hence increased complexity in terms of the number of building block components and their interactions, it becomes prohibitively expensive for each development group to derive the appropriate tests for each component. Therefore, the computational simulation community is building its collective castle on a very shaky foundation of components with unpublished and unrepeatable verification tests. The computational simulation community needs to begin publishing component level verification tests before the tide of complexity undermines its foundation.
Modeling complex flow structures and drag around a submerged plant of varied posture
NASA Astrophysics Data System (ADS)
Boothroyd, Richard J.; Hardy, Richard J.; Warburton, Jeff; Marjoribanks, Timothy I.
2017-04-01
Although vegetation is present in many rivers, the bulk of past work concerned with modeling the influence of vegetation on flow has considered vegetation to be morphologically simple and has generally neglected the complexity of natural plants. Here we report on a combined flume and numerical model experiment which incorporates time-averaged plant posture, collected through terrestrial laser scanning, into a computational fluid dynamics model to predict flow around a submerged riparian plant. For three depth-limited flow conditions (Reynolds number = 65,000-110,000), plant dynamics were recorded through high-definition video imagery, and the numerical model was validated against flow velocities collected with an acoustic Doppler velocimeter. The plant morphology shows an 18% reduction in plant height and a 14% increase in plant length, compressing and reducing the volumetric canopy morphology as the Reynolds number increases. Plant shear layer turbulence is dominated by Kelvin-Helmholtz type vortices generated through shear instability, the frequency of which is estimated to be between 0.20 and 0.30 Hz, increasing with Reynolds number. These results demonstrate the significant effect that the complex morphology of natural plants has on in-stream drag, and allow a physically determined, species-dependent drag coefficient to be calculated. Given the importance of vegetation in river corridor management, the approach developed here demonstrates the necessity to account for plant motion when calculating vegetative resistance.
Stollenwerk, Kenneth G.
1998-01-01
A natural-gradient tracer test was conducted in an unconfined sand and gravel aquifer on Cape Cod, Massachusetts. Molybdate was included in the injectate to study the effects of variable groundwater chemistry on its aqueous distribution and to evaluate the reliability of laboratory experiments for identifying and quantifying reactions that control the transport of reactive solutes in groundwater. Transport of molybdate in this aquifer was controlled by adsorption. The amount adsorbed varied with aqueous chemistry that changed with depth as freshwater recharge mixed with a plume of sewage-contaminated groundwater. Molybdate adsorption was strongest near the water table where pH (5.7) and the concentration of the competing solutes phosphate (2.3 micromolar) and sulfate (86 micromolar) were low. Adsorption of molybdate decreased with depth as pH increased to 6.5, phosphate increased to 40 micromolar, and sulfate increased to 340 micromolar. A one-site diffuse-layer surface-complexation model and a two-site diffuse-layer surface-complexation model were used to simulate adsorption. Reactions and equilibrium constants for both models were determined in laboratory experiments and used in the reactive-transport model PHAST to simulate the two-dimensional transport of molybdate during the tracer test. No geochemical parameters were adjusted in the simulation to improve the fit between model and field data. Both models simulated the travel distance of the molybdate cloud to within 10% during the 2-year tracer test; however, the two-site diffuse-layer model more accurately simulated the molybdate concentration distribution within the cloud.
The statistical geometry of transcriptome divergence in cell-type evolution and cancer.
Liang, Cong; Forrest, Alistair R R; Wagner, Günter P
2015-01-14
In evolution, body plan complexity increases due to an increase in the number of individualized cell types. Yet, there is very little understanding of the mechanisms that produce this form of organismal complexity. One model for the origin of novel cell types is the sister cell-type model. According to this model, each cell type arises together with a sister cell type through specialization from an ancestral cell type. A key prediction of the sister cell-type model is that gene expression profiles of cell types exhibit tree structure. Here we present a statistical model for detecting tree structure in transcriptomic data and apply it to transcriptomes from ENCODE and FANTOM5. We show that transcriptomes of normal cells harbour substantial amounts of hierarchical structure. In contrast, cancer cell lines have less tree structure, suggesting that the emergence of cancer cells follows different principles from that of evolutionary cell-type origination.
DOT National Transportation Integrated Search
2002-09-01
This is volume I1 of a two-volume report of a study to increase the scope and clarity of air pollution models for : depressed highway and street canyon sites. It presents the atmospheric wind tunnel program conducted to increase the : data base and i...
NASA Astrophysics Data System (ADS)
Jug, Mario; Mennini, Natascia; Melani, Fabrizio; Maestrelli, Francesca; Mura, Paola
2010-11-01
A novel method, which simultaneously exploits experimental (NMR) and theoretically calculated data obtained by a molecular modelling technique, was proposed, to obtain deeper insight into inclusion geometry and possible stereoselective binding of bupivacaine hydrochloride with selected cyclodextrin derivatives. Sulphobuthylether-β-cyclodextrin and water soluble polymeric β-cyclodextrin demonstrated to be the best complexing agents for the drug, resulting in formation of the most stable inclusion complexes with the highest increase in aqueous drug solubility. The drug-carrier binding modes with these cyclodextrins and phenomena which may be directly related to the higher stability and better aqueous solubility of complexes formed were discussed in details.
Transforming Multidisciplinary Customer Requirements to Product Design Specifications
NASA Astrophysics Data System (ADS)
Ma, Xiao-Jie; Ding, Guo-Fu; Qin, Sheng-Feng; Li, Rong; Yan, Kai-Yin; Xiao, Shou-Ne; Yang, Guang-Wu
2017-09-01
With the increasing of complexity of complex mechatronic products, it is necessary to involve multidisciplinary design teams, thus, the traditional customer requirements modeling for a single discipline team becomes difficult to be applied in a multidisciplinary team and project since team members with various disciplinary backgrounds may have different interpretations of the customers' requirements. A new synthesized multidisciplinary customer requirements modeling method is provided for obtaining and describing the common understanding of customer requirements (CRs) and more importantly transferring them into a detailed and accurate product design specifications (PDS) to interact with different team members effectively. A case study of designing a high speed train verifies the rationality and feasibility of the proposed multidisciplinary requirement modeling method for complex mechatronic product development. This proposed research offersthe instruction to realize the customer-driven personalized customization of complex mechatronic product.
Cognitive complexity of the medical record is a risk factor for major adverse events.
Roberson, David; Connell, Michael; Dillis, Shay; Gauvreau, Kimberlee; Gore, Rebecca; Heagerty, Elaina; Jenkins, Kathy; Ma, Lin; Maurer, Amy; Stephenson, Jessica; Schwartz, Margot
2014-01-01
Patients in tertiary care hospitals are more complex than in the past, but the implications of this are poorly understood as "patient complexity" has been difficult to quantify. We developed a tool, the Complexity Ruler, to quantify the amount of data (as bits) in the patient’s medical record. We designated the amount of data in the medical record as the cognitive complexity of the medical record (CCMR). We hypothesized that CCMR is a useful surrogate for true patient complexity and that higher CCMR correlates with risk of major adverse events. The Complexity Ruler was validated by comparing the measured CCMR with physician rankings of patient complexity on specific inpatient services. It was tested in a case-control model of all patients with major adverse events at a tertiary care pediatric hospital from 2005 to 2006. The main outcome measure was an externally reported major adverse event. We measured CCMR for 24 hours before the event, and we estimated lifetime CCMR. Above empirically derived cutoffs, 24-hour and lifetime CCMR were risk factors for major adverse events (odds ratios, 5.3 and 6.5, respectively). In a multivariate analysis, CCMR alone was essentially as predictive of risk as a model that started with 30-plus clinical factors. CCMR correlates with physician assessment of complexity and risk of adverse events. We hypothesize that increased CCMR increases the risk of physician cognitive overload. An automated version of the Complexity Ruler could allow identification of at-risk patients in real time.
Perlovich, German L; Skar, Merete; Bauer-Brandl, Annette
2003-10-01
Cyclodextrins are often used in order to increase the aqueous solubility of drug substances by complexation. In order to investigate the complexation reaction of ibuprofen and hydroxypropyl-beta-cyclodextrin, titration calorimetry was used as a direct method. The thermodynamic parameters of the complexation process (stability constant, K(11); complexation enthalpy, deltaH(c) degrees ) were obtained in two different buffer systems (citric acid/sodium-phosphate and phosphoric acid) at various pH values. Based on these data the relative contributions of the enthalpic and entropic terms of the Gibbs energy to the complexation process have been analyzed. In both buffers the enthalpic and entropic terms are of different sign and this case corresponds to a 'nonclassical' model of hydrophobic interaction. In citric buffer, the main driving force of complexation is the entropy, which increases from 60 to 67% while the pH of the solution increases from 3.2 to 8.0. However, for the phosphoric buffer the entropic term decreases from 60 to 45%, while the pH-value of the solution increases from 5.0 to 8.2, and the driving force of the complexation process changes from entropy to enthalpy. The experimental data of the present study are compared to results of other authors and discrepancies discussed in detail.
Urban Modification of Convection and Rainfall in Complex Terrain
NASA Astrophysics Data System (ADS)
Freitag, B. M.; Nair, U. S.; Niyogi, D.
2018-03-01
Despite a globally growing proportion of cities located in regions of complex terrain, interactions between urbanization and complex terrain and their meteorological impacts are not well understood. We utilize numerical model simulations and satellite data products to investigate such impacts over San Miguel de Tucumán, Argentina. Numerical modeling experiments show urbanization results in 20-30% less precipitation downwind of the city and an eastward shift in precipitation upwind. Our experiments show that changes in surface energy, boundary layer dynamics, and thermodynamics induced by urbanization interact synergistically with the persistent forcing of atmospheric flow by complex terrain. With urbanization increasing in mountainous regions, land-atmosphere feedbacks can exaggerate meteorological forcings leading to weather impacts that require important considerations for sustainable development of urban regions within complex terrain.
Qian, Xinyi Lisa; Yarnal, Careen M; Almeida, David M
2013-01-01
Affective complexity, a manifestation of psychological well-being, refers to the relative independence between positive and negative affect (PA, NA). According to the Dynamic Model of Affect (DMA), stressful situations lead to highly inverse PA-NA relationship, reducing affective complexity. Meanwhile, positive events can sustain affective complexity by restoring PA-NA independence. Leisure, a type of positive events, has been identified as a coping resource. This study used the DMA to assess whether leisure time helps restore affective complexity on stressful days. We found that on days with more leisure time than usual, an individual experienced less negative PA-NA relationship after daily stressful events. The finding demonstrates the value of leisure time as a coping resource and the DMA's contribution to coping research.
Modeling Structure and Dynamics of Protein Complexes with SAXS Profiles
Schneidman-Duhovny, Dina; Hammel, Michal
2018-01-01
Small-angle X-ray scattering (SAXS) is an increasingly common and useful technique for structural characterization of molecules in solution. A SAXS experiment determines the scattering intensity of a molecule as a function of spatial frequency, termed SAXS profile. SAXS profiles can be utilized in a variety of molecular modeling applications, such as comparing solution and crystal structures, structural characterization of flexible proteins, assembly of multi-protein complexes, and modeling of missing regions in the high-resolution structure. Here, we describe protocols for modeling atomic structures based on SAXS profiles. The first protocol is for comparing solution and crystal structures including modeling of missing regions and determination of the oligomeric state. The second protocol performs multi-state modeling by finding a set of conformations and their weights that fit the SAXS profile starting from a single-input structure. The third protocol is for protein-protein docking based on the SAXS profile of the complex. We describe the underlying software, followed by demonstrating their application on interleukin 33 (IL33) with its primary receptor ST2 and DNA ligase IV-XRCC4 complex. PMID:29605933
An Efficient Model-based Diagnosis Engine for Hybrid Systems Using Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Narasimhan, Sriram; Roychoudhury, Indranil; Daigle, Matthew; Pulido, Belarmino
2013-01-01
Complex hybrid systems are present in a large range of engineering applications, like mechanical systems, electrical circuits, or embedded computation systems. The behavior of these systems is made up of continuous and discrete event dynamics that increase the difficulties for accurate and timely online fault diagnosis. The Hybrid Diagnosis Engine (HyDE) offers flexibility to the diagnosis application designer to choose the modeling paradigm and the reasoning algorithms. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. However, HyDE faces some problems regarding performance in terms of complexity and time. Our focus in this paper is on developing efficient model-based methodologies for online fault diagnosis in complex hybrid systems. To do this, we propose a diagnosis framework where structural model decomposition is integrated within the HyDE diagnosis framework to reduce the computational complexity associated with the fault diagnosis of hybrid systems. As a case study, we apply our approach to a diagnostic testbed, the Advanced Diagnostics and Prognostics Testbed (ADAPT), using real data.
LEGO products have become more complex
2018-01-01
The LEGO Group has become the largest toy company in the world and they can look back to a proud history of more than 50 years of producing bricks and other toys. Starting with a simple set of basic bricks their range of toys appeared to have increased in complexity over the years. We processed the inventories of most sets from 1955–2015 and our analysis showed that LEGO sets have become bigger, more colorful and more specialized. The vocabulary of bricks has increased significantly resulting in sets sharing fewer bricks. The increased complexity of LEGO sets and bricks enables skilled builders to design ever more amazing models but it may also overwhelm less skilled or younger builders. PMID:29293655
LEGO products have become more complex.
Bartneck, Christoph; Moltchanova, Elena
2018-01-01
The LEGO Group has become the largest toy company in the world and they can look back to a proud history of more than 50 years of producing bricks and other toys. Starting with a simple set of basic bricks their range of toys appeared to have increased in complexity over the years. We processed the inventories of most sets from 1955-2015 and our analysis showed that LEGO sets have become bigger, more colorful and more specialized. The vocabulary of bricks has increased significantly resulting in sets sharing fewer bricks. The increased complexity of LEGO sets and bricks enables skilled builders to design ever more amazing models but it may also overwhelm less skilled or younger builders.
Neighbor effect in complexation of a conjugated polymer.
Sosorev, Andrey; Zapunidi, Sergey
2013-09-19
Charge-transfer complex (CTC) formation between a conjugated polymer and low-molecular-weight organic acceptor is proposed to be driven by the neighbor effect. Formation of a CTC on the polymer chain results in an increased probability of new CTC formation near the existing one. We present an analytical model for CTC distribution considering the neighbor effect, based on the principles of statistical mechanics. This model explains the experimentally observed threshold-like dependence of the CTC concentration on the acceptor content in a polymer:acceptor blend. It also allows us to evaluate binding energies of the complexes.
How rare is complex life in the Milky Way?
Bounama, Christine; von Bloh, Werner; Franck, Siegfried
2007-10-01
An integrated Earth system model was applied to calculate the number of habitable Earth-analog planets that are likely to have developed primitive (unicellular) and complex (multicellular) life in extrasolar planetary systems. The model is based on the global carbon cycle mediated by life and driven by increasing stellar luminosity and plate tectonics. We assumed that the hypothetical primitive and complex life forms differed in their temperature limits and CO(2) tolerances. Though complex life would be more vulnerable to environmental stress, its presence would amplify weathering processes on a terrestrial planet. The model allowed us to calculate the average number of Earth-analog planets that may harbor such life by using the formation rate of Earth-like planets in the Milky Way as well as the size of a habitable zone that could support primitive and complex life forms. The number of planets predicted to bear complex life was found to be approximately 2 orders of magnitude lower than the number predicted for primitive life forms. Our model predicted a maximum abundance of such planets around 1.8 Ga ago and allowed us to calculate the average distance between potentially habitable planets in the Milky Way. If the model predictions are accurate, the future missions DARWIN (up to a probability of 65%) and TPF (up to 20%) are likely to detect at least one planet with a biosphere composed of complex life.
Competitive sorption of carbonate and arsenic to hematite: combined ATR-FTIR and batch experiments.
Brechbühl, Yves; Christl, Iso; Elzinga, Evert J; Kretzschmar, Ruben
2012-07-01
The competitive sorption of carbonate and arsenic to hematite was investigated in closed-system batch experiments. The experimental conditions covered a pH range of 3-7, arsenate concentrations of 3-300 μM, and arsenite concentrations of 3-200 μM. Dissolved carbonate concentrations were varied by fixing the CO(2) partial pressure at 0.39 (atmospheric), 10, or 100 hPa. Sorption data were modeled with a one-site three plane model considering carbonate and arsenate surface complexes derived from ATR-FTIR spectroscopy analyses. Macroscopic sorption data revealed that in the pH range 3-7, carbonate was a weak competitor for both arsenite and arsenate. The competitive effect of carbonate increased with increasing CO(2) partial pressure and decreasing arsenic concentrations. For arsenate, sorption was reduced by carbonate only at slightly acidic to neutral pH values, whereas arsenite sorption was decreased across the entire pH range. ATR-FTIR spectra indicated the predominant formation of bidentate binuclear inner-sphere surface complexes for both sorbed arsenate and sorbed carbonate. Surface complexation modeling based on the dominant arsenate and carbonate surface complexes indicated by ATR-FTIR and assuming inner-sphere complexation of arsenite successfully described the macroscopic sorption data. Our results imply that in natural arsenic-contaminated systems where iron oxide minerals are important sorbents, dissolved carbonate may increase aqueous arsenite concentrations, but will affect dissolved arsenate concentrations only at neutral to alkaline pH and at very high CO(2) partial pressures. Copyright © 2012 Elsevier Inc. All rights reserved.
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
Biofabricated constructs as tissue models: a short review.
Costa, Pedro F
2015-04-01
Biofabrication is currently able to provide reliable models for studying the development of cells and tissues into multiple environments. As the complexity of biofabricated constructs is becoming increasingly higher their ability to closely mimic native tissues and organs is also increasing. Various biofabrication technologies currently allow to precisely build cell/tissue constructs at multiple dimension ranges with great accuracy. Such technologies are also able to assemble together multiple types of cells and/or materials and generate constructs closely mimicking various types of tissues. Furthermore, the high degree of automation involved in these technologies enables the study of large arrays of testing conditions within increasingly smaller and automated devices both in vitro and in vivo. Despite not yet being able to generate constructs similar to complex tissues and organs, biofabrication is rapidly evolving in that direction. One major hurdle to be overcome in order for such level of complex detail to be achieved is the ability to generate complex vascular structures within biofabricated constructs. This review describes several of the most relevant technologies and methodologies currently utilized within biofabrication and provides as well a brief overview of their current and future potential applications.
Stephen R. Shifley; Hong S. He; Heike Lischke; Wen J. Wang; Wenchi Jin; Eric J. Gustafson; Jonathan R. Thompson; Frank R. Thompson; William D. Dijak; Jian Yang
2017-01-01
Context. Quantitative models of forest dynamics have followed a progression toward methods with increased detail, complexity, and spatial extent. Objectives. We highlight milestones in the development of forest dynamics models and identify future research and application opportunities. Methods. We reviewed...
Validating and Optimizing the Effects of Model Progression in Simulation-Based Inquiry Learning
ERIC Educational Resources Information Center
Mulder, Yvonne G.; Lazonder, Ard W.; de Jong, Ton; Anjewierden, Anjo; Bollen, Lars
2012-01-01
Model progression denotes the organization of the inquiry learning process in successive phases of increasing complexity. This study investigated the effectiveness of model progression in general, and explored the added value of either broadening or narrowing students' possibilities to change model progression phases. Results showed that…
Background / Question / Methods Planning for the recovery of threatened species is increasingly informed by spatially-explicit population models. However, using simulation model results to guide land management decisions can be difficult due to the volume and complexity of model...
Exploring component-based approaches in forest landscape modeling
H. S. He; D. R. Larsen; D. J. Mladenoff
2002-01-01
Forest management issues are increasingly required to be addressed in a spatial context, which has led to the development of spatially explicit forest landscape models. The numerous processes, complex spatial interactions, and diverse applications in spatial modeling make the development of forest landscape models difficult for any single research group. New...
Underestimation of simulated N2O flux in a model comparison of DayCent, DNDC, and EPIC
USDA-ARS?s Scientific Manuscript database
Process-based models are increasingly used as tool for studying complex agroecosystem interactions N2O emissions from agricultural fields. The widespread use of these models to conduct research and inform policy benefits from periodic model comparisons that assess the state of agroecosystem modeling...
Acoustic backscatter models of fish: Gradual or punctuated evolution
NASA Astrophysics Data System (ADS)
Horne, John K.
2004-05-01
Sound-scattering characteristics of aquatic organisms are routinely investigated using theoretical and numerical models. Development of the inverse approach by van Holliday and colleagues in the 1970s catalyzed the development and validation of backscatter models for fish and zooplankton. As the understanding of biological scattering properties increased, so did the number and computational sophistication of backscatter models. The complexity of data used to represent modeled organisms has also evolved in parallel to model development. Simple geometric shapes representing body components or the whole organism have been replaced by anatomically accurate representations derived from imaging sensors such as computer-aided tomography (CAT) scans. In contrast, Medwin and Clay (1998) recommend that fish and zooplankton should be described by simple theories and models, without acoustically superfluous extensions. Since van Holliday's early work, how has data and computational complexity influenced accuracy and precision of model predictions? How has the understanding of aquatic organism scattering properties increased? Significant steps in the history of model development will be identified and changes in model results will be characterized and compared. [Work supported by ONR and the Alaska Fisheries Science Center.
Protein docking by the interface structure similarity: how much structure is needed?
Sinha, Rohita; Kundrotas, Petras J; Vakser, Ilya A
2012-01-01
The increasing availability of co-crystallized protein-protein complexes provides an opportunity to use template-based modeling for protein-protein docking. Structure alignment techniques are useful in detection of remote target-template similarities. The size of the structure involved in the alignment is important for the success in modeling. This paper describes a systematic large-scale study to find the optimal definition/size of the interfaces for the structure alignment-based docking applications. The results showed that structural areas corresponding to the cutoff values <12 Å across the interface inadequately represent structural details of the interfaces. With the increase of the cutoff beyond 12 Å, the success rate for the benchmark set of 99 protein complexes, did not increase significantly for higher accuracy models, and decreased for lower-accuracy models. The 12 Å cutoff was optimal in our interface alignment-based docking, and a likely best choice for the large-scale (e.g., on the scale of the entire genome) applications to protein interaction networks. The results provide guidelines for the docking approaches, including high-throughput applications to modeled structures.
Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-01-01
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
Saffer, D.M.; Bekins, B.A.
2006-01-01
At many subduction zones, accretionary complexes form as sediment is off-scraped from the subducting plate. Mechanical models that treat accretionary complexes as critically tapered wedges of sediment demonstrate that pore pressure controls their taper angle by modifying basal and internal shear strength. Here, we combine a numerical model of groundwater flow with critical taper theory to quantify the effects of sediment and de??collement permeability, sediment thickness, sediment partitioning between accretion and underthrusting, and plate convergence rate on steady state pore pressure. Our results show that pore pressure in accretionary wedges can be viewed as a dynamically maintained response to factors which drive pore pressure (source terms) and those that limit flow (permeability and drainage path length). We find that sediment permeability and incoming sediment thickness are the most important factors, whereas fault permeability and the partitioning of sediment have a small effect. For our base case model scenario, as sediment permeability is increased, pore pressure decreases from near-lithostatic to hydrostatic values and allows stable taper angles to increase from ??? 2.5?? to 8??-12.5??. With increased sediment thickness in our models (from 100 to 8000 m), increased pore pressure drives a decrease in stable taper angle from 8.4??-12.5?? to 15?? to <4??) with increased sediment thickness (from <1 to 7 km). One key implication is that hydrologic properties may strongly influence the strength of the crust in a wide range of geologic settings. Copyright 2006 by the American Geophysical Union.
Family Environment and Cognitive Development: Twelve Analytic Models
ERIC Educational Resources Information Center
Walberg, Herbert J.; Marjoribanks, Kevin
1976-01-01
The review indicates that refined measures of the family environment and the use of complex statistical models increase the understanding of the relationships between socioeconomic status, sibling variables, family environment, and cognitive development. (RC)
GUIDELINES TO ASSESSING REGIONAL VULNERABILITIES
Decision-makers today face increasingly complex environmental problems that require integrative and innovative approaches for analyzing, modeling, and interpreting various types of information. ReVA acknowledges this need and is designed to evaluate methods and models for synthe...
A Telecommunications Industry Primer: A Systems Model.
ERIC Educational Resources Information Center
Obermier, Timothy R.; Tuttle, Ronald H.
2003-01-01
Describes the Telecommunications Systems Model to help technical educators and students understand the increasingly complex telecommunications infrastructure. Specifically looks at ownership and regulatory status, service providers, transport medium, network protocols, and end-user services. (JOW)
Evolution of complexity in the zebrafish synapse proteome
Bayés, Àlex; Collins, Mark O.; Reig-Viader, Rita; Gou, Gemma; Goulding, David; Izquierdo, Abril; Choudhary, Jyoti S.; Emes, Richard D.; Grant, Seth G. N.
2017-01-01
The proteome of human brain synapses is highly complex and is mutated in over 130 diseases. This complexity arose from two whole-genome duplications early in the vertebrate lineage. Zebrafish are used in modelling human diseases; however, its synapse proteome is uncharacterized, and whether the teleost-specific genome duplication (TSGD) influenced complexity is unknown. We report the characterization of the proteomes and ultrastructure of central synapses in zebrafish and analyse the importance of the TSGD. While the TSGD increases overall synapse proteome complexity, the postsynaptic density (PSD) proteome of zebrafish has lower complexity than mammals. A highly conserved set of ∼1,000 proteins is shared across vertebrates. PSD ultrastructural features are also conserved. Lineage-specific proteome differences indicate that vertebrate species evolved distinct synapse types and functions. The data sets are a resource for a wide range of studies and have important implications for the use of zebrafish in modelling human synaptic diseases. PMID:28252024
Factor complexity of crash occurrence: An empirical demonstration using boosted regression trees.
Chung, Yi-Shih
2013-12-01
Factor complexity is a characteristic of traffic crashes. This paper proposes a novel method, namely boosted regression trees (BRT), to investigate the complex and nonlinear relationships in high-variance traffic crash data. The Taiwanese 2004-2005 single-vehicle motorcycle crash data are used to demonstrate the utility of BRT. Traditional logistic regression and classification and regression tree (CART) models are also used to compare their estimation results and external validities. Both the in-sample cross-validation and out-of-sample validation results show that an increase in tree complexity provides improved, although declining, classification performance, indicating a limited factor complexity of single-vehicle motorcycle crashes. The effects of crucial variables including geographical, time, and sociodemographic factors explain some fatal crashes. Relatively unique fatal crashes are better approximated by interactive terms, especially combinations of behavioral factors. BRT models generally provide improved transferability than conventional logistic regression and CART models. This study also discusses the implications of the results for devising safety policies. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Timmermans, Joris; Gastellu-Etchegorry, Jean Philippe; van der Tol, Christiaan; Verhoef, Wout; Vekerdy, Zoltan; Su, Zhongbo
2017-04-01
Accurate estimation of the radiative transfer (RT) over vegetation is the corner stone of agricultural and hydrological remote sensing applications. Present remote sensing sensors mostly use traditional optical, thermal and microwave observations. However with these traditional observations characterization of the light efficiency and photosynthetic rate can only be accomplished indirectly. A promising new method of observing these processes is by using the fluorescent emitted radiation. This approach was recently highlighted due to the selection of the FLEX sensor as a future Earth Explorer by the European Space agency (ESA). Several modelling activities have been undertaken to better understand the technical feasibilities of this sensor. Within these studies, the SCOPE model has been chosen as the baseline algorithm. This model combines a detailed RT description of the canopy, using a discrete version of the SAIL model, with a description of photosynthetic processes (by use of the Farquhar/Ball-Berry model). Consequently, this model is capable of simulating simultaneously the biophysical processes and jointly the fluorescent, optical and thermal RT. The SAIL model however is a 1D RT model and consequently provides higher uncertainties with increasing vegetation structures. The main objective of this research is to investigate the limitations of the RT model component of the SCOPE model over complex canopies. In particular the aim of this research is to evaluate the validity for increasingly structural complex canopies', on the bidirectional reflectance distribution functions (BRDF) of these canopies. This was accomplished by evaluating the simulated outgoing radiation from SCOPE/SAIL against simulations of the DART 3D RT model. In total nine different scenarios were simulated with the DART RTM with increasing structural complexity, ranging from the simple 'Plot' scenario to the highly complex 'Multiple Crown' scenario. The canopy parameters are retrieved from a terrestrial laser scan of the Speulderbos in the Netherlands. The comparison between DART and SCOPE/SLC models showed a good match for the simple scenarios. Calculated rMSDs showed lower than 7.5% errors for crown coverage values lower than 0.87, with the Near-Hotspot viewing angles found to be the largest contributor to this deviation. For more complex scenarios (using Multiple Crowns), the comparison between SCOPE and DART showed mixed results. Good results were obtained for crown coverage values of 0.93, with rMSD (6.77% and 5.96%), lower than the defined threshold value, except near hotspot. For scenarios with crown coverages lower than 0.93 the rMSD were too large to validate the use of SCOPE model. When considering the Soil Leaf Canopy (SLC) model, an improved version of SAIL that considers the canopy clumping, better results were obtained for these complex scenarios, with good agreement for medium crown coverage values (0.93 and 0.87) with rMSD (6.33% and 5.99; 6.66% and 7.12%). This indicates that the radiative transfer model within SCOPE might be upgraded in the future.
Nitric oxide bioavailability in the microcirculation: insights from mathematical models.
Tsoukias, Nikolaos M
2008-11-01
Over the last 30 years nitric oxide (NO) has emerged as a key signaling molecule involved in a number of physiological functions, including in the regulation of microcirculatory tone. Despite significant scientific contributions, fundamental questions about NO's role in the microcirculation remain unanswered. Mathematical modeling can assist in investigations of microcirculatory NO physiology and address experimental limitations in quantifying vascular NO concentrations. The number of mathematical models investigating the fate of NO in the vasculature has increased over the last few years, and new models are continuously emerging, incorporating an increasing level of complexity and detail. Models investigate mechanisms that affect NO availability in health and disease. They examine the significance of NO release from nonendothelial sources, the effect of transient release, and the complex interaction of NO with other substances, such as heme-containing proteins and reactive oxygen species. Models are utilized to test and generate hypotheses for the mechanisms that regulate NO-dependent signaling in the microcirculation.
NASA Astrophysics Data System (ADS)
Wray, Timothy J.
Computational fluid dynamics (CFD) is routinely used in performance prediction and design of aircraft, turbomachinery, automobiles, and in many other industrial applications. Despite its wide range of use, deficiencies in its prediction accuracy still exist. One critical weakness is the accurate simulation of complex turbulent flows using the Reynolds-Averaged Navier-Stokes equations in conjunction with a turbulence model. The goal of this research has been to develop an eddy viscosity type turbulence model to increase the accuracy of flow simulations for mildly separated flows, flows with rotation and curvature effects, and flows with surface roughness. It is accomplished by developing a new zonal one-equation turbulence model which relies heavily on the flow physics; it is now known in the literature as the Wray-Agarwal one-equation turbulence model. The effectiveness of the new model is demonstrated by comparing its results with those obtained by the industry standard one-equation Spalart-Allmaras model and two-equation Shear-Stress-Transport k - o model and experimental data. Results for subsonic, transonic, and supersonic flows in and about complex geometries are presented. It is demonstrated that the Wray-Agarwal model can provide the industry and CFD researchers an accurate, efficient, and reliable turbulence model for the computation of a large class of complex turbulent flows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio–economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models,more » 43 were commonly applied due to their versatility, accessibility, user–friendliness, and excellent user–support. Forty–one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user–support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user–friendly forms, increasing user–support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.« less
Brewer, Shannon K.; Worthington, Thomas; Mollenhauer, Robert; Stewart, David; McManamay, Ryan; Guertault, Lucie; Moore, Desiree
2018-01-01
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio‐economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models, 43 were commonly applied due to their versatility, accessibility, user‐friendliness, and excellent user‐support. Forty‐one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user‐support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user‐friendly forms, increasing user‐support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Nonetheless, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.
Brewer, Shannon K.; Worthington, Thomas A.; Mollenhauer, Robert; ...
2018-04-06
Ecohydrology combines empiricism, data analytics, and the integration of models to characterize linkages between ecological and hydrological processes. A challenge for practitioners is determining which models best generalizes heterogeneity in hydrological behaviour, including water fluxes across spatial and temporal scales, integrating environmental and socio–economic activities to determine best watershed management practices and data requirements. We conducted a literature review and synthesis of hydrologic, hydraulic, water quality, and ecological models designed for solving interdisciplinary questions. We reviewed 1,275 papers and identified 178 models that have the capacity to answer an array of research questions about ecohydrology or ecohydraulics. Of these models,more » 43 were commonly applied due to their versatility, accessibility, user–friendliness, and excellent user–support. Forty–one of 43 reviewed models were linked to at least 1 other model especially: Water Quality Analysis Simulation Program (linked to 21 other models), Soil and Water Assessment Tool (19), and Hydrologic Engineering Center's River Analysis System (15). However, model integration was still relatively infrequent. There was substantial variation in model applications, possibly an artefact of the regional focus of research questions, simplicity of use, quality of user–support efforts, or a limited understanding of model applicability. Simply increasing the interoperability of model platforms, transformation of models to user–friendly forms, increasing user–support, defining the reliability and risk associated with model results, and increasing awareness of model applicability may promote increased use of models across subdisciplines. Furthermore, the current availability of models allows an array of interdisciplinary questions to be addressed, and model choice relates to several factors including research objective, model complexity, ability to link to other models, and interface choice.« less
2016 International Land Model Benchmarking (ILAMB) Workshop Report
NASA Technical Reports Server (NTRS)
Hoffman, Forrest M.; Koven, Charles D.; Keppel-Aleks, Gretchen; Lawrence, David M.; Riley, William J.; Randerson, James T.; Ahlstrom, Anders; Abramowitz, Gabriel; Baldocchi, Dennis D.; Best, Martin J.;
2016-01-01
As earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of terrestrial biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistryclimate feedbacks and ecosystem processes in these models are essential for reducing the acknowledged substantial uncertainties in 21st century climate change projections.
2016 International Land Model Benchmarking (ILAMB) Workshop Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forrest M.; Koven, Charles D.; Keppel-Aleks, Gretchen
As Earth system models become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing uncertainties associated with projections of climate change during the remainder of the 21st century.
Rivas, Elena; Lang, Raymond; Eddy, Sean R
2012-02-01
The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
Rivas, Elena; Lang, Raymond; Eddy, Sean R.
2012-01-01
The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308
Chun, Ji-Yeon; Cho, Hyung-Yong; Min, Sang-Gi
2014-01-01
This study investigated the effects of γ-aminobutylic acid (GABA) on the quality and sensorial properties of both the GABA/NaCl complex and model meat products. GABA/NaCl complex was prepared by spray-drying, and the surface dimensions, morphology, rheology, and saltiness were characterized. For model meat products, pork patties were prepared by replacing NaCl with GABA. For characteristics of the complex, increasing GABA concentration increased the surface dimensions of the complex. However, GABA did not affect the rheological properties of solutions containing the complex. The addition of 2% GABA exhibited significantly higher saltiness than the control (no GABA treatment). In the case of pork patties, sensory testing indicated that the addition of GABA decreased the saltiness intensity. Both the intensity of juiciness and tenderness of patties containing GABA also scored lower than the control, based on the NaCl reduction. These results were consistent with the quality characteristics (cooking loss and texture profile analysis). Nevertheless, overall acceptability of the pork patties showed that up to 1.5%, patties containing GABA did not significantly differ from the control. Consequently, the results indicated that GABA has a potential application in meat products, but also manifested a deterioration of quality by the NaCl reduction, which warrants further exploration. PMID:26761294
A Digital Ecosystems Model of Assessment Feedback on Student Learning
ERIC Educational Resources Information Center
Gomez, Stephen; Andersson, Holger; Park, Julian; Maw, Stephen; Crook, Anne; Orsmond, Paul
2013-01-01
The term ecosystem has been used to describe complex interactions between living organisms and the physical world. The principles underlying ecosystems can also be applied to complex human interactions in the digital world. As internet technologies make an increasing contribution to teaching and learning practice in higher education, the…
Prospects of application of additive technologies for increasing the efficiency of impeller machines
NASA Astrophysics Data System (ADS)
Belova, O. V.; Borisov, Yu. A.
2017-08-01
Impeller machine is a device in which the flow path carries out the supply (or retraction) of mechanical energy to the flow of a working fluid passing through the machine. To increase the efficiency of impeller machines, it is necessary to use design modern technologies, namely the use of numerical methods for conducting research in the field of gas dynamics, as well as additive manufacturing (AM) for the of both prototypes and production model. AM technologies are deservedly rightly called revolutionary because they give unique possibility for manufacturing products, creating perfect forms, both light and durable. The designers face the challenge of developing a new design methodology, since AM allows the use of the concept of "Complexity For Free". The "Complexity For Free" conception is based on: complexity of the form; hierarchical complexity; complexity of the material; functional complexity. The new technical items design method according to a functional principle is also investigated.
NASA Astrophysics Data System (ADS)
Cooper, Rebecca Elizabeth; Eusterhues, Karin; Wegner, Carl-Eric; Totsche, Kai Uwe; Küsel, Kirsten
2017-11-01
The formation of Fe(III) oxides in natural environments occurs in the presence of natural organic matter (OM), resulting in the formation of OM-mineral complexes that form through adsorption or coprecipitation processes. Thus, microbial Fe(III) reduction in natural environments most often occurs in the presence of OM-mineral complexes rather than pure Fe(III) minerals. This study investigated to what extent does the content of adsorbed or coprecipitated OM on ferrihydrite influence the rate of Fe(III) reduction by Shewanella oneidensis MR-1, a model Fe(III)-reducing microorganism, in comparison to a microbial consortium extracted from the acidic, Fe-rich Schlöppnerbrunnen fen. We found that increased OM content led to increased rates of microbial Fe(III) reduction by S. oneidensis MR-1 in contrast to earlier findings with the model organism Geobacter bremensis. Ferrihydrite-OM coprecipitates were reduced slightly faster than ferrihydrites with adsorbed OM. Surprisingly, the complex microbial consortia stimulated by a mixture of electrons donors (lactate, acetate, and glucose) mimics S. oneidensis under the same experimental Fe(III)-reducing conditions suggesting similar mechanisms of electron transfer whether or not the OM is adsorbed or coprecipitated to the mineral surfaces. We also followed potential shifts of the microbial community during the incubation via 16S rRNA gene sequence analyses to determine variations due to the presence of adsorbed or coprecipitated OM-ferrihydrite complexes in contrast to pure ferrihydrite. Community profile analyses showed no enrichment of typical model Fe(III)-reducing bacteria, such as Shewanella or Geobacter sp., but an enrichment of fermenters (e.g., Enterobacteria) during pure ferrihydrite incubations which are known to use Fe(III) as an electron sink. Instead, OM-mineral complexes favored the enrichment of microbes including Desulfobacteria and Pelosinus sp., both of which can utilize lactate and acetate as an electron donor under Fe(III)-reducing conditions. In summary, this study shows that increasing concentrations of OM in OM-mineral complexes determines microbial Fe(III) reduction rates and shapes the microbial community structure involved in the reductive dissolution of ferrihydrite. Similarities observed between the complex Fe(III)-reducing microbial consortia and the model Fe(III)-reducer S. oneidensis MR-1 suggest electron-shuttling mechanisms dominate in OM-rich environments, including soils, sediments, and fens, where natural OM interacts with Fe(III) oxides during mineral formation.
An ecohydrologic model for a shallow groundwater urban environment.
Arden, Sam; Ma, Xin Cissy; Brown, Mark
2014-01-01
The urban environment is a patchwork of natural and artificial surfaces that results in complex interactions with and impacts to natural hydrologic cycles. Evapotranspiration is a major hydrologic flow that is often altered through urbanization, although the mechanisms of change are sometimes difficult to tease out due to difficulty in effectively simulating soil-plant-atmosphere interactions. This paper introduces a simplified yet realistic model that is a combination of existing surface runoff and ecohydrology models designed to increase the quantitative understanding of complex urban hydrologic processes. Results demonstrate that the model is capable of simulating the long-term variability of major hydrologic fluxes as a function of impervious surface, temperature, water table elevation, canopy interception, soil characteristics, precipitation and complex mechanisms of plant water uptake. These understandings have potential implications for holistic urban water system management.
The effective integration of analysis, modeling, and simulation tools.
DOT National Transportation Integrated Search
2013-08-01
The need for model integration arises from the recognition that both transportation decisionmaking and the tools supporting it continue to increase in complexity. Many strategies that agencies evaluate require using tools that are sensitive to supply...
DOT National Transportation Integrated Search
2009-01-01
Metropolitan planning agencies face increasingly complex issues in modeling interactions between the built environment and multimodal transportation systems. Although great strides have been made in simulating land use, travel demand, and traffic flo...
Logic-Based Models for the Analysis of Cell Signaling Networks†
2010-01-01
Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
Entering AN ERA of Synthesis of Modeling
NASA Astrophysics Data System (ADS)
Guerin, Stephen
First, I believe we're entering an era of synthesis of modeling. Over the past 20 years, we've seen the proliferation of many isolated complex systems models. I think we now need tools for researchers, policy makers and the public to share models. Sharing could happen by stacking different layers of spatial agent-based models in geographic information systems and projecting interactive visualization out onto shared surfaces. Further, we need to make model authoring tools much more accessible to the point where motivated policy makers can author on their own. With the increased ability to author and share models, I believe this will allow us to scale our research to understand and manage the many interacting systems that make up our complex world...
Qian, Xinyi (Lisa); Yarnal, Careen M.; Almeida, David M.
2013-01-01
Affective complexity, a manifestation of psychological well-being, refers to the relative independence between positive and negative affect (PA, NA). According to the Dynamic Model of Affect (DMA), stressful situations lead to highly inverse PA-NA relationship, reducing affective complexity. Meanwhile, positive events can sustain affective complexity by restoring PA-NA independence. Leisure, a type of positive events, has been identified as a coping resource. This study used the DMA to assess whether leisure time helps restore affective complexity on stressful days. We found that on days with more leisure time than usual, an individual experienced less negative PA-NA relationship after daily stressful events. The finding demonstrates the value of leisure time as a coping resource and the DMA’s contribution to coping research. PMID:24659826
Demystifying the cytokine network: Mathematical models point the way.
Morel, Penelope A; Lee, Robin E C; Faeder, James R
2017-10-01
Cytokines provide the means by which immune cells communicate with each other and with parenchymal cells. There are over one hundred cytokines and many exist in families that share receptor components and signal transduction pathways, creating complex networks. Reductionist approaches to understanding the role of specific cytokines, through the use of gene-targeted mice, have revealed further complexity in the form of redundancy and pleiotropy in cytokine function. Creating an understanding of the complex interactions between cytokines and their target cells is challenging experimentally. Mathematical and computational modeling provides a robust set of tools by which complex interactions between cytokines can be studied and analyzed, in the process creating novel insights that can be further tested experimentally. This review will discuss and provide examples of the different modeling approaches that have been used to increase our understanding of cytokine networks. This includes discussion of knowledge-based and data-driven modeling approaches and the recent advance in single-cell analysis. The use of modeling to optimize cytokine-based therapies will also be discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
A Short Note on Estimating the Testlet Model with Different Estimators in Mplus
ERIC Educational Resources Information Center
Luo, Yong
2018-01-01
Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…
Universal Session-Level Change Processes in an Early Session of Psychotherapy: Path Models
ERIC Educational Resources Information Center
Kolden, Gregory G.; Chisholm-Stockard, Sarah M.; Strauman, Timothy J.; Tierney, Sandy C.; Mullen, Elizabeth A.; Schneider, Kristin L.
2006-01-01
The authors used structural equation modeling to investigate universal change processes identified in the generic model of psychotherapy (GMP). Three path models of increasing complexity were examined in Study 1 in dynamic therapy. The best fitting model from Study one was replicated in Study two for participants receiving either cognitive or…
Modeling Costal Zone Responses to Sea-Level Rise Using MoCCS: A Model of Complex Coastal System
NASA Astrophysics Data System (ADS)
Dai, H.; Niedoroda, A. W.; Ye, M.; Saha, B.; Donoghue, J. F.; Kish, S.
2011-12-01
Large-scale coastal systems consisting of several morphological components (e.g. beach, surf zone, dune, inlet, shoreface, and estuary) can be expected to exhibit complex and interacting responses to changes in the rate of sea level rise and storm climate. We have developed a numerical model of complex coastal systems (MoCCS), derived from earlier morphdynamic models, to represent the large-scale time-averaged physical processes that shape each component and govern the component interactions. These control the ongoing evolution of the barrier islands, beach and dune erosion, shoal formation and sand withdrawal at tidal inlets, depth changes in the bay, and changes in storm flooding. The model has been used to study the response of an idealized coastal system with physical characteristics and storm climatology similar to Santa Rosa Island on the Florida Panhandle coast. Five SLR scenarios have been used, covering the range of recently published projections for the next century. Each scenario has been input with a constant and then a time-varying storm climate. The results indicate that substantial increases in the rate of beach erosion are largely due to increased sand transfer to inlet shoals with increased rates of sea level rise. The barrier island undergoes cycles of dune destruction and regrowth, leading to sand deposition. This largely maintains island freeboard but is progressively less effective in offsetting bayside inundation and marsh habitat loss at accelerated sea level rise rates.
Hopkins, Jim
2016-01-01
The main concepts of the free energy (FE) neuroscience developed by Karl Friston and colleagues parallel those of Freud's Project for a Scientific Psychology. In Hobson et al. (2014) these include an innate virtual reality generator that produces the fictive prior beliefs that Freud described as the primary process. This enables Friston's account to encompass a unified treatment-a complexity theory-of the role of virtual reality in both dreaming and mental disorder. In both accounts the brain operates to minimize FE aroused by sensory impingements-including interoceptive impingements that report compliance with biological imperatives-and constructs a representation/model of the causes of impingement that enables this minimization. In Friston's account (variational) FE equals complexity minus accuracy, and is minimized by increasing accuracy and decreasing complexity. Roughly the brain (or model) increases accuracy together with complexity in waking. This is mediated by consciousness-creating active inference-by which it explains sensory impingements in terms of perceptual experiences of their causes. In sleep it reduces complexity by processes that include both synaptic pruning and consciousness/virtual reality/dreaming in REM. The consciousness-creating active inference that effects complexity-reduction in REM dreaming must operate on FE-arousing data distinct from sensory impingement. The most relevant source is remembered arousals of emotion, both recent and remote, as processed in SWS and REM on "active systems" accounts of memory consolidation/reconsolidation. Freud describes these remembered arousals as condensed in the dreamwork for use in the conscious contents of dreams, and similar condensation can be seen in symptoms. Complexity partly reflects emotional conflict and trauma. This indicates that dreams and symptoms are both produced to reduce complexity in the form of potentially adverse (traumatic or conflicting) arousals of amygdala-related emotions. Mental disorder is thus caused by computational complexity together with mechanisms like synaptic pruning that have evolved for complexity-reduction; and important features of disorder can be understood in these terms. Details of the consilience among Freudian, systems consolidation, and complexity-reduction accounts appear clearly in the analysis of a single fragment of a dream, indicating also how complexity reduction proceeds by a process resembling Bayesian model selection.
Frequency analysis of stress relaxation dynamics in model asphalts
NASA Astrophysics Data System (ADS)
Masoori, Mohammad; Greenfield, Michael L.
2014-09-01
Asphalt is an amorphous or semi-crystalline material whose mechanical performance relies on viscoelastic responses to applied strain or stress. Chemical composition and its effect on the viscoelastic properties of model asphalts have been investigated here by computing complex modulus from molecular dynamics simulation results for two different model asphalts whose compositions each resemble the Strategic Highway Research Program AAA-1 asphalt in different ways. For a model system that contains smaller molecules, simulation results for storage and loss modulus at 443 K reach both the low and high frequency scaling limits of the Maxwell model. Results for a model system composed of larger molecules (molecular weights 300-900 g/mol) with longer branches show a quantitatively higher complex modulus that decreases significantly as temperature increases over 400-533 K. Simulation results for its loss modulus approach the low frequency scaling limit of the Maxwell model at only the highest temperature simulated. A Black plot or van Gurp-Palman plot of complex modulus vs. phase angle for the system of larger molecules suggests some overlap among results at different temperatures for less high frequencies, with an interdependence consistent with the empirical Christensen-Anderson-Marasteanu model. Both model asphalts are thermorheologically complex at very high frequencies, where they show a loss peak that appears to be independent of temperature and density.
Mello-Andrade, Francyelli; da Costa, Wanderson Lucas; Pires, Wanessa Carvalho; Pereira, Flávia de Castro; Cardoso, Clever Gomes; Lino-Junior, Ruy de Souza; Irusta, Vicente Raul Chavarria; Carneiro, Cristiene Costa; de Melo-Reis, Paulo Roberto; Castro, Carlos Henrique; Almeida, Marcio Aurélio Pinheiro; Batista, Alzir Azevedo; Silveira-Lacerda, Elisângela de Paula
2017-10-01
Peritoneal carcinomatosis is considered as a potentially lethal clinical condition, and the therapeutic options are limited. The antitumor effectiveness of the [Ru(l-Met)(bipy)(dppb)]PF 6 (1) and the [Ru(l-Trp)(bipy)(dppb)]PF 6 (2) complexes were evaluated in the peritoneal carcinomatosis model, Ehrlich ascites carcinoma-bearing Swiss mice. This is the first study that evaluated the effect of Ru(II)/amino acid complexes for antitumor activity in vivo. Complexes 1 and 2 (2 and 6 mg kg -1 ) showed tumor growth inhibition ranging from moderate to high. The mean survival time of animal groups treated with complexes 1 and 2 was higher than in the negative and vehicle control groups. The induction of Ehrlich ascites carcinoma in mice led to alterations in hematological and biochemical parameters, and not the treatment with complexes 1 and 2. The treatment of Ehrlich ascites carcinoma-bearing mice with complexes 1 and 2 increased the number of Annexin V positive cells and cleaved caspase-3 levels and induced changes in the cell morphology and in the cell cycle phases by induction of sub-G1 and G0/G1 cell cycle arrest. In addition, these complexes reduce angiogenesis induced by Ehrlich ascites carcinoma cells in chick embryo chorioallantoic membrane model. The treatment with the LAT1 inhibitor decreased the sensitivity of the Ehrlich ascites carcinoma cells to complexes 1 and 2 in vitro-which suggests that the LAT1 could be related to the mechanism of action of amino acid/ruthenium(II) complexes, consequently decreasing the glucose uptake. Therefore, these complexes could be used to reduce tumor growth and increase mean survival time with less toxicity than cisplatin. Besides, these complexes induce apoptosis by combination of different mechanism of action.
NASA Astrophysics Data System (ADS)
Li, Weiyao; Huang, Guanhua; Xiong, Yunwu
2016-04-01
The complexity of the spatial structure of porous media, randomness of groundwater recharge and discharge (rainfall, runoff, etc.) has led to groundwater movement complexity, physical and chemical interaction between groundwater and porous media cause solute transport in the medium more complicated. An appropriate method to describe the complexity of features is essential when study on solute transport and conversion in porous media. Information entropy could measure uncertainty and disorder, therefore we attempted to investigate complexity, explore the contact between the information entropy and complexity of solute transport in heterogeneous porous media using information entropy theory. Based on Markov theory, two-dimensional stochastic field of hydraulic conductivity (K) was generated by transition probability. Flow and solute transport model were established under four conditions (instantaneous point source, continuous point source, instantaneous line source and continuous line source). The spatial and temporal complexity of solute transport process was characterized and evaluated using spatial moment and information entropy. Results indicated that the entropy increased as the increase of complexity of solute transport process. For the point source, the one-dimensional entropy of solute concentration increased at first and then decreased along X and Y directions. As time increased, entropy peak value basically unchanged, peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With the increase of time, spatial variability and complexity of solute concentration increase, which result in the increases of the second-order spatial moment and the two-dimensional entropy. Information entropy of line source was higher than point source. Solute entropy obtained from continuous input was higher than instantaneous input. Due to the increase of average length of lithoface, media continuity increased, flow and solute transport complexity weakened, and the corresponding information entropy also decreased. Longitudinal macro dispersivity declined slightly at early time then rose. Solute spatial and temporal distribution had significant impacts on the information entropy. Information entropy could reflect the change of solute distribution. Information entropy appears a tool to characterize the spatial and temporal complexity of solute migration and provides a reference for future research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boros, Eszter; Srinivas, Raja; Kim, Hee -Kyung
Aqua ligands can undergo rapid internal rotation about the M-O bond. For magnetic resonance contrast agents, this rotation results in diminished relaxivity. Herein, we show that an intramolecular hydrogen bond to the aqua ligand can reduce this internal rotation and increase relaxivity. Molecular modeling was used to design a series of four Gd complexes capable of forming an intramolecular H-bond to the coordinated water ligand, and these complexes had anomalously high relaxivities compared to similar complexes lacking a H-bond acceptor. Molecular dynamics simulations supported the formation of a stable intramolecular H-bond, while alternative hypotheses that could explain the higher relaxivitymore » were systematically ruled out. Finally, intramolecular H-bonding represents a useful strategy to limit internal water rotational motion and increase relaxivity of Gd complexes.« less
NASA Technical Reports Server (NTRS)
Schaefer, Jacob; Hanson, Curt; Johnson, Marcus A.; Nguyen, Nhan
2011-01-01
Three model reference adaptive controllers (MRAC) with varying levels of complexity were evaluated on a high performance jet aircraft and compared along with a baseline nonlinear dynamic inversion controller. The handling qualities and performance of the controllers were examined during failure conditions that induce coupling between the pitch and roll axes. Results from flight tests showed with a roll to pitch input coupling failure, the handling qualities went from Level 2 with the baseline controller to Level 1 with the most complex MRAC tested. A failure scenario with the left stabilator frozen also showed improvement with the MRAC. Improvement in performance and handling qualities was generally seen as complexity was incrementally added; however, added complexity usually corresponds to increased verification and validation effort required for certification. The tradeoff between complexity and performance is thus important to a controls system designer when implementing an adaptive controller on an aircraft. This paper investigates this relation through flight testing of several controllers of vary complexity.
Kreps, Gary L
2009-03-01
Communication is a crucial process in the effective delivery of health care services and the promotion of public health. However, there are often tremendous complexities in using communication effectively to provide the best health care, direct the adoption of health promoting behaviors, and implement evidence-based public health policies and practices. This article describes Weick's model of organizing as a powerful theory of social organizing that can help increase understanding of the communication demands of health care and health promotion. The article identifies relevant applications from the model for health communication research and practice. Weick's model of organizing is a relevant and heuristic theoretical perspective for guiding health communication research and practice. There are many potential applications of this model illustrating the complexities of effective communication in health care and health promotion. Weick's model of organizing can be used as a template for guiding both research and practice in health care and health promotion. The model illustrates the important roles that communication performs in enabling health care consumers and providers to make sense of the complexities of modern health care and health promotion, select the best strategies for responding effectively to complex health care and health promotion situations, and retain relevant information (develop organizational intelligence) for guiding future responses to complex health care and health promotion challenges.
Dilber, Daniel; Malcic, Ivan
2010-08-01
The Aristotle basic complexity score and the risk adjustment in congenital cardiac surgery-1 method were developed and used to compare outcomes of congenital cardiac surgery. Both methods were used to compare results of procedures performed on our patients in Croatian cardiosurgical centres and results of procedures were taken abroad. The study population consisted of all patients with congenital cardiac disease born to Croatian residents between 1 October, 2002 and 1 October, 2007 undergoing a cardiovascular operation during this period. Of the 556 operations, the Aristotle basic complexity score could be assigned to 553 operations and the risk adjustment in congenital cardiac surgery-1 method to 536 operations. Procedures were performed in two institutions in Croatia and seven institutions abroad. The average complexity for cardiac procedures performed in Croatia was significantly lower. With both systems, along with the increase in complexity, there is also an increase in mortality before discharge and postoperative length of stay. Only after the adjustment for complexity there are marked differences in mortality and occurrence of postoperative complications. Both, the Aristotle basic complexity score and the risk adjustment in congenital cardiac surgery-1 method were predictive of in-hospital mortality as well as prolonged postoperative length to stay, and can be used as a tool in our country to evaluate a cardiosurgical model and recognise potential problems.
NASA Astrophysics Data System (ADS)
Mei, Yuan; Sherman, David M.; Liu, Weihua; Etschmann, Barbara; Testemale, Denis; Brugger, Joël
2015-02-01
The solubility of zinc minerals in hydrothermal fluids is enhanced by chloride complexation of Zn2+. Thermodynamic models of these complexation reactions are central to models of Zn transport and ore formation. However, existing thermodynamic models, derived from solubility measurements, are inconsistent with spectroscopic measurements of Zn speciation. Here, we used ab initio molecular dynamics simulations (with the PBE exchange-correlation functional) to predict the speciation of Zn-Cl complexes from 25 to 600 °C. We also obtained in situ XAS measurements of Zn-Cl solutions at 30-600 °C. Qualitatively, the simulations reproduced the main features derived from in situ XANES and EXAFS measurements: octahedral to tetrahedral transition with increasing temperature and salinity, stability of ZnCl42- at high chloride concentration up to ⩾500 °C, and increasing stability of the trigonal planar [ZnCl3]- complex at high temperature. Having confirmed the dominant species, we directly determined the stability constants for the Zn-Cl complexes using thermodynamic integration along constrained Zn-Cl distances in a series of MD simulations. We corrected our stability constants to infinite dilution using the b-dot model for the activity coefficients of the solute species. In order to compare the ab initio results with experiments, we need to re-model the existing solubility data using the species we identified in our MD simulations. The stability constants derived from refitting published experimental data are in reasonable agreement with those we obtained using ab initio MD simulations. Our new thermodynamic model accurately predicts the experimentally observed changes in ZnO(s) and ZnCO3(s) solubility as a function of chloride concentration from 200 (Psat) to 600 °C (2000 bar). This study demonstrates that metal speciation and geologically useful stability constants can be derived for species in hydrothermal fluids from ab initio MD simulations even at the generalized gradient approximation for exchange-correlation. We caution, however, that simulations are mostly reliable at high T where ligand exchange is fast enough to yield thermodynamic averages over the timescales of the simulations.
Fighting Cancer with Mathematics and Viruses.
Santiago, Daniel N; Heidbuechel, Johannes P W; Kandell, Wendy M; Walker, Rachel; Djeu, Julie; Engeland, Christine E; Abate-Daga, Daniel; Enderling, Heiko
2017-08-23
After decades of research, oncolytic virotherapy has recently advanced to clinical application, and currently a multitude of novel agents and combination treatments are being evaluated for cancer therapy. Oncolytic agents preferentially replicate in tumor cells, inducing tumor cell lysis and complex antitumor effects, such as innate and adaptive immune responses and the destruction of tumor vasculature. With the availability of different vector platforms and the potential of both genetic engineering and combination regimens to enhance particular aspects of safety and efficacy, the identification of optimal treatments for patient subpopulations or even individual patients becomes a top priority. Mathematical modeling can provide support in this arena by making use of experimental and clinical data to generate hypotheses about the mechanisms underlying complex biology and, ultimately, predict optimal treatment protocols. Increasingly complex models can be applied to account for therapeutically relevant parameters such as components of the immune system. In this review, we describe current developments in oncolytic virotherapy and mathematical modeling to discuss the benefit of integrating different modeling approaches into biological and clinical experimentation. Conclusively, we propose a mutual combination of these research fields to increase the value of the preclinical development and the therapeutic efficacy of the resulting treatments.
Fighting Cancer with Mathematics and Viruses
Santiago, Daniel N.; Heidbuechel, Johannes P. W.; Kandell, Wendy M.; Walker, Rachel; Djeu, Julie; Abate-Daga, Daniel; Enderling, Heiko
2017-01-01
After decades of research, oncolytic virotherapy has recently advanced to clinical application, and currently a multitude of novel agents and combination treatments are being evaluated for cancer therapy. Oncolytic agents preferentially replicate in tumor cells, inducing tumor cell lysis and complex antitumor effects, such as innate and adaptive immune responses and the destruction of tumor vasculature. With the availability of different vector platforms and the potential of both genetic engineering and combination regimens to enhance particular aspects of safety and efficacy, the identification of optimal treatments for patient subpopulations or even individual patients becomes a top priority. Mathematical modeling can provide support in this arena by making use of experimental and clinical data to generate hypotheses about the mechanisms underlying complex biology and, ultimately, predict optimal treatment protocols. Increasingly complex models can be applied to account for therapeutically relevant parameters such as components of the immune system. In this review, we describe current developments in oncolytic virotherapy and mathematical modeling to discuss the benefit of integrating different modeling approaches into biological and clinical experimentation. Conclusively, we propose a mutual combination of these research fields to increase the value of the preclinical development and the therapeutic efficacy of the resulting treatments. PMID:28832539
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang
The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models inmore » the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. In this environment, the myriad of smart sensors and meters in the power grid increase by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to drastically change to support the exchange of enormous amounts of data as smart grid applications will need the capability to collect, assimilate, analyze and process the data, to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.« less
NASA Astrophysics Data System (ADS)
Kobayashi, Hiroaki; Gotoda, Hiroshi; Tachibana, Shigeru; Yoshida, Seiji
2017-12-01
We conduct an experimental study using time series analysis based on symbolic dynamics to detect a precursor of frequency-mode-shift during thermoacoustic combustion oscillations in a staged aircraft engine model combustor. With increasing amount of the main fuel, a significant shift in the dominant frequency-mode occurs in noisy periodic dynamics, leading to a notable increase in oscillation amplitudes. The sustainment of noisy periodic dynamics during thermoacoustic combustion oscillations is clearly shown by the multiscale complexity-entropy causality plane in terms of statistical complexity. A modified version of the permutation entropy allows us to detect a precursor of the frequency-mode-shift before the amplification of pressure fluctuations.
On the dimension of complex responses in nonlinear structural vibrations
NASA Astrophysics Data System (ADS)
Wiebe, R.; Spottswood, S. M.
2016-07-01
The ability to accurately model engineering systems under extreme dynamic loads would prove a major breakthrough in many aspects of aerospace, mechanical, and civil engineering. Extreme loads frequently induce both nonlinearities and coupling which increase the complexity of the response and the computational cost of finite element models. Dimension reduction has recently gained traction and promises the ability to distill dynamic responses down to a minimal dimension without sacrificing accuracy. In this context, the dimensionality of a response is related to the number of modes needed in a reduced order model to accurately simulate the response. Thus, an important step is characterizing the dimensionality of complex nonlinear responses of structures. In this work, the dimensionality of the nonlinear response of a post-buckled beam is investigated. Significant detail is dedicated to carefully introducing the experiment, the verification of a finite element model, and the dimensionality estimation algorithm as it is hoped that this system may help serve as a benchmark test case. It is shown that with minor modifications, the method of false nearest neighbors can quantitatively distinguish between the response dimension of various snap-through, non-snap-through, random, and deterministic loads. The state-space dimension of the nonlinear system in question increased from 2-to-10 as the system response moved from simple, low-level harmonic to chaotic snap-through. Beyond the problem studied herein, the techniques developed will serve as a prescriptive guide in developing fast and accurate dimensionally reduced models of nonlinear systems, and eventually as a tool for adaptive dimension-reduction in numerical modeling. The results are especially relevant in the aerospace industry for the design of thin structures such as beams, panels, and shells, which are all capable of spatio-temporally complex dynamic responses that are difficult and computationally expensive to model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rai, Dhanpat; Kitamura, Akira; Rosso, Kevin M.
Solubility of HfO2(am) was determined as a function of KHCO3 concentrations ranging from 0.001 mol·kg-1 to 0.1 mol·kg-1. The solubility of HfO2(am) increased dramatically with the increase in KHCO3 concentrations, indicating that Hf(IV) makes strong complexes with carbonate. Thermodynamic equilibrium constants for the formation of Hf-carbonate complexes were determined using both the Pitzer and SIT models. The dramatic increase in Hf concentrations with the increase in KHCO3 concentrations can best be described by the formation of Hf(OH-)2(CO3)22- and Hf(CO3)56-. The log10 K0 values for the reactions [Hf4++2CO32-+2OH-⇌Hf(OH)2(CO3)22-] and [Hf4++5CO32-⇌Hf(CO3)56-], based on the SIT model, were determined to be 44.53±0.46 andmore » 41.53±0.46, respectively, and based on the Pitzer model they were 44.56±0.48 and 40.20±0.48, respectively.« less
CALCULATION OF PHYSICOCHEMICAL PROPERTIES FOR ENVIRONMENTAL MODELING
Recent trends in environmental regulatory strategies dictate that EPA will rely heavily on predictive modeling to carry out the increasingly complex array of exposure and risk assessments necessary to develop scientifically defensible regulations. In response to this need, resea...
Dos Passos Menezes, Paula; Dos Santos, Polliana Barbosa Pereira; Dória, Grace Anne Azevedo; de Sousa, Bruna Maria Hipólito; Serafini, Mairim Russo; Nunes, Paula Santos; Quintans-Júnior, Lucindo José; de Matos, Iara Lisboa; Alves, Péricles Barreto; Bezerra, Daniel Pereira; Mendonça Júnior, Francisco Jaime Bezerra; da Silva, Gabriel Francisco; de Aquino, Thiago Mendonça; de Souza Bento, Edson; Scotti, Marcus Tullius; Scotti, Luciana; de Souza Araujo, Adriano Antunes
2017-02-01
This study evaluated three different methods for the formation of an inclusion complex between alpha- and beta-cyclodextrin (α- and β-CD) and limonene (LIM) with the goal of improving the physicochemical properties of limonene. The study samples were prepared through physical mixing (PM), paste complexation (PC), and slurry complexation (SC) methods in the molar ratio of 1:1 (cyclodextrin:limonene). The complexes prepared were evaluated with thermogravimetry/derivate thermogravimetry, infrared spectroscopy, X-ray diffraction, complexation efficiency through gas chromatography/mass spectrometry analyses, molecular modeling, and nuclear magnetic resonance. The results showed that the physical mixing procedure did not produce complexation, but the paste and slurry methods produced inclusion complexes, which demonstrated interactions outside of the cavity of the CDs. However, the paste obtained with β-cyclodextrin did not demonstrate complexation in the gas chromatographic technique because, after extraction, most of the limonene was either surface-adsorbed by β-cyclodextrin or volatilized during the procedure. We conclude that paste complexation and slurry complexation are effective and economic methods to improve the physicochemical character of limonene and could have important applications in pharmacological activities in terms of an increase in solubility.
Wang, Jiguang; Sun, Yidan; Zheng, Si; Zhang, Xiang-Sun; Zhou, Huarong; Chen, Luonan
2013-01-01
Synergistic interactions among transcription factors (TFs) and their cofactors collectively determine gene expression in complex biological systems. In this work, we develop a novel graphical model, called Active Protein-Gene (APG) network model, to quantify regulatory signals of transcription in complex biomolecular networks through integrating both TF upstream-regulation and downstream-regulation high-throughput data. Firstly, we theoretically and computationally demonstrate the effectiveness of APG by comparing with the traditional strategy based only on TF downstream-regulation information. We then apply this model to study spontaneous type 2 diabetic Goto-Kakizaki (GK) and Wistar control rats. Our biological experiments validate the theoretical results. In particular, SP1 is found to be a hidden TF with changed regulatory activity, and the loss of SP1 activity contributes to the increased glucose production during diabetes development. APG model provides theoretical basis to quantitatively elucidate transcriptional regulation by modelling TF combinatorial interactions and exploiting multilevel high-throughput information.
Wang, Jiguang; Sun, Yidan; Zheng, Si; Zhang, Xiang-Sun; Zhou, Huarong; Chen, Luonan
2013-01-01
Synergistic interactions among transcription factors (TFs) and their cofactors collectively determine gene expression in complex biological systems. In this work, we develop a novel graphical model, called Active Protein-Gene (APG) network model, to quantify regulatory signals of transcription in complex biomolecular networks through integrating both TF upstream-regulation and downstream-regulation high-throughput data. Firstly, we theoretically and computationally demonstrate the effectiveness of APG by comparing with the traditional strategy based only on TF downstream-regulation information. We then apply this model to study spontaneous type 2 diabetic Goto-Kakizaki (GK) and Wistar control rats. Our biological experiments validate the theoretical results. In particular, SP1 is found to be a hidden TF with changed regulatory activity, and the loss of SP1 activity contributes to the increased glucose production during diabetes development. APG model provides theoretical basis to quantitatively elucidate transcriptional regulation by modelling TF combinatorial interactions and exploiting multilevel high-throughput information. PMID:23346354
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2017-07-01
Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
Task Models in the Digital Ocean
ERIC Educational Resources Information Center
DiCerbo, Kristen E.
2014-01-01
The Task Model is a description of each task in a workflow. It defines attributes associated with that task. The creation of task models becomes increasingly important as the assessment tasks become more complex. Explicitly delineating the impact of task variables on the ability to collect evidence and make inferences demands thoughtfulness from…
ERIC Educational Resources Information Center
Del Giudice, Marco
2016-01-01
According to models of differential susceptibility, the same neurobiological and temperamental traits that determine increased sensitivity to stress and adversity also confer enhanced responsivity to the positive aspects of the environment. Differential susceptibility models have expanded to include complex developmental processes in which genetic…
A protocol for parameterization and calibration of RZWQM2 in field research
USDA-ARS?s Scientific Manuscript database
Use of agricultural system models in field research requires a full understanding of both the model and the system it simulates. Since the 1960s, agricultural system models have increased tremendously in their complexity due to greater understanding of the processes simulated, their application to r...
The Model United Nations in Brief.
ERIC Educational Resources Information Center
Muldoon, James P., Jr.
Over 60,000 secondary school and college students participate annually in the Model United Nations (UN) simulations in order to learn the complexities of international politics and to increase their understanding of other nations and cultural perspectives. This paper provides a broad overview of the Model UN program and its role in global…
Abaqus Simulations of Rock Response to Dynamic Loading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steedman, David W.; Coblentz, David
The LANL Geodynamics Team has been applying Abaqus modeling to achieve increasingly complex simulations. Advancements in Abaqus model building and simulation tools allows this progress. We use Lab-developed constitutive models, the fully coupled CEL Abaqus and general contact to simulate response of realistic sites to explosively driven shock.
A Multitasking General Executive for Compound Continuous Tasks
ERIC Educational Resources Information Center
Salvucci, Dario D.
2005-01-01
As cognitive architectures move to account for increasingly complex real-world tasks, one of the most pressing challenges involves understanding and modeling human multitasking. Although a number of existing models now perform multitasking in real-world scenarios, these models typically employ customized executives that schedule tasks for the…
Cheung, Carol C; Torlakovic, Emina E; Chow, Hung; Snover, Dale C; Asa, Sylvia L
2015-03-01
Pathologists provide diagnoses relevant to the disease state of the patient and identify specific tissue characteristics relevant to response to therapy and prognosis. As personalized medicine evolves, there is a trend for increased demand of tissue-derived parameters. Pathologists perform increasingly complex analyses on the same 'cases'. Traditional methods of workload assessment and reimbursement, based on number of cases sometimes with a modifier (eg, the relative value unit (RVU) system used in the United States), often grossly underestimate the amount of work needed for complex cases and may overvalue simple, small biopsy cases. We describe a new approach to pathologist workload measurement that aligns with this new practice paradigm. Our multisite institution with geographically diverse partner institutions has developed the Automatable Activity-Based Approach to Complexity Unit Scoring (AABACUS) model that captures pathologists' clinical activities from parameters documented in departmental laboratory information systems (LISs). The model's algorithm includes: 'capture', 'export', 'identify', 'count', 'score', 'attribute', 'filter', and 'assess filtered results'. Captured data include specimen acquisition, handling, analysis, and reporting activities. Activities were counted and complexity units (CUs) generated using a complexity factor for each activity. CUs were compared between institutions, practice groups, and practice types and evaluated over a 5-year period (2008-2012). The annual load of a clinical service pathologist, irrespective of subspecialty, was ∼40,000 CUs using relative benchmarking. The model detected changing practice patterns and was appropriate for monitoring clinical workload for anatomical pathology, neuropathology, and hematopathology in academic and community settings, and encompassing subspecialty and generalist practices. AABACUS is objective, can be integrated with an LIS and automated, is reproducible, backwards compatible, and future adaptable. It can be applied as a robust decision support tool for the assessment of overall and targeted staffing needs as well as utilization analyses for resource allocation.
In vivo quantitative analysis of Talin turnover in response to force
Hákonardóttir, Guðlaug Katrín; López-Ceballos, Pablo; Herrera-Reyes, Alejandra Donají; Das, Raibatak; Coombs, Daniel; Tanentzapf, Guy
2015-01-01
Cell adhesion to the extracellular matrix (ECM) allows cells to form and maintain three-dimensional tissue architecture. Cell–ECM adhesions are stabilized upon exposure to mechanical force. In this study, we used quantitative imaging and mathematical modeling to gain mechanistic insight into how integrin-based adhesions respond to increased and decreased mechanical forces. A critical means of regulating integrin-based adhesion is provided by modulating the turnover of integrin and its adhesion complex (integrin adhesion complex [IAC]). The turnover of the IAC component Talin, a known mechanosensor, was analyzed using fluorescence recovery after photobleaching. Experiments were carried out in live, intact flies in genetic backgrounds that increased or decreased the force applied on sites of adhesion. This analysis showed that when force is elevated, the rate of assembly of new adhesions increases such that cell–ECM adhesion is stabilized. Moreover, under conditions of decreased force, the overall rate of turnover, but not the proportion of adhesion complex components undergoing turnover, increases. Using point mutations, we identify the key functional domains of Talin that mediate its response to force. Finally, by fitting a mathematical model to the data, we uncover the mechanisms that mediate the stabilization of ECM-based adhesion during development. PMID:26446844
Metal Complexation in Xylem Fluid 1
White, Michael C.; Chaney, Rufus L.; Decker, A. Morris
1981-01-01
The capacity of ligands in xylem fluid to form metal complexes was tested with a series of in vitro experiments using paper electrophoresis and radiographs. The xylem fluid was collected hourly for 8 hours from soybean (Glycine max L. Merr.) and tomato (Lycopersicon esculentum Mill.) plants grown in normal and Zn-phytotoxic nutrient solutions. Metal complexation was assayed by anodic or reduced cathodic movement of radionuclides (63Ni, 65Zn, 109Cd, 54Mn) that were presumed to have formed negatively charged complexes. Electrophoretic migration of Ni, Zn, Cd, and Mn added to xylem exudate and spotted on KCl- or KNO3-wetted paper showed that stable Ni, Zn, and Cd metal complexes were formed by exudate ligands. No anodic Mn complexes were observed in this test system. Solution pH, plant species, exudate collection time, and Zn phytotoxicity all affected the amount of metal complex formed in exudate. As the pH increased, there was increased anodic metal movement. Soybean exudate generally bound more of each metal than did tomato exudate. Metal binding usually decreased with increasing exudate collection time, and less metal was bound by the high-Zn exudate. Ni, Zn, Cd, and Mn in exudate added to exudate-wetted paper demonstrated the effect of ligand concentration on stable metal complex formation. Complexes for each metal were demonstratable with this method. Cathodic metal movement increased with time of exudate collection, and it was greater in the high-Zn exudate than in the normal-Zn exudate. A model study illustrated the effect of ligand concentration on metal complex stability in the electrophoretic field. Higher ligand (citric acid) concentrations increased the stability for all metals tested. Images PMID:16661666
Role of Microenvironment in Glioma Invasion: What We Learned from In Vitro Models
Manini, Ivana; Caponnetto, Federica; Bartolini, Anna; Ius, Tamara; Mariuzzi, Laura; Di Loreto, Carla; Cesselli, Daniela
2018-01-01
The invasion properties of glioblastoma hamper a radical surgery and are responsible for its recurrence. Understanding the invasion mechanisms is thus critical to devise new therapeutic strategies. Therefore, the creation of in vitro models that enable these mechanisms to be studied represents a crucial step. Since in vitro models represent an over-simplification of the in vivo system, in these years it has been attempted to increase the level of complexity of in vitro assays to create models that could better mimic the behaviour of the cells in vivo. These levels of complexity involved: 1. The dimension of the system, moving from two-dimensional to three-dimensional models; 2. The use of microfluidic systems; 3. The use of mixed cultures of tumour cells and cells of the tumour micro-environment in order to mimic the complex cross-talk between tumour cells and their micro-environment; 4. And the source of cells used in an attempt to move from commercial lines to patient-based models. In this review, we will summarize the evidence obtained exploring these different levels of complexity and highlighting advantages and limitations of each system used. PMID:29300332
Porta, Alberto; Faes, Luca; Bari, Vlasta; Marchi, Andrea; Bassani, Tito; Nollo, Giandomenico; Perseguini, Natália Maria; Milan, Juliana; Minatel, Vinícius; Borghi-Silva, Audrey; Takahashi, Anielle C. M.; Catai, Aparecida M.
2014-01-01
The proposed approach evaluates complexity of the cardiovascular control and causality among cardiovascular regulatory mechanisms from spontaneous variability of heart period (HP), systolic arterial pressure (SAP) and respiration (RESP). It relies on construction of a multivariate embedding space, optimization of the embedding dimension and a procedure allowing the selection of the components most suitable to form the multivariate embedding space. Moreover, it allows the comparison between linear model-based (MB) and nonlinear model-free (MF) techniques and between MF approaches exploiting local predictability (LP) and conditional entropy (CE). The framework was applied to study age-related modifications of complexity and causality in healthy humans in supine resting (REST) and during standing (STAND). We found that: 1) MF approaches are more efficient than the MB method when nonlinear components are present, while the reverse situation holds in presence of high dimensional embedding spaces; 2) the CE method is the least powerful in detecting age-related trends; 3) the association of HP complexity on age suggests an impairment of cardiac regulation and response to STAND; 4) the relation of SAP complexity on age indicates a gradual increase of sympathetic activity and a reduced responsiveness of vasomotor control to STAND; 5) the association from SAP to HP on age during STAND reveals a progressive inefficiency of baroreflex; 6) the reduced connection from HP to SAP with age might be linked to the progressive exploitation of Frank-Starling mechanism at REST and to the progressive increase of peripheral resistances during STAND; 7) at REST the diminished association from RESP to HP with age suggests a vagal withdrawal and a gradual uncoupling between respiratory activity and heart; 8) the weakened connection from RESP to SAP with age might be related to the progressive increase of left ventricular thickness and vascular stiffness and to the gradual decrease of respiratory sinus arrhythmia. PMID:24586796
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown what are the relevant physical mechanisms that must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant/irrelevant vary among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were ommitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system-a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model.
Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.
2002-01-01
Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
Strong Start Wraparound: Addressing the Complex Needs of Mothers in Early Recovery
ERIC Educational Resources Information Center
Teel, M. Kay
2014-01-01
The Strong Start Study tested an innovative, High-Fidelity Wraparound intervention with families in early recovery from substance use. The Strong Start Wraparound model addressed the complex needs of pregnant and parenting women who were in early recovery to increase the protective factors of parental resilience, social connections, concrete…
Architectures for Distributed and Complex M-Learning Systems: Applying Intelligent Technologies
ERIC Educational Resources Information Center
Caballe, Santi, Ed.; Xhafa, Fatos, Ed.; Daradoumis, Thanasis, Ed.; Juan, Angel A., Ed.
2009-01-01
Over the last decade, the needs of educational organizations have been changing in accordance with increasingly complex pedagogical models and with the technological evolution of e-learning environments with very dynamic teaching and learning requirements. This book explores state-of-the-art software architectures and platforms used to support…
Exploring Creativity by Linking Complexity Learning to Futures-Based Research Proposals
ERIC Educational Resources Information Center
Bolton, Michael J.
2009-01-01
Traditional teaching models based on linear approaches to instruction arguably are of limited value in preparing students to handle complex, dynamic real-world problems. As such, they are undergoing increased scrutiny by scholars in various disciplines. The author argues that nonlinear approaches to higher education such as those founded on…
Multi-Node Thermal System Model for Lithium-Ion Battery Packs: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Ying; Smith, Kandler; Wood, Eric
Temperature is one of the main factors that controls the degradation in lithium ion batteries. Accurate knowledge and control of cell temperatures in a pack helps the battery management system (BMS) to maximize cell utilization and ensure pack safety and service life. In a pack with arrays of cells, a cells temperature is not only affected by its own thermal characteristics but also by its neighbors, the cooling system and pack configuration, which increase the noise level and the complexity of cell temperatures prediction. This work proposes to model lithium ion packs thermal behavior using a multi-node thermal network model,more » which predicts the cell temperatures by zones. The model was parametrized and validated using commercial lithium-ion battery packs. neighbors, the cooling system and pack configuration, which increase the noise level and the complexity of cell temperatures prediction. This work proposes to model lithium ion packs thermal behavior using a multi-node thermal network model, which predicts the cell temperatures by zones. The model was parametrized and validated using commercial lithium-ion battery packs.« less
Fraysse, Marion; Pinazo, Christel; Faure, Vincent Martin; Fuchs, Rosalie; Lazzari, Paolo; Raimbault, Patrick; Pairaud, Ivane
2013-01-01
Terrestrial inputs (natural and anthropogenic) from rivers, the atmosphere and physical processes strongly impact the functioning of coastal pelagic ecosystems. The objective of this study was to develop a tool for the examination of these impacts on the Marseille coastal area, which experiences inputs from the Rhone River and high rates of atmospheric deposition. Therefore, a new 3D coupled physical/biogeochemical model was developed. Two versions of the biogeochemical model were tested, one model considering only the carbon (C) and nitrogen (N) cycles and a second model that also considers the phosphorus (P) cycle. Realistic simulations were performed for a period of 5 years (2007-2011). The model accuracy assessment showed that both versions of the model were able of capturing the seasonal changes and spatial characteristics of the ecosystem. The model also reproduced upwelling events and the intrusion of Rhone River water into the Bay of Marseille well. Those processes appeared to greatly impact this coastal oligotrophic area because they induced strong increases in chlorophyll-a concentrations in the surface layer. The model with the C, N and P cycles better reproduced the chlorophyll-a concentrations at the surface than did the model without the P cycle, especially for the Rhone River water. Nevertheless, the chlorophyll-a concentrations at depth were better represented by the model without the P cycle. Therefore, the complexity of the biogeochemical model introduced errors into the model results, but it also improved model results during specific events. Finally, this study suggested that in coastal oligotrophic areas, improvements in the description and quantification of the hydrodynamics and the terrestrial inputs should be preferred over increasing the complexity of the biogeochemical model.
Fraysse, Marion; Pinazo, Christel; Faure, Vincent Martin; Fuchs, Rosalie; Lazzari, Paolo; Raimbault, Patrick; Pairaud, Ivane
2013-01-01
Terrestrial inputs (natural and anthropogenic) from rivers, the atmosphere and physical processes strongly impact the functioning of coastal pelagic ecosystems. The objective of this study was to develop a tool for the examination of these impacts on the Marseille coastal area, which experiences inputs from the Rhone River and high rates of atmospheric deposition. Therefore, a new 3D coupled physical/biogeochemical model was developed. Two versions of the biogeochemical model were tested, one model considering only the carbon (C) and nitrogen (N) cycles and a second model that also considers the phosphorus (P) cycle. Realistic simulations were performed for a period of 5 years (2007–2011). The model accuracy assessment showed that both versions of the model were able of capturing the seasonal changes and spatial characteristics of the ecosystem. The model also reproduced upwelling events and the intrusion of Rhone River water into the Bay of Marseille well. Those processes appeared to greatly impact this coastal oligotrophic area because they induced strong increases in chlorophyll-a concentrations in the surface layer. The model with the C, N and P cycles better reproduced the chlorophyll-a concentrations at the surface than did the model without the P cycle, especially for the Rhone River water. Nevertheless, the chlorophyll-a concentrations at depth were better represented by the model without the P cycle. Therefore, the complexity of the biogeochemical model introduced errors into the model results, but it also improved model results during specific events. Finally, this study suggested that in coastal oligotrophic areas, improvements in the description and quantification of the hydrodynamics and the terrestrial inputs should be preferred over increasing the complexity of the biogeochemical model. PMID:24324589
'Mitochondrial energy imbalance and lipid peroxidation cause cell death in Friedreich's ataxia'.
Abeti, R; Parkinson, M H; Hargreaves, I P; Angelova, P R; Sandi, C; Pook, M A; Giunti, P; Abramov, A Y
2016-05-26
Friedreich's ataxia (FRDA) is an inherited neurodegenerative disease. The mutation consists of a GAA repeat expansion within the FXN gene, which downregulates frataxin, leading to abnormal mitochondrial iron accumulation, which may in turn cause changes in mitochondrial function. Although, many studies of FRDA patients and mouse models have been conducted in the past two decades, the role of frataxin in mitochondrial pathophysiology remains elusive. Are the mitochondrial abnormalities only a side effect of the increased accumulation of reactive iron, generating oxidative stress? Or does the progressive lack of iron-sulphur clusters (ISCs), induced by reduced frataxin, cause an inhibition of the electron transport chain complexes (CI, II and III) leading to reactive oxygen species escaping from oxidative phosphorylation reactions? To answer these crucial questions, we have characterised the mitochondrial pathophysiology of a group of disease-relevant and readily accessible neurons, cerebellar granule cells, from a validated FRDA mouse model. By using live cell imaging and biochemical techniques we were able to demonstrate that mitochondria are deregulated in neurons from the YG8R FRDA mouse model, causing a decrease in mitochondrial membrane potential (▵Ψm) due to an inhibition of Complex I, which is partially compensated by an overactivation of Complex II. This complex activity imbalance leads to ROS generation in both mitochondrial matrix and cytosol, which results in glutathione depletion and increased lipid peroxidation. Preventing this increase in lipid peroxidation, in neurons, protects against in cell death. This work describes the pathophysiological properties of the mitochondria in neurons from a FRDA mouse model and shows that lipid peroxidation could be an important target for novel therapeutic strategies in FRDA, which still lacks a cure.
Study on planning and design of ecological tourist rural complex for the elderly
NASA Astrophysics Data System (ADS)
Han, Zhoulin; Jiang, Nan; He, Yunxiao; Long, Yanping
2018-03-01
In order to deal with the increasingly serious aging problem in China, a new model about serving the aged better needs to be explored. This paper puts forward the concept of ecological tourist rural complex for the elderly, a novel pattern that combining the rural retirement place with pastoral complex which is proposed recently. A concrete example of Deteng complex in Mianyang is given to explore the construction condition and planning approach. Three important aspects including pastoral, ecology, serving the aged are the core elements to develop ecological tourist rural complex for the elderly.
The spatiotemporal system dynamics of acquired resistance in an engineered microecology.
Datla, Udaya Sree; Mather, William H; Chen, Sheng; Shoultz, Isaac W; Täuber, Uwe C; Jones, Caroline N; Butzin, Nicholas C
2017-11-22
Great strides have been made in the understanding of complex networks; however, our understanding of natural microecologies is limited. Modelling of complex natural ecological systems has allowed for new findings, but these models typically ignore the constant evolution of species. Due to the complexity of natural systems, unanticipated interactions may lead to erroneous conclusions concerning the role of specific molecular components. To address this, we use a synthetic system to understand the spatiotemporal dynamics of growth and to study acquired resistance in vivo. Our system differs from earlier synthetic systems in that it focuses on the evolution of a microecology from a killer-prey relationship to coexistence using two different non-motile Escherichia coli strains. Using empirical data, we developed the first ecological model emphasising the concept of the constant evolution of species, where the survival of the prey species is dependent on location (distance from the killer) or the evolution of resistance. Our simple model, when expanded to complex microecological association studies under varied spatial and nutrient backgrounds may help to understand the complex relationships between multiple species in intricate natural ecological networks. This type of microecological study has become increasingly important, especially with the emergence of antibiotic-resistant pathogens.
Effects of organizational complexity and resources on construction site risk.
Forteza, Francisco J; Carretero-Gómez, Jose M; Sesé, Albert
2017-09-01
Our research is aimed at studying the relationship between risk level and organizational complexity and resources on constructions sites. Our general hypothesis is that site complexity increases risk, whereas more resources of the structure decrease risk. A Structural Equation Model (SEM) approach was adopted to validate our theoretical model. To develop our study, 957 building sites in Spain were visited and assessed in 2003-2009. All needed data were obtained using a specific tool developed by the authors to assess site risk, structure and resources (Construction Sites Risk Assessment Tool, or CONSRAT). This tool operationalizes the variables to fit our model, specifically, via a site risk index (SRI) and 10 organizational variables. Our random sample is composed largely of small building sites with general high levels of risk, moderate complexity, and low resources on site. The model obtained adequate fit, and results showed empirical evidence that the factors of complexity and resources can be considered predictors of site risk level. Consequently, these results can help companies, managers of construction and regulators to identify which organizational aspects should be improved to prevent risks on sites and consequently accidents. Copyright © 2017 National Safety Council and Elsevier Ltd. All rights reserved.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems
Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.
2016-01-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes it is unknown what are the relevant physical mechanisms that must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant/irrelevant vary among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were ommitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model’s discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system–a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than focusing on parameter estimation in a single model. PMID:27923060
Drawert, Brian; Engblom, Stefan; Hellander, Andreas
2012-06-22
Experiments in silico using stochastic reaction-diffusion models have emerged as an important tool in molecular systems biology. Designing computational software for such applications poses several challenges. Firstly, realistic lattice-based modeling for biological applications requires a consistent way of handling complex geometries, including curved inner- and outer boundaries. Secondly, spatiotemporal stochastic simulations are computationally expensive due to the fast time scales of individual reaction- and diffusion events when compared to the biological phenomena of actual interest. We therefore argue that simulation software needs to be both computationally efficient, employing sophisticated algorithms, yet in the same time flexible in order to meet present and future needs of increasingly complex biological modeling. We have developed URDME, a flexible software framework for general stochastic reaction-transport modeling and simulation. URDME uses Unstructured triangular and tetrahedral meshes to resolve general geometries, and relies on the Reaction-Diffusion Master Equation formalism to model the processes under study. An interface to a mature geometry and mesh handling external software (Comsol Multiphysics) provides for a stable and interactive environment for model construction. The core simulation routines are logically separated from the model building interface and written in a low-level language for computational efficiency. The connection to the geometry handling software is realized via a Matlab interface which facilitates script computing, data management, and post-processing. For practitioners, the software therefore behaves much as an interactive Matlab toolbox. At the same time, it is possible to modify and extend URDME with newly developed simulation routines. Since the overall design effectively hides the complexity of managing the geometry and meshes, this means that newly developed methods may be tested in a realistic setting already at an early stage of development. In this paper we demonstrate, in a series of examples with high relevance to the molecular systems biology community, that the proposed software framework is a useful tool for both practitioners and developers of spatial stochastic simulation algorithms. Through the combined efforts of algorithm development and improved modeling accuracy, increasingly complex biological models become feasible to study through computational methods. URDME is freely available at http://www.urdme.org.
Rayleigh-Taylor and Richtmyer-Meshkov instability induced flow, turbulence, and mixing. II
NASA Astrophysics Data System (ADS)
Zhou, Ye
2017-12-01
Rayleigh-Taylor (RT) and Richtmyer-Meshkov(RM) instabilities are well-known pathways towards turbulent mixing layers, in many cases characterized by significant mass and species exchange across the mixing layers (Zhou, 2017. Physics Reports, 720-722, 1-136). Mathematically, the pathway to turbulent mixing requires that the initial interface be multimodal, to permit cross-mode coupling leading to turbulence. Practically speaking, it is difficult to experimentally produce a non-multi-mode initial interface. Numerous methods and approaches have been developed to describe the late, multimodal, turbulent stages of RT and RM mixing layers. This paper first presents the initial condition dependence of RT mixing layers, and introduces parameters that are used to evaluate the level of "mixedness" and "mixed mass" within the layers, as well as the dependence on density differences, as well as the characteristic anisotropy of this acceleration-driven flow, emphasizing some of the key differences between the two-dimensional and three-dimensional RT mixing layers. Next, the RM mixing layers are discussed, and differences with the RT mixing layer are elucidated, including the RM mixing layers dependence on the Mach number of the initiating shock. Another key feature of the RM induced flows is its response to a reshock event, as frequently seen in shock-tube experiments as well as inertial confinement events. A number of approaches to modeling the evolution of these mixing layers are then described, in order of increasing complexity. These include simple buoyancy-drag models, Reynolds-averaged Navier-Stokes models of increased complexity, including K- ε, K-L, and K- L- a models, up to full Reynolds-stress models with more than one length-scale. Multifield models and multiphase models have also been implemented. Additional complexities to these flows are examined as well as modifications to the models to understand the effects of these complexities. These complexities include the presence of magnetic fields, compressibility, rotation, stratification and additional instabilities. The complications induced by the presence of converging geometries are also considered. Finally, the unique problems of astrophysical and high-energy-density applications, and efforts to model these are discussed.
Hierarchical Bayesian spatial models for multispecies conservation planning and monitoring
Carlos Carroll; Devin S. Johnson; Jeffrey R. Dunk; William J. Zielinski
2010-01-01
Biologists who develop and apply habitat models are often familiar with the statistical challenges posed by their dataâs spatial structure but are unsure of whether the use of complex spatial models will increase the utility of model results in planning. We compared the relative performance of nonspatial and hierarchical Bayesian spatial models for three vertebrate and...
Cognitive Complexity of the Medical Record Is a Risk Factor for Major Adverse Events
Roberson, David; Connell, Michael; Dillis, Shay; Gauvreau, Kimberlee; Gore, Rebecca; Heagerty, Elaina; Jenkins, Kathy; Ma, Lin; Maurer, Amy; Stephenson, Jessica; Schwartz, Margot
2014-01-01
Context: Patients in tertiary care hospitals are more complex than in the past, but the implications of this are poorly understood because “patient complexity” has been difficult to quantify. Objective: We developed a tool, the Complexity Ruler, to quantify the amount of data (as bits) in the patient’s medical record. We designated the amount of data in the medical record as the cognitive complexity of the medical record (CCMR). We hypothesized that CCMR is a useful surrogate for true patient complexity and that higher CCMR correlates with risk of major adverse events. Design: The Complexity Ruler was validated by comparing the measured CCMR with physician rankings of patient complexity on specific inpatient services. It was tested in a case-control model of all patients with major adverse events at a tertiary care pediatric hospital from 2005 to 2006. Main Outcome Measures: The main outcome measure was an externally reported major adverse event. We measured CCMR for 24 hours before the event, and we estimated lifetime CCMR. Results: Above empirically derived cutoffs, 24-hour and lifetime CCMR were risk factors for major adverse events (odds ratios, 5.3 and 6.5, respectively). In a multivariate analysis, CCMR alone was essentially as predictive of risk as a model that started with 30-plus clinical factors. Conclusions: CCMR correlates with physician assessment of complexity and risk of adverse events. We hypothesize that increased CCMR increases the risk of physician cognitive overload. An automated version of the Complexity Ruler could allow identification of at-risk patients in real time. PMID:24626065
Using HexSim to simulate complex species, landscape, and stressor interactions
Background / Question / Methods The use of simulation models in conservation biology, landscape ecology, and other disciplines is increasing. Models are essential tools for researchers who, for example, need to forecast future conditions, weigh competing recovery and mitigation...
Hieb, Aaron R; Halsey, Wayne A; Betterton, Meredith D; Perkins, Thomas T; Kugel, Jennifer F; Goodrich, James A
2007-09-21
Eukaryotic mRNA transcription by RNA polymerase II is a highly regulated complex reaction involving numerous proteins. In order to control tissue and promoter specific gene expression, transcription factors must work in concert with each other and with the promoter DNA to form the proper architecture to activate the gene of interest. The TATA binding protein (TBP) binds to TATA boxes in core promoters and bends the TATA DNA. We have used quantitative solution fluorescence resonance energy transfer (FRET) and gel-based FRET (gelFRET) to determine the effect of TFIIA on the conformation of the DNA in TBP/TATA complexes and on the kinetic stability of these complexes. Our results indicate that human TFIIA decreases the angle to which human TBP bends consensus TATA DNA from 104 degrees to 80 degrees when calculated using a two-kink model. The kinetic stability of TBP/TATA complexes was greatly reduced by increasing the KCl concentration from 50 mM to 140 mM, which is more physiologically relevant. TFIIA significantly enhanced the kinetic stability of TBP/TATA complexes, thereby attenuating the effect of higher salt concentrations. We also found that TBP bent non-consensus TATA DNA to a lesser degree than consensus TATA DNA and complexes between TBP and a non-consensus TATA box were kinetically unstable even at 50 mM KCl. Interestingly, TFIIA increased the calculated bend angle and kinetic stability of complexes on a non-consensus TATA box, making them similar to those on a consensus TATA box. Our data show that TFIIA induces a conformational change within the TBP/TATA complex that enhances its stability under both in vitro and physiological salt conditions. Furthermore, we present a refined model for the effect that TFIIA has on DNA conformation that takes into account potential changes in bend angle as well as twist angle.
The evolution of complex life cycles when parasite mortality is size- or time-dependent.
Ball, M A; Parker, G A; Chubb, J C
2008-07-07
In complex cycles, helminth larvae in their intermediate hosts typically grow to a fixed size. We define this cessation of growth before transmission to the next host as growth arrest at larval maturity (GALM). Where the larval parasite controls its own growth in the intermediate host, in order that growth eventually arrests, some form of size- or time-dependent increase in its death rate must apply. In contrast, the switch from growth to sexual reproduction in the definitive host can be regulated by constant (time-independent) mortality as in standard life history theory. We here develop a step-wise model for the evolution of complex helminth life cycles through trophic transmission, based on the approach of Parker et al. [2003a. Evolution of complex life cycles in helminth parasites. Nature London 425, 480-484], but which includes size- or time-dependent increase in mortality rate. We assume that the growing larval parasite has two components to its death rate: (i) a constant, size- or time-independent component, and (ii) a component that increases with size or time in the intermediate host. When growth stops at larval maturity, there is a discontinuous change in mortality to a constant (time-independent) rate. This model generates the same optimal size for the parasite larva at GALM in the intermediate host whether the evolutionary approach to the complex life cycle is by adding a new host above the original definitive host (upward incorporation), or below the original definitive host (downward incorporation). We discuss some unexplored problems for cases where complex life cycles evolve through trophic transmission.
Regier, Mary C; Maccoux, Lindsey J; Weinberger, Emma M; Regehr, Keil J; Berry, Scott M; Beebe, David J; Alarid, Elaine T
2016-08-01
Heterotypic interactions in cancer microenvironments play important roles in disease initiation, progression, and spread. Co-culture is the predominant approach used in dissecting paracrine interactions between tumor and stromal cells, but functional results from simple co-cultures frequently fail to correlate to in vivo conditions. Though complex heterotypic in vitro models have improved functional relevance, there is little systematic knowledge of how multi-culture parameters influence this recapitulation. We therefore have employed a more iterative approach to investigate the influence of increasing model complexity; increased heterotypic complexity specifically. Here we describe how the compartmentalized and microscale elements of our multi-culture device allowed us to obtain gene expression data from one cell type at a time in a heterotypic culture where cells communicated through paracrine interactions. With our device we generated a large dataset comprised of cell type specific gene-expression patterns for cultures of increasing complexity (three cell types in mono-, co-, or tri-culture) not readily accessible in other systems. Principal component analysis indicated that gene expression was changed in co-culture but was often more strongly altered in tri-culture as compared to mono-culture. Our analysis revealed that cell type identity and the complexity around it (mono-, co-, or tri-culture) influence gene regulation. We also observed evidence of complementary regulation between cell types in the same heterotypic culture. Here we demonstrate the utility of our platform in providing insight into how tumor and stromal cells respond to microenvironments of varying complexities highlighting the expanding importance of heterotypic cultures that go beyond conventional co-culture.
NASA Astrophysics Data System (ADS)
Wagenbrenner, N. S.; Forthofer, J.; Butler, B.; Shannon, K.
2014-12-01
Near-surface wind predictions are important for a number of applications, including transport and dispersion, wind energy forecasting, and wildfire behavior. Researchers and forecasters would benefit from a wind model that could be readily applied to complex terrain for use in these various disciplines. Unfortunately, near-surface winds in complex terrain are not handled well by traditional modeling approaches. Numerical weather prediction models employ coarse horizontal resolutions which do not adequately resolve sub-grid terrain features important to the surface flow. Computational fluid dynamics (CFD) models are increasingly being applied to simulate atmospheric boundary layer (ABL) flows, especially in wind energy applications; however, the standard functionality provided in commercial CFD models is not suitable for ABL flows. Appropriate CFD modeling in the ABL requires modification of empirically-derived wall function parameters and boundary conditions to avoid erroneous streamwise gradients due to inconsistences between inlet profiles and specified boundary conditions. This work presents a new version of a near-surface wind model for complex terrain called WindNinja. The new version of WindNinja offers two options for flow simulations: 1) the native, fast-running mass-consistent method available in previous model versions and 2) a CFD approach based on the OpenFOAM modeling framework and optimized for ABL flows. The model is described and evaluations of predictions with surface wind data collected from two recent field campaigns in complex terrain are presented. A comparison of predictions from the native mass-consistent method and the new CFD method is also provided.
Fast neuromimetic object recognition using FPGA outperforms GPU implementations.
Orchard, Garrick; Martin, Jacob G; Vogelstein, R Jacob; Etienne-Cummings, Ralph
2013-08-01
Recognition of objects in still images has traditionally been regarded as a difficult computational problem. Although modern automated methods for visual object recognition have achieved steadily increasing recognition accuracy, even the most advanced computational vision approaches are unable to obtain performance equal to that of humans. This has led to the creation of many biologically inspired models of visual object recognition, among them the hierarchical model and X (HMAX) model. HMAX is traditionally known to achieve high accuracy in visual object recognition tasks at the expense of significant computational complexity. Increasing complexity, in turn, increases computation time, reducing the number of images that can be processed per unit time. In this paper we describe how the computationally intensive and biologically inspired HMAX model for visual object recognition can be modified for implementation on a commercial field-programmable aate Array, specifically the Xilinx Virtex 6 ML605 evaluation board with XC6VLX240T FPGA. We show that with minor modifications to the traditional HMAX model we can perform recognition on images of size 128 × 128 pixels at a rate of 190 images per second with a less than 1% loss in recognition accuracy in both binary and multiclass visual object recognition tasks.
International Land Model Benchmarking (ILAMB) Workshop Report, Technical Report DOE/SC-0186
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forrest M.; Koven, Charles D.; Kappel-Aleks, Gretchen
2016-11-01
As Earth system models become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing uncertainties associated with projections of climate change during the remainder of the 21st century.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridley, Mora K.; Machesky, Michael L.; Wesolowski, David J
2005-01-01
The adsorption of Nd{sup 3+} onto rutile surfaces was examined by potentiometric titration from 25 to 250 C, in 0.03 and 0.30m NaCl background electrolyte. Experimental results show that Nd{sup 3+} sorbs strongly, even at low temperature, with adsorption commencing below the pHznpc of rutile. In addition, there is a systematic increase in Nd{sup 3+} adsorption with increasing temperature. The experimental results were rationalized and described using surface oxygen proton affinities computed from the MUlti SIte Complexation or MUSIC model, coupled with a Stern-based three-layer description of the oxide/water interface. Moreover, molecular-scale information was incorporated successfully into the surface complexationmore » model, providing a unique geometry for the adsorption of Nd{sup 3+} on rutile. The primary mode of Nd{sup 3+} adsorption was assumed to be the tetradentate configuration found for Y{sup 3+} adsorption on the rutile (110) surface from previously described in situ X-ray standing wave experiments, wherein the sorbing cations bond directly with two adjacent ''terminal'' and two adjacent ''bridging'' surface oxygen atoms. Similarly, the adsorption of Na{sup +} counterions was also assumed to be tetradentate, as supported by MD simulations of Na{sup +} interactions with the rutile (110) surface, and by analogous X-ray standing wave results for Rb{sup +} adsorption on rutile. Fitting parameters for Nd{sup 3+} adsorption included binding constants for the tetradentate adsorption complex and capacitance values for the inner-sphere binding plane. In addition, hydrolysis of the tetradentate adsorption complex was permitted and resulted in significantly improved model fits at higher temperature and pH values. The modeling results indicate that the Stern-based MUSIC surface-complexation model adequately accommodates molecular-scale information to uniquely rationalize and describe multivalent ion adsorption systematically into the hydrothermal regime.« less
NASA Astrophysics Data System (ADS)
Marsac, R.; Davranche, M.; Gruau, G.; Dia, A.
2009-04-01
In natural organic-rich waters, rare earth elements (REE) speciation is mainly controlled by organic colloids such as humic acid (HA). Different series of REE-HA complexation experiments performed at several metal loading (REE/C) displayed two pattern shapes (i) at high metal loading, a middle-REE (MREE) downward concavity, and (ii) at low metal loading, a regular increase from La to Lu (e.g. Sonke and Salters, 2006; Pourret et al., 2007). Both REE patterns might be related to REE binding with different surface sites on HA. To understand REE-HA binding, REE-HA complexation experiments at various metals loading were carried out using ultrafiltration combined with ICP-MS measurements, for the 14 REE simultaneously. The patterns of the apparent coefficients of REE partition between HA and the inorganic solution (log Kd) evolved regularly according to the metal loading. The REE patterns presented a MREE downward concavity at low loading and a regular increase from La to Lu at high loading. The dataset was modelled with Model VI by adjusting two specific parameters, log KMA, the apparent complexation constant of HA low affinity sites and DLK2, the parameter increasing high affinity sites binding strength. Experiments and modelling provided evidence that HA high affinity sites controlled the REE binding with HA at low metal loading. The REE-HA complex could be as multidentate complexes with carboxylic or phenolic sites or potentially with sites constituted of N, P or S as donor atoms. Moreover, these high affinity sites could be different for light and heavy REE, because heavy REE have higher affinity for these sites, in low density, and could saturate them. These new Model VI parameter sets allowed the prediction of the REE-HA pattern shape evolution on a large range of pH and metal loading. According to the metal loading, the evolution of the calculated REE patterns was similar to the various REE pattern observed in natural acidic organic-rich waters (pH<7 and DOC>10 mg L-1). As a consequence, the metal loading could be the key parameter controlling the REE pattern in organic-rich waters.
Differential responses of targeted lung redox enzymes to rat exposure to 60 or 85% oxygen
Gan, Zhuohui; Roerig, David L.; Clough, Anne V.
2011-01-01
Rat exposure to 60% O2 (hyper-60) or 85% O2 (hyper-85) for 7 days confers susceptibility or tolerance, respectively, of the otherwise lethal effects of exposure to 100% O2. The objective of this study was to determine whether activities of the antioxidant cytosolic enzyme NAD(P)H:quinone oxidoreductase 1 (NQO1) and mitochondrial complex III are differentially altered in hyper-60 and hyper-85 lungs. Duroquinone (DQ), an NQO1 substrate, or its hydroquinone (DQH2), a complex III substrate, was infused into the arterial inflow of isolated, perfused lungs, and the venous efflux rates of DQH2 and DQ were measured. Based on inhibitor effects and kinetic modeling, capacities of NQO1-mediated DQ reduction (Vmax1) and complex III-mediated DQH2 oxidation (Vmax2) increased by ∼140 and ∼180% in hyper-85 lungs, respectively, compared with rates in lungs of rats exposed to room air (normoxic). In hyper-60 lungs, Vmax1 increased by ∼80%, with no effect on Vmax2. Additional studies revealed that mitochondrial complex I activity in hyper-60 and hyper-85 lung tissue homogenates was ∼50% lower than in normoxic lung homogenates, whereas mitochondrial complex IV activity was ∼90% higher in only hyper-85 lung tissue homogenates. Thus NQO1 activity increased in both hyper-60 and hyper-85 lungs, whereas complex III activity increased in hyper-85 lungs only. This increase, along with the increase in complex IV activity, may counter the effects the depression in complex I activity might have on tissue mitochondrial function and/or reactive oxygen species production and may be important to the tolerance of 100% O2 observed in hyper-85 rats. PMID:21551015
Differential responses of targeted lung redox enzymes to rat exposure to 60 or 85% oxygen.
Gan, Zhuohui; Roerig, David L; Clough, Anne V; Audi, Said H
2011-07-01
Rat exposure to 60% O(2) (hyper-60) or 85% O(2) (hyper-85) for 7 days confers susceptibility or tolerance, respectively, of the otherwise lethal effects of exposure to 100% O(2). The objective of this study was to determine whether activities of the antioxidant cytosolic enzyme NAD(P)H:quinone oxidoreductase 1 (NQO1) and mitochondrial complex III are differentially altered in hyper-60 and hyper-85 lungs. Duroquinone (DQ), an NQO1 substrate, or its hydroquinone (DQH(2)), a complex III substrate, was infused into the arterial inflow of isolated, perfused lungs, and the venous efflux rates of DQH(2) and DQ were measured. Based on inhibitor effects and kinetic modeling, capacities of NQO1-mediated DQ reduction (V(max1)) and complex III-mediated DQH(2) oxidation (V(max2)) increased by ∼140 and ∼180% in hyper-85 lungs, respectively, compared with rates in lungs of rats exposed to room air (normoxic). In hyper-60 lungs, V(max1) increased by ∼80%, with no effect on V(max2). Additional studies revealed that mitochondrial complex I activity in hyper-60 and hyper-85 lung tissue homogenates was ∼50% lower than in normoxic lung homogenates, whereas mitochondrial complex IV activity was ∼90% higher in only hyper-85 lung tissue homogenates. Thus NQO1 activity increased in both hyper-60 and hyper-85 lungs, whereas complex III activity increased in hyper-85 lungs only. This increase, along with the increase in complex IV activity, may counter the effects the depression in complex I activity might have on tissue mitochondrial function and/or reactive oxygen species production and may be important to the tolerance of 100% O(2) observed in hyper-85 rats.
Model structures amplify uncertainty in predicted soil carbon responses to climate change.
Shi, Zheng; Crowell, Sean; Luo, Yiqi; Moore, Berrien
2018-06-04
Large model uncertainty in projected future soil carbon (C) dynamics has been well documented. However, our understanding of the sources of this uncertainty is limited. Here we quantify the uncertainties arising from model parameters, structures and their interactions, and how those uncertainties propagate through different models to projections of future soil carbon stocks. Both the vertically resolved model and the microbial explicit model project much greater uncertainties to climate change than the conventional soil C model, with both positive and negative C-climate feedbacks, whereas the conventional model consistently predicts positive soil C-climate feedback. Our findings suggest that diverse model structures are necessary to increase confidence in soil C projection. However, the larger uncertainty in the complex models also suggests that we need to strike a balance between model complexity and the need to include diverse model structures in order to forecast soil C dynamics with high confidence and low uncertainty.
Computational Modeling of Human Metabolism and Its Application to Systems Biomedicine.
Aurich, Maike K; Thiele, Ines
2016-01-01
Modern high-throughput techniques offer immense opportunities to investigate whole-systems behavior, such as those underlying human diseases. However, the complexity of the data presents challenges in interpretation, and new avenues are needed to address the complexity of both diseases and data. Constraint-based modeling is one formalism applied in systems biology. It relies on a genome-scale reconstruction that captures extensive biochemical knowledge regarding an organism. The human genome-scale metabolic reconstruction is increasingly used to understand normal cellular and disease states because metabolism is an important factor in many human diseases. The application of human genome-scale reconstruction ranges from mere querying of the model as a knowledge base to studies that take advantage of the model's topology and, most notably, to functional predictions based on cell- and condition-specific metabolic models built based on omics data.An increasing number and diversity of biomedical questions are being addressed using constraint-based modeling and metabolic models. One of the most successful biomedical applications to date is cancer metabolism, but constraint-based modeling also holds great potential for inborn errors of metabolism or obesity. In addition, it offers great prospects for individualized approaches to diagnostics and the design of disease prevention and intervention strategies. Metabolic models support this endeavor by providing easy access to complex high-throughput datasets. Personalized metabolic models have been introduced. Finally, constraint-based modeling can be used to model whole-body metabolism, which will enable the elucidation of metabolic interactions between organs and disturbances of these interactions as either causes or consequence of metabolic diseases. This chapter introduces constraint-based modeling and describes some of its contributions to systems biomedicine.
Nguyen, Hai; Pérez, Alberto; Bermeo, Sherry; Simmerling, Carlos
2016-01-01
The Generalized Born (GB) implicit solvent model has undergone significant improvements in accuracy for modeling of proteins and small molecules. However, GB still remains a less widely explored option for nucleic acid simulations, in part because fast GB models are often unable to maintain stable nucleic acid structures, or they introduce structural bias in proteins, leading to difficulty in application of GB models in simulations of protein-nucleic acid complexes. Recently, GB-neck2 was developed to improve the behavior of protein simulations. In an effort to create a more accurate model for nucleic acids, a similar procedure to the development of GB-neck2 is described here for nucleic acids. The resulting parameter set significantly reduces absolute and relative energy error relative to Poisson Boltzmann for both nucleic acids and nucleic acid-protein complexes, when compared to its predecessor GB-neck model. This improvement in solvation energy calculation translates to increased structural stability for simulations of DNA and RNA duplexes, quadruplexes, and protein-nucleic acid complexes. The GB-neck2 model also enables successful folding of small DNA and RNA hairpins to near native structures as determined from comparison with experiment. The functional form and all required parameters are provided here and also implemented in the AMBER software. PMID:26574454
Dos Santos, Hélio F; Paschoal, Diego; Burda, Jaroslav V
2012-11-15
The reactivity of gold(III) complexes is analyzed for a series of derivatives of 3-azapentane-1,5-diamine (dien) tridentate ligand that can contain some bulky substituents. Two distinct series of compounds are considered where the dien ligand is either deprotonated (R-dien-H) or protonated (R-dien) at the secondary amine where R = ethyl (Et) or methyl (Me). While the deprotonated species will occur in neutral and basic solutions, the protonated forms are likely to be present in acidic environment. Hydration reaction (water/Cl(-) ligand exchange) of 14 complexes is modeled with quantum chemical calculations. Our calculations predict that the reactivity decreases with the increase in the molecular volume of the substituted dien ligand, and the calculated rate constants are in satisfactory agreement with experimental results. In addition, quantitative structure/reactivity models are proposed where the angle between the entering and leaving groups in the transition state structure (the reactivity angle) is used as a molecular descriptor. These models explain the trend of the relative reactivity of these complexes and can be used to design new ligands for gold(III) complexes aiming to adjust the reactivity of the complex.
Mechanism of synergistic activation of Arp2/3 complex by cortactin and N-WASP
Helgeson, Luke A; Nolen, Brad J
2013-01-01
Nucleation promoting factors (NPFs) initiate branched actin network assembly by activating Arp2/3 complex, a branched actin filament nucleator. Cellular actin networks contain multiple NPFs, but how they coordinately regulate Arp2/3 complex is unclear. Cortactin is an NPF that activates Arp2/3 complex weakly on its own, but with WASP/N-WASP, another class of NPFs, potently activates. We dissect the mechanism of synergy and propose a model in which cortactin displaces N-WASP from nascent branches as a prerequisite for nucleation. Single-molecule imaging revealed that unlike WASP/N-WASP, cortactin remains bound to junctions during nucleation, and specifically targets junctions with a ∼160-fold increased on rate over filament sides. N-WASP must be dimerized for potent synergy, and targeted mutations indicate release of dimeric N-WASP from nascent branches limits nucleation. Mathematical modeling shows cortactin-mediated displacement but not N-WASP recycling or filament recruitment models can explain synergy. Our results provide a molecular basis for coordinate Arp2/3 complex regulation. DOI: http://dx.doi.org/10.7554/eLife.00884.001 PMID:24015358
New Age of 3D Geological Modelling or Complexity is not an Issue Anymore
NASA Astrophysics Data System (ADS)
Mitrofanov, Aleksandr
2017-04-01
Geological model has a significant value in almost all types of researches related to regional mapping, geodynamics and especially to structural and resource geology of mineral deposits. Well-developed geological model must take into account all vital features of modelling object without over-simplification and also should adequately represent the interpretation of the geologist. In recent years with the gradual exhaustion deposits with relatively simple morphology geologists from all over the world are faced with the necessity of building the representative models for more and more structurally complex objects. Meanwhile, the amount of tools used for that has not significantly changed in the last two-three decades. The most widespread method of wireframe geological modelling now was developed in 1990s and is fully based on engineering design set of instruments (so-called CAD). Strings and polygons representing the section-based interpretation are being used as an intermediate step in the process of wireframes generation. Despite of significant time required for this type of modelling, it still can provide sufficient results for simple and medium-complexity geological objects. However, with the increasing complexity more and more vital features of the deposit are being sacrificed because of fundamental inability (or much greater time required for modelling) of CAD-based explicit techniques to develop the wireframes of the appropriate complexity. At the same time alternative technology which is not based on sectional approach and which uses the fundamentally different mathematical algorithms is being actively developed in the variety of other disciplines: medicine, advanced industrial design, game and cinema industry. In the recent years this implicit technology started to being developed for geological modelling purpose and nowadays it is represented by very powerful set of tools that has been integrated in almost all major commercial software packages. Implicit modelling allows to develop geological models that really correspond with complicated geological reality. Models can include fault blocking, complex structural trends and folding; can be based on excessive input dataset (like lots of drilling on the mining stage) or, on the other hand, on a quite few drillholes intersections with significant input from geological interpretation of the deposit. In any case implicit modelling, if is used correctly, allows to incorporate the whole batch of geological data and relatively quickly get the easily adjustable, flexible and robust geological wireframes that can be used as a reliable foundation on the following stages of geological investigations. In SRK practice nowadays almost all the wireframe models used for structural and resource geology are developed with implicit modelling tools which significantly increased the speed and quality of geological modelling.
Takagi-Sugeno-Kang fuzzy models of the rainfall-runoff transformation
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
Fuzzy inference systems, or fuzzy models, are non-linear models that describe the relation between the inputs and the output of a real system using a set of fuzzy IF-THEN rules. This study deals with the application of Takagi-Sugeno-Kang type fuzzy models to the development of rainfall-runoff models operating on a daily basis, using a system based approach. The models proposed are classified in two types, each intended to account for different kinds of dominant non-linear effects in the rainfall-runoff relationship. Fuzzy models type 1 are intended to incorporate the effect of changes in the prevailing soil moisture content, while fuzzy models type 2 address the phenomenon of seasonality. Each model type consists of five fuzzy models of increasing complexity; the most complex fuzzy model of each model type includes all the model components found in the remaining fuzzy models of the respective type. The models developed are applied to data of six catchments from different geographical locations and sizes. Model performance is evaluated in terms of two measures of goodness of fit, namely the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the fuzzy models are compared with those of the Simple Linear Model, the Linear Perturbation Model and the Nearest Neighbour Linear Perturbation Model, which use similar input information. Overall, the results of this study indicate that Takagi-Sugeno-Kang fuzzy models are a suitable alternative for modelling the rainfall-runoff relationship. However, it is also observed that increasing the complexity of the model structure does not necessarily produce an improvement in the performance of the fuzzy models. The relative importance of the different model components in determining the model performance is evaluated through sensitivity analysis of the model parameters in the accompanying study presented in this meeting. Acknowledgements: We would like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
The Use of Behavior Models for Predicting Complex Operations
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2010-01-01
Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.
PROCEEDINGS OF THE CROSS DISCIPLINE ECOSYTEM MODELING AND ANALYSIS WORKSHOP
The complexity of environmental problems we face now and in the future is ever increasing. Process linkages among air, land, surface and subsurface water require interdisciplinary modeling approaches. The dynamics of land use change spurred by population and economic growth, ...
A Review of Contemporary Ethical Decision-Making Models for Mental Health Professionals
ERIC Educational Resources Information Center
Francis, Perry C.
2015-01-01
Mental health professionals are faced with increasingly complex ethical decisions that are impacted by culture, personal and professional values, and the contexts in which they and their clients inhabit. This article presents the reasons for developing and implementing multiple ethical decision making models and reviews four models that address…
Understanding Rasch Measurement: Partial Credit Model and Pivot Anchoring.
ERIC Educational Resources Information Center
Bode, Rita K.
2001-01-01
Describes the Rasch measurement partial credit model, what it is, how it differs from other Rasch models, and when and how to use it. Also describes the calibration of instruments with increasingly complex items. Explains pivot anchoring and illustrates its use and describes the effect of pivot anchoring on step calibrations, item hierarchy, and…
Coding response to a case-mix measurement system based on multiple diagnoses.
Preyra, Colin
2004-08-01
To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
NASA Astrophysics Data System (ADS)
Rubinstein, A.; Sabirianov, R. F.; Mei, W. N.; Namavar, F.; Khoynezhad, A.
2010-08-01
Using a nonlocal electrostatic approach that incorporates the short-range structure of the contacting media, we evaluated the electrostatic contribution to the energy of the complex formation of two model proteins. In this study, we have demonstrated that the existence of an ordered interfacial water layer at the protein-solvent interface reduces the charging energy of the proteins in the aqueous solvent, and consequently increases the electrostatic contribution to the protein binding (change in free energy upon the complex formation of two proteins). This is in contrast with the finding of the continuum electrostatic model, which suggests that electrostatic interactions are not strong enough to compensate for the unfavorable desolvation effects.
Rubinstein, A; Sabirianov, R F; Mei, W N; Namavar, F; Khoynezhad, A
2010-08-01
Using a nonlocal electrostatic approach that incorporates the short-range structure of the contacting media, we evaluated the electrostatic contribution to the energy of the complex formation of two model proteins. In this study, we have demonstrated that the existence of an ordered interfacial water layer at the protein-solvent interface reduces the charging energy of the proteins in the aqueous solvent, and consequently increases the electrostatic contribution to the protein binding (change in free energy upon the complex formation of two proteins). This is in contrast with the finding of the continuum electrostatic model, which suggests that electrostatic interactions are not strong enough to compensate for the unfavorable desolvation effects.
Quantifying networks complexity from information geometry viewpoint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Felice, Domenico, E-mail: domenico.felice@unicam.it; Mancini, Stefano; INFN-Sezione di Perugia, Via A. Pascoli, I-06123 Perugia
We consider a Gaussian statistical model whose parameter space is given by the variances of random variables. Underlying this model we identify networks by interpreting random variables as sitting on vertices and their correlations as weighted edges among vertices. We then associate to the parameter space a statistical manifold endowed with a Riemannian metric structure (that of Fisher-Rao). Going on, in analogy with the microcanonical definition of entropy in Statistical Mechanics, we introduce an entropic measure of networks complexity. We prove that it is invariant under networks isomorphism. Above all, considering networks as simplicial complexes, we evaluate this entropy onmore » simplexes and find that it monotonically increases with their dimension.« less
UML as a cell and biochemistry modeling language.
Webb, Ken; White, Tony
2005-06-01
The systems biology community is building increasingly complex models and simulations of cells and other biological entities, and are beginning to look at alternatives to traditional representations such as those provided by ordinary differential equations (ODE). The lessons learned over the years by the software development community in designing and building increasingly complex telecommunication and other commercial real-time reactive systems, can be advantageously applied to the problems of modeling in the biology domain. Making use of the object-oriented (OO) paradigm, the unified modeling language (UML) and Real-Time Object-Oriented Modeling (ROOM) visual formalisms, and the Rational Rose RealTime (RRT) visual modeling tool, we describe a multi-step process we have used to construct top-down models of cells and cell aggregates. The simple example model described in this paper includes membranes with lipid bilayers, multiple compartments including a variable number of mitochondria, substrate molecules, enzymes with reaction rules, and metabolic pathways. We demonstrate the relevance of abstraction, reuse, objects, classes, component and inheritance hierarchies, multiplicity, visual modeling, and other current software development best practices. We show how it is possible to start with a direct diagrammatic representation of a biological structure such as a cell, using terminology familiar to biologists, and by following a process of gradually adding more and more detail, arrive at a system with structure and behavior of arbitrary complexity that can run and be observed on a computer. We discuss our CellAK (Cell Assembly Kit) approach in terms of features found in SBML, CellML, E-CELL, Gepasi, Jarnac, StochSim, Virtual Cell, and membrane computing systems.
Revisiting the structures of several antibiotics bound to the bacterial ribosome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bulkley, David; Innis, C. Axel; Blaha, Gregor
2010-10-08
The increasing prevalence of antibiotic-resistant pathogens reinforces the need for structures of antibiotic-ribosome complexes that are accurate enough to enable the rational design of novel ribosome-targeting therapeutics. Structures of many antibiotics in complex with both archaeal and eubacterial ribosomes have been determined, yet discrepancies between several of these models have raised the question of whether these differences arise from species-specific variations or from experimental problems. Our structure of chloramphenicol in complex with the 70S ribosome from Thermus thermophilus suggests a model for chloramphenicol bound to the large subunit of the bacterial ribosome that is radically different from the prevailing model.more » Further, our structures of the macrolide antibiotics erythromycin and azithromycin in complex with a bacterial ribosome are indistinguishable from those determined of complexes with the 50S subunit of Haloarcula marismortui, but differ significantly from the models that have been published for 50S subunit complexes of the eubacterium Deinococcus radiodurans. Our structure of the antibiotic telithromycin bound to the T. thermophilus ribosome reveals a lactone ring with a conformation similar to that observed in the H. marismortui and D. radiodurans complexes. However, the alkyl-aryl moiety is oriented differently in all three organisms, and the contacts observed with the T. thermophilus ribosome are consistent with biochemical studies performed on the Escherichia coli ribosome. Thus, our results support a mode of macrolide binding that is largely conserved across species, suggesting that the quality and interpretation of electron density, rather than species specificity, may be responsible for many of the discrepancies between the models.« less
Revisiting the Structures of Several Antibiotics Bound to the Bacterial Ribosome
DOE Office of Scientific and Technical Information (OSTI.GOV)
D Bulkley; C Innis; G Blaha
2011-12-31
The increasing prevalence of antibiotic-resistant pathogens reinforces the need for structures of antibiotic-ribosome complexes that are accurate enough to enable the rational design of novel ribosome-targeting therapeutics. Structures of many antibiotics in complex with both archaeal and eubacterial ribosomes have been determined, yet discrepancies between several of these models have raised the question of whether these differences arise from species-specific variations or from experimental problems. Our structure of chloramphenicol in complex with the 70S ribosome from Thermus thermophilus suggests a model for chloramphenicol bound to the large subunit of the bacterial ribosome that is radically different from the prevailing model.more » Further, our structures of the macrolide antibiotics erythromycin and azithromycin in complex with a bacterial ribosome are indistinguishable from those determined of complexes with the 50S subunit of Haloarcula marismortui, but differ significantly from the models that have been published for 50S subunit complexes of the eubacterium Deinococcus radiodurans. Our structure of the antibiotic telithromycin bound to the T. thermophilus ribosome reveals a lactone ring with a conformation similar to that observed in the H. marismortui and D. radiodurans complexes. However, the alkyl-aryl moiety is oriented differently in all three organisms, and the contacts observed with the T. thermophilus ribosome are consistent with biochemical studies performed on the Escherichia coli ribosome. Thus, our results support a mode of macrolide binding that is largely conserved across species, suggesting that the quality and interpretation of electron density, rather than species specificity, may be responsible for many of the discrepancies between the models.« less
Teal, Cayla R; Street, Richard L
2009-02-01
Increasing the cultural competence of physicians is one means of responding to demographic changes in the USA, as well as reducing health disparities. However, in spite of the development and implementation of cultural competence training programs, little is known about the ways cultural competence manifests itself in medical encounters. This paper will present a model of culturally competent communication that offers a framework of studying cultural competence 'in action.' First, we describe four critical elements of culturally competent communication in the medical encounter--communication repertoire, situational awareness, adaptability, and knowledge about core cultural issues. We present a model of culturally competent physician communication that integrates existing frameworks for cultural competence in patient care with models of effective patient-centered communication. The culturally competent communication model includes five communication skills that are depicted as elements of a set in which acquisition of more skills corresponds to increasing complexity and culturally competent communication. The culturally competent communication model utilizes each of the four critical elements to fully develop each skill and apply increasingly sophisticated, contextually appropriate communication behaviors to engage with culturally different patients in complex interactions. It is designed to foster maximum physician sensitivity to cultural variation in patients as the foundation of physician-communication competence in interacting with patients.
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2014-01-01
This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.
Ligand placement based on prior structures: the guided ligand-replacement method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klei, Herbert E.; Bristol-Myers Squibb, Princeton, NJ 08543-4000; Moriarty, Nigel W., E-mail: nwmoriarty@lbl.gov
2014-01-01
A new module, Guided Ligand Replacement (GLR), has been developed in Phenix to increase the ease and success rate of ligand placement when prior protein-ligand complexes are available. The process of iterative structure-based drug design involves the X-ray crystal structure determination of upwards of 100 ligands with the same general scaffold (i.e. chemotype) complexed with very similar, if not identical, protein targets. In conjunction with insights from computational models and assays, this collection of crystal structures is analyzed to improve potency, to achieve better selectivity and to reduce liabilities such as absorption, distribution, metabolism, excretion and toxicology. Current methods formore » modeling ligands into electron-density maps typically do not utilize information on how similar ligands bound in related structures. Even if the electron density is of sufficient quality and resolution to allow de novo placement, the process can take considerable time as the size, complexity and torsional degrees of freedom of the ligands increase. A new module, Guided Ligand Replacement (GLR), was developed in Phenix to increase the ease and success rate of ligand placement when prior protein–ligand complexes are available. At the heart of GLR is an algorithm based on graph theory that associates atoms in the target ligand with analogous atoms in the reference ligand. Based on this correspondence, a set of coordinates is generated for the target ligand. GLR is especially useful in two situations: (i) modeling a series of large, flexible, complicated or macrocyclic ligands in successive structures and (ii) modeling ligands as part of a refinement pipeline that can automatically select a reference structure. Even in those cases for which no reference structure is available, if there are multiple copies of the bound ligand per asymmetric unit GLR offers an efficient way to complete the model after the first ligand has been placed. In all of these applications, GLR leverages prior knowledge from earlier structures to facilitate ligand placement in the current structure.« less
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
Improving a regional model using reduced complexity and parameter estimation.
Kelson, Victor A; Hunt, Randall J; Haitjema, Henk M
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
Analytical Modelling of the Spread of Disease in Confined and Crowded Spaces
NASA Astrophysics Data System (ADS)
Goscé, Lara; Barton, David A. W.; Johansson, Anders
2014-05-01
Since 1927 and until recently, most models describing the spread of disease have been of compartmental type, based on the assumption that populations are homogeneous and well-mixed. Recent models have utilised agent-based models and complex networks to explicitly study heterogeneous interaction patterns, but this leads to an increasing computational complexity. Compartmental models are appealing because of their simplicity, but their parameters, especially the transmission rate, are complex and depend on a number of factors, which makes it hard to predict how a change of a single environmental, demographic, or epidemiological factor will affect the population. Therefore, in this contribution we propose a middle ground, utilising crowd-behaviour research to improve compartmental models in crowded situations. We show how both the rate of infection as well as the walking speed depend on the local crowd density around an infected individual. The combined effect is that the rate of infection at a population scale has an analytically tractable non-linear dependency on crowd density. We model the spread of a hypothetical disease in a corridor and compare our new model with a typical compartmental model, which highlights the regime in which current models may not produce credible results.
A brief introduction to mixed effects modelling and multi-model inference in ecology
Donaldson, Lynda; Correa-Cano, Maria Eugenia; Goodwin, Cecily E.D.
2018-01-01
The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions. PMID:29844961
A brief introduction to mixed effects modelling and multi-model inference in ecology.
Harrison, Xavier A; Donaldson, Lynda; Correa-Cano, Maria Eugenia; Evans, Julian; Fisher, David N; Goodwin, Cecily E D; Robinson, Beth S; Hodgson, David J; Inger, Richard
2018-01-01
The use of linear mixed effects models (LMMs) is increasingly common in the analysis of biological data. Whilst LMMs offer a flexible approach to modelling a broad range of data types, ecological data are often complex and require complex model structures, and the fitting and interpretation of such models is not always straightforward. The ability to achieve robust biological inference requires that practitioners know how and when to apply these tools. Here, we provide a general overview of current methods for the application of LMMs to biological data, and highlight the typical pitfalls that can be encountered in the statistical modelling process. We tackle several issues regarding methods of model selection, with particular reference to the use of information theory and multi-model inference in ecology. We offer practical solutions and direct the reader to key references that provide further technical detail for those seeking a deeper understanding. This overview should serve as a widely accessible code of best practice for applying LMMs to complex biological problems and model structures, and in doing so improve the robustness of conclusions drawn from studies investigating ecological and evolutionary questions.
Takeda, Kunio; Moriyama, Yoshiko
2015-01-01
The kinetic mechanism of surfactant-induced protein denaturation is discussed on the basis of not only stopped-flow kinetic data but also the changes of protein helicities caused by the surfactants and the discontinuous mobility changes of surfactant-protein complexes. For example, the α-helical structures of bovine serum albumin (BSA) are partially disrupted due to the addition of sodium dodecyl sulfate (SDS). Formation of SDS-BSA complex can lead to only four complex types with specific mobilities depending on the surfactant concentration. On the other hand, the apparent rate constant of the structural change of BSA increases with an increase of SDS concentration, indicating that the rate of the structural change becomes fast as the degree of the change increases. When a certain amount of surfactant ions bind to proteins, their native structures transform directly to particular structures without passing through intermediate stages that might be induced due to the binding of fewer amounts of the surfactant ions. Furthermore, this review brings up a question about two-state and three-state models, N⇌D and N⇌D'⇌D (N: native state, D: denatured sate, D': intermediate between N and D), which have been often adopted without hesitation in discussion on general denaturations of proteins. First of all, doubtful is whether any equilibrium relationship exists in such denaturation reactions. It cannot be disregarded that the D states in these models differ depending on the changes of intensities of the denaturing factors. The authors emphasize that the denaturations or the structural changes of proteins should be discussed assuming one-way reaction models with no backward processes rather than assuming the reversible two-state reaction models or similar modified reaction models.
Assessment of global flood exposures - developing an appropriate approach
NASA Astrophysics Data System (ADS)
Millinship, Ian; Booth, Naomi
2015-04-01
Increasingly complex probabilistic catastrophe models have become the standard for quantitative flood risk assessments by re/insurance companies. On the one hand, probabilistic modelling of this nature is extremely useful; a large range of risk metrics can be output. However, they can be time consuming and computationally expensive to develop and run. Levels of uncertainty are persistently high despite, or perhaps because of, attempts to increase resolution and complexity. A cycle of dependency between modelling companies and re/insurers has developed whereby available models are purchased, models run, and both portfolio and model data 'improved' every year. This can lead to potential exposures in perils and territories that are not currently modelled being largely overlooked by companies, who may then face substantial and unexpected losses when large events occur in these areas. We present here an approach to assessing global flood exposures which reduces the scale and complexity of approach used and begins with the identification of hotspots where there is a significant exposure to flood risk. The method comprises four stages: i) compile consistent exposure information, ii) to apply reinsurance terms and conditions to calculate values exposed, iii) to assess the potential hazard using a global set of flood hazard maps, and iv) to identify potential risk 'hotspots' which include considerations of spatially and/or temporally clustered historical events, and local flood defences. This global exposure assessment is designed as a scoping exercise, and reveals areas or cities where the potential for accumulated loss is of significant interest to a reinsurance company, and for which there is no existing catastrophe model. These regions are then candidates for the development of deterministic scenarios, or probabilistic models. The key advantages of this approach will be discussed. These include simplicity and ability of business leaders to understand results, as well as ease and speed of analysis and the advantages this can offer in terms of monitoring changing exposures over time. Significantly, in many areas of the world, this increase in exposure is likely to have more of an impact on increasing catastrophe losses than potential anthropogenically driven changes in weather extremes.
How do precision medicine and system biology response to human body's complex adaptability?
Yuan, Bing
2016-12-01
In the field of life sciences, although system biology and "precision medicine" introduce some complex scientifific methods and techniques, it is still based on the "analysis-reconstruction" of reductionist theory as a whole. Adaptability of complex system increase system behaviour uncertainty as well as the difficulties of precise identifification and control. It also put systems biology research into trouble. To grasp the behaviour and characteristics of organism fundamentally, systems biology has to abandon the "analysis-reconstruction" concept. In accordance with the guidelines of complexity science, systems biology should build organism model from holistic level, just like the Chinese medicine did in dealing with human body and disease. When we study the living body from the holistic level, we will fifind the adaptability of complex system is not the obstacle that increases the diffificulty of problem solving. It is the "exceptional", "right-hand man" that helping us to deal with the complexity of life more effectively.
Predictive Surface Complexation Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sverjensky, Dimitri A.
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO 2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall,more » my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.« less
Three-Dimensional Modeling of Fluid and Heat Transport in an Accretionary Complex
NASA Astrophysics Data System (ADS)
Paula, C. A.; Ge, S.; Screaton, E. J.
2001-12-01
As sediments are scraped off of the subducting oceanic crust and accreted to the overriding plate, the rapid loading causes pore pressures in the underthrust sediments to increase. The change in pore pressure drives fluid flow and heat transport within the accretionary complex. Fluid is channeled along higher permeability faults and fractures and expelled at the seafloor. In this investigation, we examined the effects of sediment loading on fluid flow and thermal transport in the decollement at the Barbados Ridge subduction zone. Both the width and thickness of the Barbados Ridge accretionary complex increase from north to south. The presence of mud diapers south of the Tiburon Rise and an observed southward decrease in heat flow measurements indicate that the increased thickness of the southern Barbados accretionary prism affects the transport of chemicals and heat by fluids. The three-dimensional geometry and physical properties of the accretionary complex were utilized to construct a three-dimensional fluid flow/heat transport model. We calculated the pore pressure change due to a period of sediment loading and added this to steady-state pressure conditions to generate initial conditions for transient simulations. We then examined the diffusion of pore pressure and possible perturbation of the thermal regime over time due to loading of the underthrust sediments. The model results show that the sediment-loading event was sufficient to create small temperature fluctuations in the decollement zone. The magnitude of temperature fluctuation in the decollement was greatest at the deformation front but did not vary significantly from north to south of the Tiburon Rise.
Complex food webs prevent competitive exclusion among producer species.
Brose, Ulrich
2008-11-07
Herbivorous top-down forces and bottom-up competition for nutrients determine the coexistence and relative biomass patterns of producer species. Combining models of predator-prey and producer-nutrient interactions with a structural model of complex food webs, I investigated these two aspects in a dynamic food-web model. While competitive exclusion leads to persistence of only one producer species in 99.7% of the simulated simple producer communities without consumers, embedding the same producer communities in complex food webs generally yields producer coexistence. In simple producer communities, the producers with the most efficient nutrient-intake rates increase in biomass until they competitively exclude inferior producers. In food webs, herbivory predominantly reduces the biomass density of those producers that dominated in producer communities, which yields a more even biomass distribution. In contrast to prior analyses of simple modules, this facilitation of producer coexistence by herbivory does not require a trade-off between the nutrient-intake efficiency and the resistance to herbivory. The local network structure of food webs (top-down effects of the number of herbivores and the herbivores' maximum consumption rates) and the nutrient supply (bottom-up effect) interactively determine the relative biomass densities of the producer species. A strong negative feedback loop emerges in food webs: factors that increase producer biomasses also increase herbivory, which reduces producer biomasses. This negative feedback loop regulates the coexistence and biomass patterns of the producers by balancing biomass increases of producers and biomass fluxes to herbivores, which prevents competitive exclusion.
Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.
2017-05-04
The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility.This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, the relation generally seemed to follow a similar pattern as water levels. Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle tracking is anticipated to evaluate if these model design considerations are similarly important for understanding the primary modeling objective - to simulate reasonable groundwater age distributions.
Dynamic Business Networks: A Headache for Sustainable Systems Interoperability
NASA Astrophysics Data System (ADS)
Agostinho, Carlos; Jardim-Goncalves, Ricardo
Collaborative networked environments emerged with the spread of the internet, contributing to overcome past communication barriers, and identifying interoperability as an essential property. When achieved seamlessly, efficiency is increased in the entire product life cycle. Nowadays, most organizations try to attain interoperability by establishing peer-to-peer mappings with the different partners, or in optimized networks, by using international standard models as the core for information exchange. In current industrial practice, mappings are only defined once, and the morphisms that represent them, are hardcoded in the enterprise systems. This solution has been effective for static environments, where enterprise and product models are valid for decades. However, with an increasingly complex and dynamic global market, models change frequently to answer new customer requirements. This paper draws concepts from the complex systems science and proposes a framework for sustainable systems interoperability in dynamic networks, enabling different organizations to evolve at their own rate.
Multiple tipping points and optimal repairing in interacting networks
Majdandzic, Antonio; Braunstein, Lidia A.; Curme, Chester; Vodenska, Irena; Levy-Carciente, Sary; Eugene Stanley, H.; Havlin, Shlomo
2016-01-01
Systems composed of many interacting dynamical networks—such as the human body with its biological networks or the global economic network consisting of regional clusters—often exhibit complicated collective dynamics. Three fundamental processes that are typically present are failure, damage spread and recovery. Here we develop a model for such systems and find a very rich phase diagram that becomes increasingly more complex as the number of interacting networks increases. In the simplest example of two interacting networks we find two critical points, four triple points, ten allowed transitions and two ‘forbidden' transitions, as well as complex hysteresis loops. Remarkably, we find that triple points play the dominant role in constructing the optimal repairing strategy in damaged interacting systems. To test our model, we analyse an example of real interacting financial networks and find evidence of rapid dynamical transitions between well-defined states, in agreement with the predictions of our model. PMID:26926803
Update - Concept of Operations for Integrated Model-Centric Engineering at JPL
NASA Technical Reports Server (NTRS)
Bayer, Todd J.; Bennett, Matthew; Delp, Christopher L.; Dvorak, Daniel; Jenkins, Steven J.; Mandutianu, Sanda
2011-01-01
The increasingly ambitious requirements levied on JPL's space science missions, and the development pace of such missions, challenge our current engineering practices. All the engineering disciplines face this growth in complexity to some degree, but the challenges are greatest in systems engineering where numerous competing interests must be reconciled and where complex system level interactions must be identified and managed. Undesired system-level interactions are increasingly a major risk factor that cannot be reliably exposed by testing, and natural-language single-viewpoint specifications areinadequate to capture and expose system level interactions and characteristics. Systems engineering practices must improve to meet these challenges, and the most promising approach today is the movement toward a more integrated and model-centric approach to mission conception, design, implementation and operations. This approach elevates engineering models to a principal role in systems engineering, gradually replacing traditional document centric engineering practices.
ERIC Educational Resources Information Center
Frazier, Thomas W.; Youngstrom, Eric A.
2007-01-01
A historical increase in the number of factors purportedly measured by commercial tests of cognitive ability may result from four distinct pressures including: increasingly complex models of intelligence, test publishers' desires to provide clinically useful assessment instruments with greater interpretive value, test publishers' desires to…
Reassessing Geophysical Models of the Bushveld Complex in 3D
NASA Astrophysics Data System (ADS)
Cole, J.; Webb, S. J.; Finn, C.
2012-12-01
Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) Separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959) 2) Separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987) 3) A single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998) Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri, et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still on-going debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions. This is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account effects of variations in geometry and geophysical properties of lithologies in a full three dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact on the gravity fields calculated for the existing conceptual models, when modelling in 3D. The three published geophysical models were remodelled using full 3Dl potential field modelling software, and including crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less dense, thicker crust underneath the Bushveld Complex necessitates the presence of dense material in the central area between the eastern and western lobes. The simplest way to achieve this is to model the mafic component of the Bushveld Complex as a single intrusion. This is similar to what the first students of the Bushveld Complex suggested. Conceptual models are by definition simplified versions of the real situation, and the geometry of the Bushveld Complex is expected to be much more intricate. References Cawthorn, R.G., Cooper, G.R.J., Webb, S.J. (1998). Connectivity between the western and eastern limbs of the Bushveld Complex. S Afr J Geol, 101, 291-298. Cousins, C.A. (1959). The structure of the mafic portion of the Bushveld Igneous Complex. Trans Geol Soc S Afr, 62, 179-189. Du Plessis, A., Kleywegt, R.J. (1987). A dipping sheet model for the mafic lobes of the Bushveld Complex. S Afr J Geol, 90, 1-6. Nguuri, T.K., Gore, J., James, D.E., Webb, S.J., Wright, C., Zengeni, T.G., Gwavava, O., Snoke, J.A. and Kaapvaal Seismic Group. (2001). Crustal structure beneath southern Africa and its implications for the formation and evolution of the Kaapvaal and Zimbabwe cratons. Geoph Res Lett, 28, 2501-2504. Webb, S.J., Cawthorn, R.G., Nguuri, T., James, D. (2004). Gravity modelling of Bushveld Complex connectivity supported by Southern African Seismic Experiment results, S Afr J Geol, 107, 207-218.
NASA Astrophysics Data System (ADS)
Harikrishnan, K. P.
2018-02-01
We consider the simplest model in the family of discrete predator-prey system and introduce for the first time an environmental factor in the evolution of the system by periodically modulating the natural death rate of the predator. We show that with the introduction of environmental modulation, the bifurcation structure becomes much more complex with bubble structure and inverse period doubling bifurcation. The model also displays the peculiar phenomenon of coexistence of multiple limit cycles in the domain of attraction for a given parameter value that combine and finally gets transformed into a single strange attractor as the control parameter is increased. To identify the chaotic regime in the parameter plane of the model, we apply the recently proposed scheme based on the correlation dimension analysis. We show that the environmental modulation is more favourable for the stable coexistence of the predator and the prey as the regions of fixed point and limit cycle in the parameter plane increase at the expense of chaotic domain.
Hopkins, Jim
2016-01-01
The main concepts of the free energy (FE) neuroscience developed by Karl Friston and colleagues parallel those of Freud's Project for a Scientific Psychology. In Hobson et al. (2014) these include an innate virtual reality generator that produces the fictive prior beliefs that Freud described as the primary process. This enables Friston's account to encompass a unified treatment—a complexity theory—of the role of virtual reality in both dreaming and mental disorder. In both accounts the brain operates to minimize FE aroused by sensory impingements—including interoceptive impingements that report compliance with biological imperatives—and constructs a representation/model of the causes of impingement that enables this minimization. In Friston's account (variational) FE equals complexity minus accuracy, and is minimized by increasing accuracy and decreasing complexity. Roughly the brain (or model) increases accuracy together with complexity in waking. This is mediated by consciousness-creating active inference—by which it explains sensory impingements in terms of perceptual experiences of their causes. In sleep it reduces complexity by processes that include both synaptic pruning and consciousness/virtual reality/dreaming in REM. The consciousness-creating active inference that effects complexity-reduction in REM dreaming must operate on FE-arousing data distinct from sensory impingement. The most relevant source is remembered arousals of emotion, both recent and remote, as processed in SWS and REM on “active systems” accounts of memory consolidation/reconsolidation. Freud describes these remembered arousals as condensed in the dreamwork for use in the conscious contents of dreams, and similar condensation can be seen in symptoms. Complexity partly reflects emotional conflict and trauma. This indicates that dreams and symptoms are both produced to reduce complexity in the form of potentially adverse (traumatic or conflicting) arousals of amygdala-related emotions. Mental disorder is thus caused by computational complexity together with mechanisms like synaptic pruning that have evolved for complexity-reduction; and important features of disorder can be understood in these terms. Details of the consilience among Freudian, systems consolidation, and complexity-reduction accounts appear clearly in the analysis of a single fragment of a dream, indicating also how complexity reduction proceeds by a process resembling Bayesian model selection. PMID:27471478
Harvest: a web-based biomedical data discovery and reporting application development platform.
Italia, Michael J; Pennington, Jeffrey W; Ruth, Byron; Wrazien, Stacey; Loutrel, Jennifer G; Crenshaw, E Bryan; Miller, Jeffrey; White, Peter S
2013-01-01
Biomedical researchers share a common challenge of making complex data understandable and accessible. This need is increasingly acute as investigators seek opportunities for discovery amidst an exponential growth in the volume and complexity of laboratory and clinical data. To address this need, we developed Harvest, an open source framework that provides a set of modular components to aid the rapid development and deployment of custom data discovery software applications. Harvest incorporates visual representations of multidimensional data types in an intuitive, web-based interface that promotes a real-time, iterative approach to exploring complex clinical and experimental data. The Harvest architecture capitalizes on standards-based, open source technologies to address multiple functional needs critical to a research and development environment, including domain-specific data modeling, abstraction of complex data models, and a customizable web client.
NASA Astrophysics Data System (ADS)
Araujo, Marcia Valeria Gaspar de; Macedo, Osmir F. L.; Nascimento, Cristiane da Cunha; Conegero, Leila Souza; Barreto, Ledjane Silva; Almeida, Luis Eduardo; Costa, Nivan Bezerra da; Gimenez, Iara F.
2009-02-01
An inclusion complex between the dihydrofolate reductase inhibitor pyrimethamine (PYR) and α-cyclodextrin (α-CD) was prepared and characterized. From the phase-solubility diagram, a linear increase of PYR solubility was verified as a function of α-CD concentration, suggesting the formation of a soluble complex. A 1:1 host-guest stoichiometry can be proposed according to the Job's plot, obtained from the difference of PYR fluorescence intensity in the presence and absence of α-CD. Differential scanning calorimetry (DSC) measurements provided additional evidences of complexation such as the absence of the endothermic peak assigned to the melting of the drug. The inclusion mode characterized by two-dimensional 1H NMR spectroscopy (ROESY) involves penetration of the p-chlorophenyl ring into the α-CD cavity, in agreement to the orientation optimized by molecular modeling methods.
NASA Astrophysics Data System (ADS)
Maes, Julien; Geiger, Sebastian
2018-01-01
Laboratory experiments have shown that oil production from sandstone and carbonate reservoirs by waterflooding could be significantly increased by manipulating the composition of the injected water (e.g. by lowering the ionic strength). Recent studies suggest that a change of wettability induced by a change in surface charge is likely to be one of the driving mechanism of the so-called low-salinity effect. In this case, the potential increase of oil recovery during waterflooding at low ionic strength would be strongly impacted by the inter-relations between flow, transport and chemical reaction at the pore-scale. Hence, a new numerical model that includes two-phase flow, solute reactive transport and wettability alteration is implemented based on the Direct Numerical Simulation of the Navier-Stokes equations and surface complexation modelling. Our model is first used to match experimental results of oil droplet detachment from clay patches. We then study the effect of wettability change on the pore-scale displacement for simple 2D calcite micro-models and evaluate the impact of several parameters such as water composition and injected velocity. Finally, we repeat the simulation experiments on a larger and more complex pore geometry representing a carbonate rock. Our simulations highlight two different effects of low-salinity on oil production from carbonate rocks: a smaller number of oil clusters left in the pores after invasion, and a greater number of pores invaded.
Gerothanassis, I. P.; Momenteau, M.; Barrie, P. J.; Kalodimos, C. G.; Hawkes, G. E.
1996-04-24
13C cross-polarization magic-angle-spinning (CP/MAS) NMR spectra of several carbonmonoxide (93-99% (13)C enriched) hemoprotein models with 1,2-dimethylimidazole (1,2-diMeIm) and 1-methylimidazole (1-MeIm) as axial ligands are reported. This enables the (13)CO spinning sideband manifold to be measured and hence the principal components of the (13)CO chemical shift tensor to be obtained. Negative polar interactions in the binding pocket of the cap porphyrin model and inhibition of Fe-->CO back-donation result in a reduction in shielding anisotropy; on the contrary, positive distal polar interactions result in an increase in the shielding anisotropy and asymmetry parameter in some models. It appears that the axial hindered base 1,2-dimethylimidazole has little direct effect on the local geometry at the CO site, despite higher rates of CO desorption being observed for such complexes. This suggests that the mechanism by which steric interactions are released for the 1,2-diMeIm complexes compared to 1-MeIm complexes does not involve a significant increase in bending of the Fe-C-O unit. The asymmetry of the shielding tensor of all the heme model compounds studied is smaller than that found for horse myoglobin and rabbit hemoglobin.
Hosoda, Kazufumi; Tsuda, Soichiro; Kadowaki, Kohmei; Nakamura, Yutaka; Nakano, Tadashi; Ishii, Kojiro
2016-02-01
Understanding ecosystem dynamics is crucial as contemporary human societies face ecosystem degradation. One of the challenges that needs to be recognized is the complex hierarchical dynamics. Conventional dynamic models in ecology often represent only the population level and have yet to include the dynamics of the sub-organism level, which makes an ecosystem a complex adaptive system that shows characteristic behaviors such as resilience and regime shifts. The neglect of the sub-organism level in the conventional dynamic models would be because integrating multiple hierarchical levels makes the models unnecessarily complex unless supporting experimental data are present. Now that large amounts of molecular and ecological data are increasingly accessible in microbial experimental ecosystems, it is worthwhile to tackle the questions of their complex hierarchical dynamics. Here, we propose an approach that combines microbial experimental ecosystems and a hierarchical dynamic model named population-reaction model. We present a simple microbial experimental ecosystem as an example and show how the system can be analyzed by a population-reaction model. We also show that population-reaction models can be applied to various ecological concepts, such as predator-prey interactions, climate change, evolution, and stability of diversity. Our approach will reveal a path to the general understanding of various ecosystems and organisms. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
Elementary Teachers' Selection and Use of Visual Models
NASA Astrophysics Data System (ADS)
Lee, Tammy D.; Gail Jones, M.
2018-02-01
As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.
Human performance cognitive-behavioral modeling: a benefit for occupational safety.
Gore, Brian F
2002-01-01
Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.
Fishing anti(lymph)angiogenic drugs with zebrafish.
García-Caballero, Melissa; Quesada, Ana R; Medina, Miguel A; Marí-Beffa, Manuel
2018-02-01
Zebrafish, an amenable small teleost fish with a complex mammal-like circulatory system, is being increasingly used for drug screening and toxicity studies. It combines the biological complexity of in vivo models with a higher-throughput screening capability compared with other available animal models. Externally growing, transparent embryos, displaying well-defined blood and lymphatic vessels, allow the inexpensive, rapid, and automatable evaluation of drug candidates that are able to inhibit neovascularisation. Here, we briefly review zebrafish as a model for the screening of anti(lymph)angiogenic drugs, with emphasis on the advantages and limitations of the different zebrafish-based in vivo assays. Copyright © 2017 Elsevier Ltd. All rights reserved.
Human performance cognitive-behavioral modeling: a benefit for occupational safety
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2002-01-01
Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models, however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a workflow of fault modeling, which can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale or approximately parallel with the sections, we propose the fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, using the fault cutting algorithm can supplement the available fault points on the location where faults cut each other. Increasing fault points in poor sample areas can not only efficiently construct fault models, but also reduce manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures no matter whether the available geological data are sufficient or not. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
Wang, Danny J J; Jann, Kay; Fan, Chang; Qiao, Yang; Zang, Yu-Feng; Lu, Hanbing; Yang, Yihong
2018-01-01
Recently, non-linear statistical measures such as multi-scale entropy (MSE) have been introduced as indices of the complexity of electrophysiology and fMRI time-series across multiple time scales. In this work, we investigated the neurophysiological underpinnings of complexity (MSE) of electrophysiology and fMRI signals and their relations to functional connectivity (FC). MSE and FC analyses were performed on simulated data using neural mass model based brain network model with the Brain Dynamics Toolbox, on animal models with concurrent recording of fMRI and electrophysiology in conjunction with pharmacological manipulations, and on resting-state fMRI data from the Human Connectome Project. Our results show that the complexity of regional electrophysiology and fMRI signals is positively correlated with network FC. The associations between MSE and FC are dependent on the temporal scales or frequencies, with higher associations between MSE and FC at lower temporal frequencies. Our results from theoretical modeling, animal experiment and human fMRI indicate that (1) Regional neural complexity and network FC may be two related aspects of brain's information processing: the more complex regional neural activity, the higher FC this region has with other brain regions; (2) MSE at high and low frequencies may represent local and distributed information processing across brain regions. Based on literature and our data, we propose that the complexity of regional neural signals may serve as an index of the brain's capacity of information processing-increased complexity may indicate greater transition or exploration between different states of brain networks, thereby a greater propensity for information processing.
DNA compaction by poly (amido amine) dendrimers of ammonia cored and ethylene diamine cored
NASA Astrophysics Data System (ADS)
Qamhieh, K.; Al-Shawwa, J.
2017-06-01
The complexes build-up of DNA and soft particles poly amidoamine (PAMAM) dendrimers of ammonia cored of generations (G1-G6) and ethylenediamine cored of generations (G1-G10) have been studied, using a new theoretical model developed by Qamhieh and coworkers. The model describes the interaction between linear polyelectrolyte (LPE) chain and ion-penetrable spheres. Many factors affecting LPE/dendrimer complex have been investigated such as dendrimer generation, the Bjerrum length, salt concentration, and rigidity of the LPE chain represented by the persistence length. It is found that the wrapping chain length around dendrimer increases by increasing dendrimer`s generation, Bjerrum length, and salt concentration, while decreases by increasing the persistence length of the LPE chain. Also we can conclude that the wrapping length of LPE chain around ethylenediamine cored dendrimers is larger than its length around ammonia cored dendrimers.
Vanysacker, L.; Denis, C.; Declerck, P.; Piasecka, A.; Vankelecom, I. F. J.
2013-01-01
Since many years, membrane biofouling has been described as the Achilles heel of membrane fouling. In the present study, an ecological assay was performed using model systems with increasing complexity: a monospecies assay using Pseudomonas aeruginosa or Escherichia coli separately, a duospecies assay using both microorganisms, and a multispecies assay using activated sludge with or without spiked P. aeruginosa. The microbial adhesion and biofilm formation were evaluated in terms of bacterial cell densities, species richness, and bacterial community composition on polyvinyldifluoride, polyethylene, and polysulfone membranes. The data show that biofouling formation was strongly influenced by the kind of microorganism, the interactions between the organisms, and the changes in environmental conditions whereas the membrane effect was less important. The findings obtained in this study suggest that more knowledge in species composition and microbial interactions is needed in order to understand the complex biofouling process. This is the first report describing the microbial interactions with a membrane during the biofouling development. PMID:23986906
Lucas, Anne D; Nagaraja, Srinidhi; Gordon, Edward A; Hitchins, Victoria M
2015-01-01
Reusable medical devices need to be cleaned prior to disinfection or sterilization and subsequent use to prevent infections. The cleanability of medical devices depends in part on the design of the device. This study examined how models of orthopedic medical devices of increasing complexity retain calcium phosphate bone cement, a relevant test soil for these devices. The dye Alizarin Red S and micro-computed tomography (μCT) were used to assess the amount and location of bone cement debris in a series of model orthopedic devices. Testing was performed after soiling and cleaning once, and soiling and cleaning 10 times. The color change of the dye after reacting with the bone cement was useful for indicating the presence of bone cement in these models. High-resolution μCT analysis provided the volume and location of the bone cement. Models that were more complex retained significantly more bone debris than simpler designs. Model devices repeatedly soiled and cleaned 10 times retained significantly more bone debris than those soiled and cleaned once. Significantly more bone cement was retained in the more complex lumen structures. This information may be useful in designing reusable orthopedic devices, and other complex medical devices with lumens.
Arai, Mamiko; Brandt, Vicky; Dabaghian, Yuri
2014-01-01
Learning arises through the activity of large ensembles of cells, yet most of the data neuroscientists accumulate is at the level of individual neurons; we need models that can bridge this gap. We have taken spatial learning as our starting point, computationally modeling the activity of place cells using methods derived from algebraic topology, especially persistent homology. We previously showed that ensembles of hundreds of place cells could accurately encode topological information about different environments (“learn” the space) within certain values of place cell firing rate, place field size, and cell population; we called this parameter space the learning region. Here we advance the model both technically and conceptually. To make the model more physiological, we explored the effects of theta precession on spatial learning in our virtual ensembles. Theta precession, which is believed to influence learning and memory, did in fact enhance learning in our model, increasing both speed and the size of the learning region. Interestingly, theta precession also increased the number of spurious loops during simplicial complex formation. We next explored how downstream readout neurons might define co-firing by grouping together cells within different windows of time and thereby capturing different degrees of temporal overlap between spike trains. Our model's optimum coactivity window correlates well with experimental data, ranging from ∼150–200 msec. We further studied the relationship between learning time, window width, and theta precession. Our results validate our topological model for spatial learning and open new avenues for connecting data at the level of individual neurons to behavioral outcomes at the neuronal ensemble level. Finally, we analyzed the dynamics of simplicial complex formation and loop transience to propose that the simplicial complex provides a useful working description of the spatial learning process. PMID:24945927
NASA Astrophysics Data System (ADS)
Ravazzani, G.; Montaldo, N.; Mancini, M.; Rosso, R.
2003-04-01
Event-based hydrologic models need the antecedent soil moisture condition, as critical boundary initial condition for flood simulation. Land-surface models (LSMs) have been developed to simulate mass and energy transfers, and to update the soil moisture condition through time from the solution of water and energy balance equations. They are recently used in distributed hydrologic modeling for flood prediction systems. Recent developments have made LSMs more complex by inclusion of more processes and controlling variables, increasing parameter number and uncertainty of their estimates. This also led to increasing of computational burden and parameterization of the distributed hydrologic models. In this study we investigate: 1) the role of soil moisture initial conditions in the modeling of Alpine basin floods; 2) the adequate complexity level of LSMs for the distributed hydrologic modeling of Alpine basin floods. The Toce basin is the case study; it is located in the North Piedmont (Italian Alps), and it has a total drainage area of 1534 km2 at Candoglia section. Three distributed hydrologic models of different level of complexity are developed and compared: two (TDLSM and SDLSM) are continuous models, one (FEST02) is an event model based on the simplified SCS-CN method for rainfall abstractions. In the TDLSM model a two-layer LSM computes both saturation and infiltration excess runoff, and simulates the evolution of the water table spatial distribution using the topographic index; in the SDLSM model a simplified one-layer distributed LSM only computes hortonian runoff, and doesn’t simulate the water table dynamic. All the three hydrologic models simulate the surface runoff propagation through the Muskingum-Cunge method. TDLSM and SDLSM models have been applied for the two-year (1996 and 1997) simulation period, during which two major floods occurred in the November 1996 and in the June 1997. The models have been calibrated and tested comparing simulated and observed hydrographs at Candoglia. Sensitivity analysis of the models to significant LSM parameters were also performed. The performances of the three models in the simulation of the two major floods are compared. Interestingly, the results indicate that the SDLSM model is able to sufficiently well predict the major floods of this Alpine basin; indeed, this model is a good compromise between the over-parameterized and too complex TDLSM model and the over-simplified FEST02 model.
JAMS - a software platform for modular hydrological modelling
NASA Astrophysics Data System (ADS)
Kralisch, Sven; Fischer, Christian
2015-04-01
Current challenges of understanding and assessing the impacts of climate and land use changes on environmental systems demand for an ever-increasing integration of data and process knowledge in corresponding simulation models. Software frameworks that allow for a seamless creation of integrated models based on less complex components (domain models, process simulation routines) have therefore gained increasing attention during the last decade. JAMS is an Open-Source software framework that has been especially designed to cope with the challenges of eco-hydrological modelling. This is reflected by (i) its flexible approach for representing time and space, (ii) a strong separation of process simulation components from the declarative description of more complex models using domain specific XML, (iii) powerful analysis and visualization functions for spatial and temporal input and output data, and (iv) parameter optimization and uncertainty analysis functions commonly used in environmental modelling. Based on JAMS, different hydrological and nutrient-transport simulation models were implemented and successfully applied during the last years. We will present the JAMS core concepts and give an overview of models, simulation components and support tools available for that framework. Sample applications will be used to underline the advantages of component-based model designs and to show how JAMS can be used to address the challenges of integrated hydrological modelling.
Unmet mental health care needs in U.S. children with medical complexity, 2005-2010.
An, Ruopeng
2016-03-01
Children with special healthcare needs (CSHCN) are those who have or are at elevated risk for a chronic physical, developmental, behavioral or emotional condition and need healthcare services of a type or quantity beyond that required by children generally. Within CSHCN, a small group of children with medical complexity have medical vulnerability and intensive care needs that are not easily met by existing healthcare models. This study estimated the national prevalence of unmet mental healthcare needs among CSHCN with and without medical complexity. Secondary data analysis (N=80,965) based on the National Survey of CSHCN 2005-2006 and 2009-2010 waves. During 2005-2010, 7.66% of CSHCN in the U.S. were with medical complexity. The prevalence of unmet needs for mental healthcare services among CSHCN increased from 3.71% in 2005-2006 to 5.62% in 2009-2010. In 2005-2006 the prevalence of unmet mental healthcare needs among children with medical complexity was 9.92%, tripling the prevalence among CSHCN without medical complexity of 3.10%. The prevalence of unmet mental healthcare needs among children with medical complexity further increased to 13.71% in 2009-2010, whereas that among CSHCN without medical complexity increased to 5.07%. Among CSHCN with medical complexity, older children and children living in poorer households were more likely to have an unmet need for mental healthcare services. Substantial disparities in access to mental healthcare services between CSHCN with and without medical complexity were present, and the prevalence of unmet mental healthcare needs among both groups had noticeably increased during 2005-2010. Copyright © 2016 Elsevier Inc. All rights reserved.
Lunar crater volumes - Interpretation by models of impact cratering and upper crustal structure
NASA Technical Reports Server (NTRS)
Croft, S. K.
1978-01-01
Lunar crater volumes can be divided by size into two general classes with distinctly different functional dependence on diameter. Craters smaller than approximately 12 km in diameter are morphologically simple and increase in volume as the cube of the diameter, while craters larger than about 20 km are complex and increase in volume at a significantly lower rate implying shallowing. Ejecta and interior volumes are not identical and their ratio, Schroeters Ratio (SR), increases from about 0.5 for simple craters to about 1.5 for complex craters. The excess of ejecta volume causing the increase, can be accounted for by a discontinuity in lunar crust porosity at 1.5-2 km depth. The diameter range of significant increase in SR corresponds with the diameter range of transition from simple to complex crater morphology. This observation, combined with theoretical rebound calculation, indicates control of the transition diameter by the porosity structure of the upper crust.
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-12-09
In crop breeding, the interest of predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed-model using moving-means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models.
Zhao, Lei; Gossmann, Toni I; Waxman, David
2016-03-21
The Wright-Fisher model is an important model in evolutionary biology and population genetics. It has been applied in numerous analyses of finite populations with discrete generations. It is recognised that real populations can behave, in some key aspects, as though their size that is not the census size, N, but rather a smaller size, namely the effective population size, Ne. However, in the Wright-Fisher model, there is no distinction between the effective and census population sizes. Equivalently, we can say that in this model, Ne coincides with N. The Wright-Fisher model therefore lacks an important aspect of biological realism. Here, we present a method that allows Ne to be directly incorporated into the Wright-Fisher model. The modified model involves matrices whose size is determined by Ne. Thus apart from increased biological realism, the modified model also has reduced computational complexity, particularly so when Ne⪡N. For complex problems, it may be hard or impossible to numerically analyse the most commonly-used approximation of the Wright-Fisher model that incorporates Ne, namely the diffusion approximation. An alternative approach is simulation. However, the simulations need to be sufficiently detailed that they yield an effective size that is different to the census size. Simulations may also be time consuming and have attendant statistical errors. The method presented in this work may then be the only alternative to simulations, when Ne differs from N. We illustrate the straightforward application of the method to some problems involving allele fixation and the determination of the equilibrium site frequency spectrum. We then apply the method to the problem of fixation when three alleles are segregating in a population. This latter problem is significantly more complex than a two allele problem and since the diffusion equation cannot be numerically solved, the only other way Ne can be incorporated into the analysis is by simulation. We have achieved good accuracy in all cases considered. In summary, the present work extends the realism and tractability of an important model of evolutionary biology and population genetics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Weinberger, Emma M.; Regehr, Keil J.; Berry, Scott M.; Beebe, David J.; Alarid, Elaine T.
2016-01-01
Heterotypic interactions in cancer microenvironments play important roles in disease initiation, progression, and spread. Co-culture is the predominant approach used in dissecting paracrine interactions between tumor and stromal cells, but functional results from simple co-cultures frequently fail to correlate to in vivo conditions. Though complex heterotypic in vitro models have improved functional relevance, there is little systematic knowledge of how multi-culture parameters influence this recapitulation. We therefore have employed a more iterative approach to investigate the influence of increasing model complexity; increased heterotypic complexity specifically. Here we describe how the compartmentalized and microscale elements of our multi-culture device allowed us to obtain gene expression data from one cell type at a time in a heterotypic culture where cells communicated through paracrine interactions. With our device we generated a large dataset comprised of cell type specific gene-expression patterns for cultures of increasing complexity (three cell types in mono-, co-, or tri-culture) not readily accessible in other systems. Principal component analysis indicated that gene expression was changed in co-culture but was often more strongly altered in tri-culture as compared to mono-culture. Our analysis revealed that cell type identity and the complexity around it (mono-, co-, or tri-culture) influence gene regulation. We also observed evidence of complementary regulation between cell types in the same heterotypic culture. Here we demonstrate the utility of our platform in providing insight into how tumor and stromal cells respond to microenvironments of varying complexities highlighting the expanding importance of heterotypic cultures that go beyond conventional co-culture. PMID:27432323
de Hoogt, Ronald; Estrada, Marta F; Vidic, Suzana; Davies, Emma J; Osswald, Annika; Barbier, Michael; Santo, Vítor E; Gjerde, Kjersti; van Zoggel, Hanneke J A A; Blom, Sami; Dong, Meng; Närhi, Katja; Boghaert, Erwin; Brito, Catarina; Chong, Yolanda; Sommergruber, Wolfgang; van der Kuip, Heiko; van Weerden, Wytske M; Verschuren, Emmy W; Hickman, John; Graeser, Ralph
2017-11-21
Two-dimensional (2D) culture of cancer cells in vitro does not recapitulate the three-dimensional (3D) architecture, heterogeneity and complexity of human tumors. More representative models are required that better reflect key aspects of tumor biology. These are essential studies of cancer biology and immunology as well as for target validation and drug discovery. The Innovative Medicines Initiative (IMI) consortium PREDECT (www.predect.eu) characterized in vitro models of three solid tumor types with the goal to capture elements of tumor complexity and heterogeneity. 2D culture and 3D mono- and stromal co-cultures of increasing complexity, and precision-cut tumor slice models were established. Robust protocols for the generation of these platforms are described. Tissue microarrays were prepared from all the models, permitting immunohistochemical analysis of individual cells, capturing heterogeneity. 3D cultures were also characterized using image analysis. Detailed step-by-step protocols, exemplary datasets from the 2D, 3D, and slice models, and refined analytical methods were established and are presented.
de Hoogt, Ronald; Estrada, Marta F.; Vidic, Suzana; Davies, Emma J.; Osswald, Annika; Barbier, Michael; Santo, Vítor E.; Gjerde, Kjersti; van Zoggel, Hanneke J. A. A.; Blom, Sami; Dong, Meng; Närhi, Katja; Boghaert, Erwin; Brito, Catarina; Chong, Yolanda; Sommergruber, Wolfgang; van der Kuip, Heiko; van Weerden, Wytske M.; Verschuren, Emmy W.; Hickman, John; Graeser, Ralph
2017-01-01
Two-dimensional (2D) culture of cancer cells in vitro does not recapitulate the three-dimensional (3D) architecture, heterogeneity and complexity of human tumors. More representative models are required that better reflect key aspects of tumor biology. These are essential studies of cancer biology and immunology as well as for target validation and drug discovery. The Innovative Medicines Initiative (IMI) consortium PREDECT (www.predect.eu) characterized in vitro models of three solid tumor types with the goal to capture elements of tumor complexity and heterogeneity. 2D culture and 3D mono- and stromal co-cultures of increasing complexity, and precision-cut tumor slice models were established. Robust protocols for the generation of these platforms are described. Tissue microarrays were prepared from all the models, permitting immunohistochemical analysis of individual cells, capturing heterogeneity. 3D cultures were also characterized using image analysis. Detailed step-by-step protocols, exemplary datasets from the 2D, 3D, and slice models, and refined analytical methods were established and are presented. PMID:29160867
Brain signal complexity rises with repetition suppression in visual learning.
Lafontaine, Marc Philippe; Lacourse, Karine; Lina, Jean-Marc; McIntosh, Anthony R; Gosselin, Frédéric; Théoret, Hugo; Lippé, Sarah
2016-06-21
Neuronal activity associated with visual processing of an unfamiliar face gradually diminishes when it is viewed repeatedly. This process, known as repetition suppression (RS), is involved in the acquisition of familiarity. Current models suggest that RS results from interactions between visual information processing areas located in the occipito-temporal cortex and higher order areas, such as the dorsolateral prefrontal cortex (DLPFC). Brain signal complexity, which reflects information dynamics of cortical networks, has been shown to increase as unfamiliar faces become familiar. However, the complementarity of RS and increases in brain signal complexity have yet to be demonstrated within the same measurements. We hypothesized that RS and brain signal complexity increase occur simultaneously during learning of unfamiliar faces. Further, we expected alteration of DLPFC function by transcranial direct current stimulation (tDCS) to modulate RS and brain signal complexity over the occipito-temporal cortex. Participants underwent three tDCS conditions in random order: right anodal/left cathodal, right cathodal/left anodal and sham. Following tDCS, participants learned unfamiliar faces, while an electroencephalogram (EEG) was recorded. Results revealed RS over occipito-temporal electrode sites during learning, reflected by a decrease in signal energy, a measure of amplitude. Simultaneously, as signal energy decreased, brain signal complexity, as estimated with multiscale entropy (MSE), increased. In addition, prefrontal tDCS modulated brain signal complexity over the right occipito-temporal cortex during the first presentation of faces. These results suggest that although RS may reflect a brain mechanism essential to learning, complementary processes reflected by increases in brain signal complexity, may be instrumental in the acquisition of novel visual information. Such processes likely involve long-range coordinated activity between prefrontal and lower order visual areas. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
An Integrated Software Package to Enable Predictive Simulation Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang
The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance-computing (HPC) techniques, but a smooth, well-integrated interplay between HPC applications. This paper presents a new integrated software package that integrates HPC applications and a web-based visualization tool based on a middleware framework. This framework can support the data communication between different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package,more » as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.« less
ERIC Educational Resources Information Center
Cheung, Mike W. L.; Chan, Wai
2009-01-01
Structural equation modeling (SEM) is widely used as a statistical framework to test complex models in behavioral and social sciences. When the number of publications increases, there is a need to systematically synthesize them. Methodology of synthesizing findings in the context of SEM is known as meta-analytic SEM (MASEM). Although correlation…
ERIC Educational Resources Information Center
Nowinski, Wieslaw L.; Thirunavuukarasuu, Arumugam; Volkau, Ihar; Marchenko, Yevgen; Aminah, Bivi; Gelas, Arnaud; Huang, Su; Lee, Looi Chow; Liu, Jimin; Ng, Ting Ting; Nowinska, Natalia G.; Qian, Guoyu Yu; Puspitasari, Fiftarina; Runge, Val M.
2009-01-01
The increasing complexity of human body models enabled by advances in diagnostic imaging, computing, and growing knowledge calls for the development of a new generation of systems for intelligent exploration of these models. Here, we introduce a novel paradigm for the exploration of digital body models illustrating cerebral vasculature. It enables…
Thomas W. Bonnot; Frank R. Thompson; Joshua J. Millspaugh
2017-01-01
The increasing need to predict how climate change will impact wildlife species has exposed limitations in how well current approaches model important biological processes at scales at which those processes interact with climate. We used a comprehensive approach that combined recent advances in landscape and population modeling into dynamic-landscape metapopulation...
How Long is my Toilet Roll?--A Simple Exercise in Mathematical Modelling
ERIC Educational Resources Information Center
Johnston, Peter R.
2013-01-01
The simple question of how much paper is left on my toilet roll is studied from a mathematical modelling perspective. As is typical with applied mathematics, models of increasing complexity are introduced and solved. Solutions produced at each step are compared with the solution from the previous step. This process exposes students to the typical…
NASA Astrophysics Data System (ADS)
Mulcahy, J. P.; Walters, D. N.; Bellouin, N.; Milton, S. F.
2014-05-01
The inclusion of the direct and indirect radiative effects of aerosols in high-resolution global numerical weather prediction (NWP) models is being increasingly recognised as important for the improved accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations are evaluated including three-dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme and finally, initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important in cases of large sporadic aerosol events such as large dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing long-wave radiation over West Africa due to a better representation of dust. However, uncertainties in dust optical properties propagate to its direct effect and the subsequent model response. Inclusion of the indirect aerosol effects improves surface radiation biases at the North Slope of Alaska ARM site due to lower cloud amounts in high-latitude clean-air regions. This leads to improved temperature and height forecasts in this region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short-range forecasts. However, the indirect aerosol effect leads to a strengthening of the low-level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. Regional impacts on the African Easterly Jet (AEJ) are also presented with the large dust loading in the aerosol climatology enhancing of the heat low over West Africa and weakening the AEJ. This study highlights the importance of including a more realistic treatment of aerosol-cloud interactions in global NWP models and the potential for improved global environmental prediction systems through the incorporation of more complex aerosol schemes.
Impacts of increasing the aerosol complexity in the Met Office global NWP model
NASA Astrophysics Data System (ADS)
Mulcahy, J. P.; Walters, D. N.; Bellouin, N.; Milton, S. F.
2013-11-01
Inclusion of the direct and indirect radiative effects of aerosols in high resolution global numerical weather prediction (NWP) models is being increasingly recognised as important for the improved accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations are evaluated including three dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme and finally, initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important in cases of large sporadic aerosol events such as large dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing longwave radiation over West Africa due to a better representation of dust. However, uncertainties in dust optical properties propogate to its direct effect and the subsequent model response. Inclusion of the indirect aerosol effects improves surface radiation biases at the North Slope of Alaska ARM site due to lower cloud amounts in high latitude clean air regions. This leads to improved temperature and height forecasts in this region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short range forecasts. However, the indirect aerosol effect leads to a strengthening of the low level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. Regional impacts on the African Easterly Jet (AEJ) are also presented with the large dust loading in the aerosol climatology enhancing of the heat low over West Africa and weakening the AEJ. This study highlights the importance of including a~more realistic treatment of aerosol-cloud interactions in global NWP models and the potential for improved global environmental prediction systems through the incorporation of more complex aerosol schemes.
Marianski, Mateusz; Dannenberg, J J
2012-02-02
We present density functional theory (DFT) calculations at the X3LYP/D95(d,p) level on the solvation of polyalanine α-helices in water. The study includes the effects of discrete water molecules and the CPCM and AMSOL SM5.2 solvent continuum model both separately and in combination. We find that individual water molecules cooperatively hydrogen-bond to both the C- and N-termini of the helix, which results in increases in the dipole moment of the helix/water complex to more than the vector sum of their individual dipole moments. These waters are found to be more stable than in bulk solvent. On the other hand, individual water molecules that interact with the backbone lower the dipole moment of the helix/water complex to below that of the helix itself. Small clusters of waters at the termini increase the dipole moments of the helix/water aggregates, but the effect diminishes as more waters are added. We discuss the somewhat complex behavior of the helix with the discrete waters in the continuum models.
Marianski, Mateusz
2012-01-01
We present density functional theory (DFT) calculations at the X3LYP/D95(d,p) level on the solvation of polyalanine α-helices in water. The study includes the effects of discrete water molecules and the CPCM and AMSOL SM5.2 solvent continuum model both separately and in combination. We find that individual water molecules cooperatively hydrogen-bond to both the C- and N-termini of the helix, which results in increases in the dipole moment of the helix/water complex to more than the vector sum of their individual dipole moments. These waters are found to be more stable than in bulk solvent. On the other hand, individual water that interact with the backbone lower the dipole moment of the helix/water complex to below that of the helix, itself. Small clusters of waters at the termini increase the dipole moments of the helix/water aggregates, but the effect diminishes as more waters are added. We discuss the somewhat complex behavior of the helix with the discrete waters in the continuum models. PMID:22201227
Kaufman, Michelle R; Cornish, Flora; Zimmerman, Rick S; Johnson, Blair T
2014-08-15
Despite increasing recent emphasis on the social and structural determinants of HIV-related behavior, empirical research and interventions lag behind, partly because of the complexity of social-structural approaches. This article provides a comprehensive and practical review of the diverse literature on multi-level approaches to HIV-related behavior change in the interest of contributing to the ongoing shift to more holistic theory, research, and practice. It has the following specific aims: (1) to provide a comprehensive list of relevant variables/factors related to behavior change at all points on the individual-structural spectrum, (2) to map out and compare the characteristics of important recent multi-level models, (3) to reflect on the challenges of operating with such complex theoretical tools, and (4) to identify next steps and make actionable recommendations. Using a multi-level approach implies incorporating increasing numbers of variables and increasingly context-specific mechanisms, overall producing greater intricacies. We conclude with recommendations on how best to respond to this complexity, which include: using formative research and interdisciplinary collaboration to select the most appropriate levels and variables in a given context; measuring social and institutional variables at the appropriate level to ensure meaningful assessments of multiple levels are made; and conceptualizing intervention and research with reference to theoretical models and mechanisms to facilitate transferability, sustainability, and scalability.
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
NASA Astrophysics Data System (ADS)
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis, (2) Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) Noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
Johnson, B. Thomas
1989-01-01
Traditional single species toxicity tests and multiple component laboratory-scaled microcosm assays were combined to assess the toxicological hazard of diesel oil, a model complex mixture, to a model aquatic environment. The immediate impact of diesel oil dosed on a freshwater community was studied in a model pond microcosm over 14 days: a 7-day dosage and a 7-day recovery period. A multicomponent laboratory microcosm was designed to monitor the biological effects of diesel oil (1·0 mg litre−1) on four components: water, sediment (soil + microbiota), plants (aquatic macrophytes and algae), and animals (zooplanktonic and zoobenthic invertebrates). To determine the sensitivity of each part of the community to diesel oil contamination and how this model community recovered when the oil dissipated, limnological, toxicological, and microbiological variables were considered. Our model revealed these significant occurrences during the spill period: first, a community production and respiration perturbation, characterized in the water column by a decrease in dissolved oxygen and redox potential and a concomitant increase in alkalinity and conductivity; second, marked changes in microbiota of sediments that included bacterial heterotrophic dominance and a high heterotrophic index (0·6), increased bacterial productivity, and the marked increases in numbers of saprophytic bacteria (10 x) and bacterial oil degraders (1000 x); and third, column water acutely toxic (100% mortality) to two model taxa: Selenastrum capricornutum and Daphnia magna. Following the simulated clean-up procedure to remove the oil slick, the recovery period of this freshwater microcosm was characterized by a return to control values. This experimental design emphasized monitoring toxicological responses in aquatic microcosm; hence, we proposed the term ‘toxicosm’ to describe this approach to aquatic toxicological hazard evaluation. The toxicosm as a valuable toxicological tool for screening aquatic contaminants was demonstrated using diesel oil as a model complex mixture.
Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter
2017-08-10
A better understanding of the genetic architecture underlying complex traits (e.g., the distribution of causal variants and their effects) may aid in the genomic prediction. Here, we hypothesized that the genomic variants of complex traits might be enriched in a subset of genomic regions defined by genes grouped on the basis of "Gene Ontology" (GO), and that incorporating this independent biological information into genomic prediction models might improve their predictive ability. Four complex traits (i.e., milk, fat and protein yields, and mastitis) together with imputed sequence variants in Holstein (HOL) and Jersey (JER) cattle were analysed. We first carried out a post-GWAS analysis in a HOL training population to assess the degree of enrichment of the association signals in the gene regions defined by each GO term. We then extended the genomic best linear unbiased prediction model (GBLUP) to a genomic feature BLUP (GFBLUP) model, including an additional genomic effect quantifying the joint effect of a group of variants located in a genomic feature. The GBLUP model using a single random effect assumes that all genomic variants contribute to the genomic relationship equally, whereas GFBLUP attributes different weights to the individual genomic relationships in the prediction equation based on the estimated genomic parameters. Our results demonstrate that the immune-relevant GO terms were more associated with mastitis than milk production, and several biologically meaningful GO terms improved the prediction accuracy with GFBLUP for the four traits, as compared with GBLUP. The improvement of the genomic prediction between breeds (the average increase across the four traits was 0.161) was more apparent than that it was within the HOL (the average increase across the four traits was 0.020). Our genomic feature modelling approaches provide a framework to simultaneously explore the genetic architecture and genomic prediction of complex traits by taking advantage of independent biological knowledge.
Development of PBPK Models for Gasoline in Adult and Pregnant Rats and their Fetuses
Concern for potential developmental effects of exposure to gasoline-ethanol blends has grown along with their increased use in the US fuel supply. Physiologically-based pharmacokinetic (PBPK) models for these complex mixtures were developed to address dosimetric issues related to...
MODELLING QUALITY ASSURANCE PLAN FOR THE LAKE MICHIGAN MASS BALANCE PROJECT
With the ever increasing complexity and costs of ecosystem protection and remediation, the USEPA is placing more emphasis on ensuring the quality and credibility of scientific tools, such as models, that are used to help guide decision-makers who are faced with difficult manageme...
Predictive Modeling in Adult Education
ERIC Educational Resources Information Center
Lindner, Charles L.
2011-01-01
The current economic crisis, a growing workforce, the increasing lifespan of workers, and demanding, complex jobs have made organizations highly selective in employee recruitment and retention. It is therefore important, to the adult educator, to develop models of learning that better prepare adult learners for the workplace. The purpose of…
Experimental Validation of Various Temperature Modells for Semi-Physical Tyre Model Approaches
NASA Astrophysics Data System (ADS)
Hackl, Andreas; Scherndl, Christoph; Hirschberg, Wolfgang; Lex, Cornelia
2017-10-01
With increasing level of complexity and automation in the area of automotive engineering, the simulation of safety relevant Advanced Driver Assistance Systems (ADAS) leads to increasing accuracy demands in the description of tyre contact forces. In recent years, with improvement in tyre simulation, the needs for coping with tyre temperatures and the resulting changes in tyre characteristics are rising significantly. Therefore, experimental validation of three different temperature model approaches is carried out, discussed and compared in the scope of this article. To investigate or rather evaluate the range of application of the presented approaches in combination with respect of further implementation in semi-physical tyre models, the main focus lies on the a physical parameterisation. Aside from good modelling accuracy, focus is held on computational time and complexity of the parameterisation process. To evaluate this process and discuss the results, measurements from a Hoosier racing tyre 6.0 / 18.0 10 LCO C2000 from an industrial flat test bench are used. Finally the simulation results are compared with the measurement data.
Hiding the system from the user: Moving from complex mental models to elegant metaphors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; David J. Bruemmer
2007-08-01
In previous work, increased complexity of robot behaviors and the accompanying interface design often led to operator confusion and/or a fight for control between the robot and operator. We believe the reason for the conflict was that the design of the interface and interactions presented too much of the underlying robot design model to the operator. Since the design model includes the implementation of sensors, behaviors, and sophisticated algorithms, the result was that the operator’s cognitive efforts were focused on understanding the design of the robot system as opposed to focusing on the task at hand. This paper illustrates howmore » this very problem emerged at the INL and how the implementation of new metaphors for interaction has allowed us to hide the design model from the user and allow the user to focus more on the task at hand. Supporting the user’s focus on the task rather than on the design model allows increased use of the system and significant performance improvement in a search task with novice users.« less
Nahm, Francis Sahngun; Park, Zee-Yong; Nahm, Sang-Soep; Kim, Yong Chul; Lee, Pyung Bok
2014-01-01
Complex regional pain syndrome (CRPS) is a rare but debilitating pain disorder. Although the exact pathophysiology of CRPS is not fully understood, central and peripheral mechanisms might be involved in the development of this disorder. To reveal the central mechanism of CRPS, we conducted a proteomic analysis of rat cerebrum using the chronic postischemia pain (CPIP) model, a novel experimental model of CRPS. After generating the CPIP animal model, we performed a proteomic analysis of the rat cerebrum using a multidimensional protein identification technology, and screened the proteins differentially expressed between the CPIP and control groups. Results. A total of 155 proteins were differentially expressed between the CPIP and control groups: 125 increased and 30 decreased; expressions of proteins related to cell signaling, synaptic plasticity, regulation of cell proliferation, and cytoskeletal formation were increased in the CPIP group. However, proenkephalin A, cereblon, and neuroserpin were decreased in CPIP group. Altered expression of cerebral proteins in the CPIP model indicates cerebral involvement in the pathogenesis of CRPS. Further study is required to elucidate the roles of these proteins in the development and maintenance of CRPS.
The Role of Air-sea Coupling in the Response of Climate Extremes to Aerosols
NASA Astrophysics Data System (ADS)
Mahajan, S.
2017-12-01
Air-sea interactions dominate the climate of surrounding regions and thus also modulate the climate response to local and remote aerosol forcings. To clearly isolate the role of air-sea coupling in the climate response to aerosols, we conduct experiments with a full complexity atmosphere model that is coupled to a series of ocean models progressively increasing in complexity. The ocean models range from a data ocean model with prescribed SSTs, to a slab ocean model that only allows thermodynamic interactions, to a full dynamic ocean model. In a preliminary study, we have conducted single forcing experiments with black carbon aerosols in an atmosphere GCM coupled to a data ocean model and a slab ocean model. We find that while black carbon aerosols can intensify mean and extreme summer monsoonal precipitation over the Indian sub-continent, air-sea coupling can dramatically modulate this response. Black carbon aerosols in the vicinity of the Arabian Sea result in an increase of sea surface temperatures there in the slab ocean model, which intensify the low-level Somali Jet. The associated increase in moisture transport into Western India enhances the mean as well as extreme precipitation. In prescribed SST experiments, where SSTs are not allowed to respond BC aerosols, the response is muted. We will present results from a hierarchy of GCM simulations that investigate the role of air-sea coupling in the climate response to aerosols in more detail.
Formation Mechanism of Oxide-Sulfide Complex Inclusions in High-Sulfur-Containing Steel Melts
NASA Astrophysics Data System (ADS)
Shin, Jae Hong; Park, Joo Hyun
2018-02-01
The [S] content in resulfurized steel is controlled in the range of 200 to 800 ppm to ensure good machinability and workability. It is well known that "MgAl2O4(spinel)+CaS" complex inclusions are formed in molten steel during the ladle refining process, and these cause nozzle clogging during continuous casting. Thus, in the present study, the "Refractory-Slag-Metal-Inclusions (ReSMI)" multiphase reaction model was employed in conjunction with experiments to investigate the influence of slag composition and [S] content in the steel on the formation of oxide-sulfide complex inclusions. The critical [S] and [Al] contents necessary for the precipitation of CaS in the CaO-Al2O3-MgO-SiO2 (CAMS) oxide inclusions were predicted from the composition of the liquid inclusions, as observed by scanning electron microscopy-electron dispersive spectrometry (SEM-EDS) and calculated using the ReSMI multiphase reaction model. The critical [S] content increases with increasing content of SiO2 in the slag at a given [Al] content. Formation mechanisms for spinel+CaS and spinel+MnS complex inclusions were also proposed.
Managing Complexity in Next Generation Robotic Spacecraft: From a Software Perspective
NASA Technical Reports Server (NTRS)
Reinholtz, Kirk
2008-01-01
This presentation highlights the challenges in the design of software to support robotic spacecraft. Robotic spacecraft offer a higher degree of autonomy, however currently more capabilities are required, primarily in the software, while providing the same or higher degree of reliability. The complexity of designing such an autonomous system is great, particularly while attempting to address the needs for increased capabilities and high reliability without increased needs for time or money. The efforts to develop programming models for the new hardware and the integration of software architecture are highlighted.
NASA Technical Reports Server (NTRS)
Salas, Manuel D.
2007-01-01
The research program of the aerodynamics, aerothermodynamics and plasmadynamics discipline of NASA's Hypersonic Project is reviewed. Details are provided for each of its three components: 1) development of physics-based models of non-equilibrium chemistry, surface catalytic effects, turbulence, transition and radiation; 2) development of advanced simulation tools to enable increased spatial and time accuracy, increased geometrical complexity, grid adaptation, increased physical-processes complexity, uncertainty quantification and error control; and 3) establishment of experimental databases from ground and flight experiments to develop better understanding of high-speed flows and to provide data to validate and guide the development of simulation tools.
Interactive Visualization of Complex Seismic Data and Models Using Bokeh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Chengping; Ammon, Charles J.; Maceira, Monica
Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a model web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using an open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach has amore » minimum requirement on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.« less
Interaction of mathematical modeling and social and behavioral HIV/AIDS research.
Cassels, Susan; Goodreau, Steven M
2011-03-01
HIV is transmitted within complex biobehavioral systems. Mathematical modeling can provide insight to complex population-level outcomes of various behaviors measured at an individual level. HIV models in the social and behavioral sciences can be categorized in a number of ways; here, we consider two classes of applications common in the field generally, and in the past year in particular: those models that explore significant behavioral determinants of HIV disparities within and between populations; and those models that seek to evaluate the potential impact of specific social and behavioral interventions. We discuss two overarching issues we see in the field: the need to further systematize effectiveness models of behavioral interventions, and the need for increasing investigation of the use of behavioral data in epidemic models. We believe that a recent initiative by the National Institutes of Health will qualitatively change the relationships between epidemic modeling and sociobehavioral prevention research in the coming years.
Interactive Visualization of Complex Seismic Data and Models Using Bokeh
Chai, Chengping; Ammon, Charles J.; Maceira, Monica; ...
2018-02-14
Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a model web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using an open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach has amore » minimum requirement on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.« less
Karnon, Jonathan; Haji Ali Afzali, Hossein
2014-06-01
Modelling in economic evaluation is an unavoidable fact of life. Cohort-based state transition models are most common, though discrete event simulation (DES) is increasingly being used to implement more complex model structures. The benefits of DES relate to the greater flexibility around the implementation and population of complex models, which may provide more accurate or valid estimates of the incremental costs and benefits of alternative health technologies. The costs of DES relate to the time and expertise required to implement and review complex models, when perhaps a simpler model would suffice. The costs are not borne solely by the analyst, but also by reviewers. In particular, modelled economic evaluations are often submitted to support reimbursement decisions for new technologies, for which detailed model reviews are generally undertaken on behalf of the funding body. This paper reports the results from a review of published DES-based economic evaluations. Factors underlying the use of DES were defined, and the characteristics of applied models were considered, to inform options for assessing the potential benefits of DES in relation to each factor. Four broad factors underlying the use of DES were identified: baseline heterogeneity, continuous disease markers, time varying event rates, and the influence of prior events on subsequent event rates. If relevant, individual-level data are available, representation of the four factors is likely to improve model validity, and it is possible to assess the importance of their representation in individual cases. A thorough model performance evaluation is required to overcome the costs of DES from the users' perspective, but few of the reviewed DES models reported such a process. More generally, further direct, empirical comparisons of complex models with simpler models would better inform the benefits of DES to implement more complex models, and the circumstances in which such benefits are most likely.
A.J. Tepley; E.A. Thomann
2012-01-01
Recent increases in computation power have prompted enormous growth in the use of simulation models in ecological research. These models are valued for their ability to account for much of the ecological complexity found in field studies, but this ability usually comes at the cost of losing transparency into how the models work. In order to foster greater understanding...
A Comparison of Four Software Programs for Implementing Decision Analytic Cost-Effectiveness Models.
Hollman, Chase; Paulden, Mike; Pechlivanoglou, Petros; McCabe, Christopher
2017-08-01
The volume and technical complexity of both academic and commercial research using decision analytic modelling has increased rapidly over the last two decades. The range of software programs used for their implementation has also increased, but it remains true that a small number of programs account for the vast majority of cost-effectiveness modelling work. We report a comparison of four software programs: TreeAge Pro, Microsoft Excel, R and MATLAB. Our focus is on software commonly used for building Markov models and decision trees to conduct cohort simulations, given their predominance in the published literature around cost-effectiveness modelling. Our comparison uses three qualitative criteria as proposed by Eddy et al.: "transparency and validation", "learning curve" and "capability". In addition, we introduce the quantitative criterion of processing speed. We also consider the cost of each program to academic users and commercial users. We rank the programs based on each of these criteria. We find that, whilst Microsoft Excel and TreeAge Pro are good programs for educational purposes and for producing the types of analyses typically required by health technology assessment agencies, the efficiency and transparency advantages of programming languages such as MATLAB and R become increasingly valuable when more complex analyses are required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brugger, K.E.; Tiebout, H.M. III
1994-12-31
Wildlife toxicologists pioneered methodologies for assessing ecological risk to nontarget species. Historically, ecological risk assessments (ERAS) focused on a limited array of species and were based on a relatively few population-level endpoints (mortality, reproduction). Currently, risk assessment models are becoming increasingly complex that factor in multi-species interactions (across trophic levels) and utilize an increasingly diverse number of ecologically significant endpoints. This trend suggests the increasing importance of safeguarding not only populations of individual species, but also the overall integrity of the larger biotic systems that support them. In this sense, ERAs are in alignment with Conservation Biology, an applied sciencemore » of ecological knowledge used to conserve biodiversity. A theoretical conservation biology model could be incorporated in ERAs to quantify impacts to biodiversity (structure, function or composition across levels of biological organization). The authors suggest that the Franklin-Noss model for evaluating biodiversity, with its nested, hierarchical approach, may provide a suitable paradigm for assessing and integrating the ecological risk that chemical contaminants pose to biological systems from the simplest levels (genotypes, individual organisms) to the most complex levels of organization (communities and ecosystems). The Franklin-Noss model can accommodate the existing ecotoxicological database and, perhaps more importantly, indicate new areas in which critical endpoints should be identified and investigated.« less
NASA Technical Reports Server (NTRS)
Mog, Robert A.
1999-01-01
Unique and innovative graph theory, neural network, organizational modeling, and genetic algorithms are applied to the design and evolution of programmatic and organizational architectures. Graph theory representations of programs and organizations increase modeling capabilities and flexibility, while illuminating preferable programmatic/organizational design features. Treating programs and organizations as neural networks results in better system synthesis, and more robust data modeling. Organizational modeling using covariance structures enhances the determination of organizational risk factors. Genetic algorithms improve programmatic evolution characteristics, while shedding light on rulebase requirements for achieving specified technological readiness levels, given budget and schedule resources. This program of research improves the robustness and verifiability of systems synthesis tools, including the Complex Organizational Metric for Programmatic Risk Environments (COMPRE).
Adapting APSIM to model the physiology and genetics of complex adaptive traits in field crops.
Hammer, Graeme L; van Oosterom, Erik; McLean, Greg; Chapman, Scott C; Broad, Ian; Harland, Peter; Muchow, Russell C
2010-05-01
Progress in molecular plant breeding is limited by the ability to predict plant phenotype based on its genotype, especially for complex adaptive traits. Suitably constructed crop growth and development models have the potential to bridge this predictability gap. A generic cereal crop growth and development model is outlined here. It is designed to exhibit reliable predictive skill at the crop level while also introducing sufficient physiological rigour for complex phenotypic responses to become emergent properties of the model dynamics. The approach quantifies capture and use of radiation, water, and nitrogen within a framework that predicts the realized growth of major organs based on their potential and whether the supply of carbohydrate and nitrogen can satisfy that potential. The model builds on existing approaches within the APSIM software platform. Experiments on diverse genotypes of sorghum that underpin the development and testing of the adapted crop model are detailed. Genotypes differing in height were found to differ in biomass partitioning among organs and a tall hybrid had significantly increased radiation use efficiency: a novel finding in sorghum. Introducing these genetic effects associated with plant height into the model generated emergent simulated phenotypic differences in green leaf area retention during grain filling via effects associated with nitrogen dynamics. The relevance to plant breeding of this capability in complex trait dissection and simulation is discussed.
NASA Astrophysics Data System (ADS)
Rubinstein, Alexander; Sabirianov, Renat
2011-03-01
Using a non-local electrostatic approach that incorporates the short-range structure of the contacting media, we evaluated the electrostatic contribution to the energy of the complex formation of two model proteins. In this study, we have demonstrated that the existence of an low-dielectric interfacial water layer at the protein-solvent interface reduces the charging energy of the proteins in the aqueous solvent, and consequently increases the electrostatic contribution to the protein binding (change in free energy upon the complex formation of two proteins). This is in contrast with the finding of the continuum electrostatic model, which suggests that electrostatic interactions are not strong enough to compensate for the unfavorable desolvation effects.
Predicting Complex Organic Molecule Emission from TW Hya
NASA Astrophysics Data System (ADS)
Vissapragada, Shreyas; Walsh, Catherine
2017-01-01
The Atacama Large Millimeter/submillimeter Array (ALMA) has significantly increased our ability to observe the rich chemical inventory of star and planet formation. ALMA has recently been used to detect CH3OH (methanol) and CH3CN (methyl cyanide) in protoplanetary disks; these molecules may be vital indicators of the complex organic ice reservoir in the comet-forming zone. We have constructed a physiochemical model of TW Hya, a well-studied protoplanetary disk, to explore the different formation mechanisms of complex ices. By running our model through a radiative transfer code and convolving with beam sizes appropriate for ALMA, we have obtained synthetic observations of methanol and methyl cyanide. Here, we compare and comment on these synthetic observations, and provide astrochemical justification for their spatial distributions.
Thaler, David S
2002-01-01
Vaillancourt and Newell (Neurobiol. of Aging 2001) show that although many aging systems decrease in complexity as anticipated by Lipsitz and Goldberger (JAMA 1992), other aging systems increase in complexity. Vaillancourt and Newell explain the discrepancy by proposing that systems with a point attractor decrease in complexity with age, whereas those with an oscillating attractor increase in complexity with age. Vaillancourt and Newell are certainly correct that no one direction fits all results. Aging and death sometimes follow from a system being too simple, or, too complex. A perspective, based on the work of W. Ross Ashby (1956 and http://pespmc1.vub.ac.be/ASHBBOOK.html) is used in this commentary to consider why some systems become apparently more simple and others more complex as they age. In this Ashby-inspired view the measured complexity of a system's Responses to Disturbances is proportional to the ratio D/R, where D and R are sets containing the variety of possible disturbances and responses. The model expands on Ashby's by proposing that D consists of two components, Dp and Du. Dp consists of disturbances that are a function of the system's perception. Responses to Dp are often anticipatory and the response itself dominates the outcome. Du are disturbances that are unavoidable. Outcomes decrease or increase in measured entropy as a function of changes in (Dp + Du)/R. The variety of elements in both Dp and R decrease with age. When D/R decreases with age, the system shows less complexity. Conversely when D/R increases with Age, the results become more entropic.
Shape-Reprogrammable Polymers: Encoding, Erasing, and Re-Encoding (Postprint)
2014-11-01
printing , is a layer-by-layer technology for producing 3D objects directly from a digital model. While 3D printing allows the fabrication of increasingly...one linear shape-translation processes often increase rapidly with shape complexity. Additive manufacturing, also called three-dimensional ( 3D
Shaping Science for Increasing Interdependence and Specialization.
Utzerath, Christian; Fernández, Guillén
2017-03-01
Like art, science affords an individualistic career. However, increasingly, complexity necessitates increased interdependency and specialization. Despite this change, many institutions, funding agencies, and publishers insist on an exclusively individualistic model of science. This hinders scientific progress by imposing a range of inefficiencies in the planning and execution of research plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
QRS complex detection based on continuous density hidden Markov models using univariate observations
NASA Astrophysics Data System (ADS)
Sotelo, S.; Arenas, W.; Altuve, M.
2018-04-01
In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain since it allows the determination of other characteristics waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous density hidden Markov models (HMM) is proposed. HMM were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on the log-likelihood comparison of the observation sequence with a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized by receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with 2 states HMM trained with QRS complexes sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We concluded that these univariate sequences provide enough information to characterize the QRS complex dynamics from HMM. Future works are directed to the use of multivariate observations to increase the detection performance.
Yang, Ruiyue; Huang, Zhongwei; Yu, Wei; Li, Gensheng; Ren, Wenxi; Zuo, Lihua; Tan, Xiaosi; Sepehrnoori, Kamy; Tian, Shouceng; Sheng, Mao
2016-01-01
A complex fracture network is generally generated during the hydraulic fracturing treatment in shale gas reservoirs. Numerous efforts have been made to model the flow behavior of such fracture networks. However, it is still challenging to predict the impacts of various gas transport mechanisms on well performance with arbitrary fracture geometry in a computationally efficient manner. We develop a robust and comprehensive model for real gas transport in shales with complex non-planar fracture network. Contributions of gas transport mechanisms and fracture complexity to well productivity and rate transient behavior are systematically analyzed. The major findings are: simple planar fracture can overestimate gas production than non-planar fracture due to less fracture interference. A “hump” that occurs in the transition period and formation linear flow with a slope less than 1/2 can infer the appearance of natural fractures. The sharpness of the “hump” can indicate the complexity and irregularity of the fracture networks. Gas flow mechanisms can extend the transition flow period. The gas desorption could make the “hump” more profound. The Knudsen diffusion and slippage effect play a dominant role in the later production time. Maximizing the fracture complexity through generating large connected networks is an effective way to increase shale gas production. PMID:27819349
Yang, Ruiyue; Huang, Zhongwei; Yu, Wei; Li, Gensheng; Ren, Wenxi; Zuo, Lihua; Tan, Xiaosi; Sepehrnoori, Kamy; Tian, Shouceng; Sheng, Mao
2016-11-07
A complex fracture network is generally generated during the hydraulic fracturing treatment in shale gas reservoirs. Numerous efforts have been made to model the flow behavior of such fracture networks. However, it is still challenging to predict the impacts of various gas transport mechanisms on well performance with arbitrary fracture geometry in a computationally efficient manner. We develop a robust and comprehensive model for real gas transport in shales with complex non-planar fracture network. Contributions of gas transport mechanisms and fracture complexity to well productivity and rate transient behavior are systematically analyzed. The major findings are: simple planar fracture can overestimate gas production than non-planar fracture due to less fracture interference. A "hump" that occurs in the transition period and formation linear flow with a slope less than 1/2 can infer the appearance of natural fractures. The sharpness of the "hump" can indicate the complexity and irregularity of the fracture networks. Gas flow mechanisms can extend the transition flow period. The gas desorption could make the "hump" more profound. The Knudsen diffusion and slippage effect play a dominant role in the later production time. Maximizing the fracture complexity through generating large connected networks is an effective way to increase shale gas production.
Steel, Jason C; Cavanagh, Heather M A; Burton, Mark A; Abu-Asab, Mones S; Tsokos, Maria; Morris, John C; Kalle, Wouter H J
2007-04-01
We aimed to increase the efficiency of adenoviral vectors by limiting adenoviral spread from the target site and reducing unwanted host immune responses to the vector. We complexed adenoviral vectors with DDAB-DOPE liposomes to form adenovirus-liposomal (AL) complexes. AL complexes were delivered by intratumoral injection in an immunocompetent subcutaneous rat tumor model and the immunogenicity of the AL complexes and the expression efficiency in the tumor and other organs was examined. Animals treated with the AL complexes had significantly lower levels of beta-galactosidase expression in systemic tissues compared to animals treated with the naked adenovirus (NA) (P<0.05). The tumor to non-tumor ratio of beta-galactosidase marker expression was significantly higher for the AL complex treated animals. NA induced significantly higher titers of adenoviral-specific antibodies compared to the AL complexes (P<0.05). The AL complexes provided protection (immunoshielding) to the adenovirus from neutralizing antibody. Forty-seven percent more beta-galactosidase expression was detected following intratumoral injection with AL complexes compared to the NA in animals pre-immunized with adenovirus. Complexing of adenovirus with liposomes provides a simple method to enhance tumor localization of the vector, decrease the immunogenicity of adenovirus, and provide protection of the virus from pre-existing neutralizing antibodies.
Teich, Andrew F; Qian, Ning
2010-03-01
Orientation adaptation and perceptual learning change orientation tuning curves of V1 cells. Adaptation shifts tuning curve peaks away from the adapted orientation, reduces tuning curve slopes near the adapted orientation, and increases the responses on the far flank of tuning curves. Learning an orientation discrimination task increases tuning curve slopes near the trained orientation. These changes have been explained previously in a recurrent model (RM) of orientation selectivity. However, the RM generates only complex cells when they are well tuned, so that there is currently no model of orientation plasticity for simple cells. In addition, some feedforward models, such as the modified feedforward model (MFM), also contain recurrent cortical excitation, and it is unknown whether they can explain plasticity. Here, we compare plasticity in the MFM, which simulates simple cells, and a recent modification of the RM (MRM), which displays a continuum of simple-to-complex characteristics. Both pre- and postsynaptic-based modifications of the recurrent and feedforward connections in the models are investigated. The MRM can account for all the learning- and adaptation-induced plasticity, for both simple and complex cells, while the MFM cannot. The key features from the MRM required for explaining plasticity are broadly tuned feedforward inputs and sharpening by a Mexican hat intracortical interaction profile. The mere presence of recurrent cortical interactions in feedforward models like the MFM is insufficient; such models have more rigid tuning curves. We predict that the plastic properties must be absent for cells whose orientation tuning arises from a feedforward mechanism.
NASA Astrophysics Data System (ADS)
Yang, Hyun Mo
2015-12-01
Currently, discrete modellings are largely accepted due to the access to computers with huge storage capacity and high performance processors and easy implementation of algorithms, allowing to develop and simulate increasingly sophisticated models. Wang et al. [7] present a review of dynamics in complex networks, focusing on the interaction between disease dynamics and human behavioral and social dynamics. By doing an extensive review regarding to the human behavior responding to disease dynamics, the authors briefly describe the complex dynamics found in the literature: well-mixed populations networks, where spatial structure can be neglected, and other networks considering heterogeneity on spatially distributed populations. As controlling mechanisms are implemented, such as social distancing due 'social contagion', quarantine, non-pharmaceutical interventions and vaccination, adaptive behavior can occur in human population, which can be easily taken into account in the dynamics formulated by networked populations.
Barton, C Michael; Ullah, Isaac I; Bergin, Sean
2010-11-28
The evolution of Mediterranean landscapes during the Holocene has been increasingly governed by the complex interactions of water and human land use. Different land-use practices change the amount of water flowing across the surface and infiltrating the soil, and change water's ability to move surface sediments. Conversely, water amplifies the impacts of human land use and extends the ecological footprint of human activities far beyond the borders of towns and fields. Advances in computational modelling offer new tools to study the complex feedbacks between land use, land cover, topography and surface water. The Mediterranean Landscape Dynamics project (MedLand) is building a modelling laboratory where experiments can be carried out on the long-term impacts of agropastoral land use, and whose results can be tested against the archaeological record. These computational experiments are providing new insights into the socio-ecological consequences of human decisions at varying temporal and spatial scales.
Noise Estimation in Electroencephalogram Signal by Using Volterra Series Coefficients
Hassani, Malihe; Karami, Mohammad Reza
2015-01-01
The Volterra model is widely used for nonlinearity identification in practical applications. In this paper, we employed Volterra model to find the nonlinearity relation between electroencephalogram (EEG) signal and the noise that is a novel approach to estimate noise in EEG signal. We show that by employing this method. We can considerably improve the signal to noise ratio by the ratio of at least 1.54. An important issue in implementing Volterra model is its computation complexity, especially when the degree of nonlinearity is increased. Hence, in many applications it is urgent to reduce the complexity of computation. In this paper, we use the property of EEG signal and propose a new and good approximation of delayed input signal to its adjacent samples in order to reduce the computation of finding Volterra series coefficients. The computation complexity is reduced by the ratio of at least 1/3 when the filter memory is 3. PMID:26284176
Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses
Preyra, Colin
2004-01-01
Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940
Zhang, Liang; Zhang, Song; Maezawa, Izumi; Trushin, Sergey; Minhas, Paras; Pinto, Matthew; Jin, Lee-Way; Prasain, Keshar; Nguyen, Thi D.T.; Yamazaki, Yu; Kanekiyo, Takahisa; Bu, Guojun; Gateno, Benjamin; Chang, Kyeong-Ok; Nath, Karl A.; Nemutlu, Emirhan; Dzeja, Petras; Pang, Yuan-Ping; Hua, Duy H.; Trushina, Eugenia
2015-01-01
Development of therapeutic strategies to prevent Alzheimer's disease (AD) is of great importance. We show that mild inhibition of mitochondrial complex I with small molecule CP2 reduces levels of amyloid beta and phospho-Tau and averts cognitive decline in three animal models of familial AD. Low-mass molecular dynamics simulations and biochemical studies confirmed that CP2 competes with flavin mononucleotide for binding to the redox center of complex I leading to elevated AMP/ATP ratio and activation of AMP-activated protein kinase in neurons and mouse brain without inducing oxidative damage or inflammation. Furthermore, modulation of complex I activity augmented mitochondrial bioenergetics increasing coupling efficiency of respiratory chain and neuronal resistance to stress. Concomitant reduction of glycogen synthase kinase 3β activity and restoration of axonal trafficking resulted in elevated levels of neurotrophic factors and synaptic proteins in adult AD mice. Our results suggest that metabolic reprogramming induced by modulation of mitochondrial complex I activity represents promising therapeutic strategy for AD. PMID:26086035
Models of service delivery for cancer genetic risk assessment and counseling.
Trepanier, Angela M; Allain, Dawn C
2014-04-01
Increasing awareness of and the potentially concomitant increasing demand for cancer genetic services is driving the need to explore more efficient models of service delivery. The aims of this study were to determine which service delivery models are most commonly used by genetic counselors, assess how often they are used, compare the efficiency of each model as well as impact on access to services, and investigate the perceived benefits and barriers of each. Full members of the NSGC Familial Cancer Special Interest Group who subscribe to its listserv were invited to participate in a web-based survey. Eligible respondents were asked which of ten defined service delivery models they use and specific questions related to aspects of model use. One-hundred ninety-two of the approximately 450 members of the listserv responded (42.7%); 177 (92.2%) had provided clinical service in the last year and were eligible to complete all sections of the survey. The four direct care models most commonly used were the (traditional) face-to-face pre- and post-test model (92.2%), the face-to-face pretest without face-to-face post-test model (86.5%), the post-test counseling only for complex results model (36.2%), and the post test counseling for all results model (18.3%). Those using the face-to-face pretest only, post-test all, and post-test complex models reported seeing more new patients than when they used the traditional model and these differences were statistically significantly. There were no significant differences in appointment wait times or distances traveled by patients when comparing use of the traditional model to the other three models. Respondents recognize that a benefit of using alternative service delivery models is increased access to services; however, some are concerned that this may affect quality of care.
Planning and executing complex large-scale exercises.
McCormick, Lisa C; Hites, Lisle; Wakelee, Jessica F; Rucks, Andrew C; Ginter, Peter M
2014-01-01
Increasingly, public health departments are designing and engaging in complex operations-based full-scale exercises to test multiple public health preparedness response functions. The Department of Homeland Security's Homeland Security Exercise and Evaluation Program (HSEEP) supplies benchmark guidelines that provide a framework for both the design and the evaluation of drills and exercises; however, the HSEEP framework does not seem to have been designed to manage the development and evaluation of multiple, operations-based, parallel exercises combined into 1 complex large-scale event. Lessons learned from the planning of the Mississippi State Department of Health Emergency Support Function--8 involvement in National Level Exercise 2011 were used to develop an expanded exercise planning model that is HSEEP compliant but accounts for increased exercise complexity and is more functional for public health. The Expanded HSEEP (E-HSEEP) model was developed through changes in the HSEEP exercise planning process in areas of Exercise Plan, Controller/Evaluator Handbook, Evaluation Plan, and After Action Report and Improvement Plan development. The E-HSEEP model was tested and refined during the planning and evaluation of Mississippi's State-level Emergency Support Function-8 exercises in 2012 and 2013. As a result of using the E-HSEEP model, Mississippi State Department of Health was able to capture strengths, lessons learned, and areas for improvement, and identify microlevel issues that may have been missed using the traditional HSEEP framework. The South Central Preparedness and Emergency Response Learning Center is working to create an Excel-based E-HSEEP tool that will allow practice partners to build a database to track corrective actions and conduct many different types of analyses and comparisons.
A model of magnetic and relaxation properties of the mononuclear [Pc2Tb](-)TBA+ complex.
Reu, O S; Palii, A V; Ostrovsky, S M; Tregenna-Piggott, P L W; Klokishner, S I
2012-10-15
The present work is aimed at the elaboration of the model of magnetic properties and magnetic relaxation in the mononuclear [Pc(2)Tb](-)TBA(+) complex that displays single-molecule magnet properties. We calculate the Stark structure of the ground (7)F(6) term of the Tb(3+) ion in the exchange charge model of the crystal field, taking account for covalence effects. The ground Stark level of the complex possesses the maximum value of the total angular momentum projection, while the energies of the excited Stark levels increase with decreasing |M(J)| values, thus giving rise to a barrier for the reversal of magnetization. The one-phonon transitions between the Stark levels of the Tb(3+) ion induced by electron-vibrational interaction are shown to lead to magnetization relaxation in the [Pc(2)Tb](-)TBA(+) complex. The rates of all possible transitions between the low-lying Stark levels are calculated in the temperature range 14 K
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2017-12-01
Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated due to frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally-increased soil moisture, thus increasing plant growth that in turn subsequently impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations sourced from long term process studies and the current cold regions literature allows for comparison of process representations and importantly, their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner can hopefully derive important insights and aid in development of improved process representations.
NASA Astrophysics Data System (ADS)
Nikolaeva, L. S.; Semenov, A. N.
2018-02-01
The anticoagulant activity of high-molecular-weight heparin is increased by developing a new highly active heparin complex with glutamate using the thermodynamic model of chemical equilibria based on pH-metric data. The anticoagulant activity of the developed complexes is estimated in the pH range of blood plasma according to the drop in the calculated equilibrium Ca2+ concentration associated with the formation of mixed ligand complexes of Ca2+ ions, heparin (Na4hep), and glutamate (H2Glu). A thermodynamic model is calculated by mathematically modelling chemical equilibria in the CaCl2-Na4hep-H2Glu-H2O-NaCl system in the pH range of 2.30 ≤ pH ≤ 10.50 in diluted saline that acts as a background electrolyte (0.154 M NaCl) at 37°C and initial concentrations of the main components of ν × 10-3 M, where n ≤ 4. The thermodynamic model is used to determine the main complex of the monomeric unit of heparin with glutamate (HhepGlu5-) and the most stable mixed ligand complex of Ca2+ with heparin and glutamate (Ca2hepGlu2-) in the pH range of blood plasma (6.80 ≤ pH ≤ 7.40). It is concluded that the Ca2hepGlu2- complex reduces the Ca2+ concentration 107 times more than the Ca2+ complex with pure heparin. The anticoagulant effect of the developed HhepGlu5- complex is confirmed in vitro and in vivo via coagulation tests on the blood plasma of laboratory rats. Additional antithrombotic properties of the developed complex are identified. The new highly active anticoagulant, HhepGlu5- complex with additional antithrombotic properties, is patented.
Oxygen isotope trajectories of crystallizing melts: Insights from modeling and the plutonic record
NASA Astrophysics Data System (ADS)
Bucholz, Claire E.; Jagoutz, Oliver; VanTongeren, Jill A.; Setera, Jacob; Wang, Zhengrong
2017-06-01
Elevated oxygen isotope values in igneous rocks are often used to fingerprint supracrustal alteration or assimilation of material that once resided near the surface of the earth. The δ18O value of a melt, however, can also increase through closed-system fractional crystallization. In order to quantify the change in melt δ18O due to crystallization, we develop a detailed closed-system fractional crystallization mass balance model and apply it to six experimentally- and naturally-determined liquid lines of descent (LLDs), which cover nearly complete crystallization intervals (melt fractions of 1 to <0.1). The studied LLDs vary from anhydrous tholeiitic basalts to hydrous high-K and calc-alkaline basalts and are characterized by distinct melt temperature-SiO2 trajectories, as well as, crystallizing phase relationships. Our model results demonstrate that melt fraction-temperature-SiO2 relationships of crystallizing melts, which are strongly a function of magmatic water content, will control the specific δ18O path of a crystallizing melt. Hydrous melts, typical of subduction zones, undergo larger increases in δ18O during early stages of crystallization due to their lower magmatic temperatures, greater initial increases in SiO2 content, and high temperature stability of low δ18O phases, such as oxides, amphibole, and anorthitic plagioclase (versus albite). Conversely, relatively dry, tholeiitic melts only experience significant increases in δ18O at degrees of crystallization greater than 80%. Total calculated increases in melt δ18O of 1.0-1.5‰ can be attributed to crystallization from ∼50 to 70 wt.% SiO2 for modeled closed-system crystallizing melt compositions. As an example application, we compare our closed system model results to oxygen isotope mineral data from two natural plutonic sequences, a relatively dry, tholeiitic sequence from the Upper and Upper Main Zones (UUMZ) of the Bushveld Complex (South Africa) and a high-K, hydrous sequence from the arc-related Dariv Igneous Complex (Mongolia). These two sequences were chosen as their major and trace element compositions appear to have been predominantly controlled by closed-system fractional crystallization and their LLDs have been modeled in detail. We calculated equilibrium melt δ18O values using the measured mineral δ18O values and calculated mineral-melt fractionation factors. Increases of 2-3‰ and 1-1.5‰ in the equilibrium melts are observed for the Dariv Igneous Complex and the UUMZ of the Bushveld Complex, respectively. Closed-system fractional crystallization model results reproduce the 1‰ increase observed in the equilibrium melt δ18O for the Bushveld UUMZ, whereas for the Dariv Igneous Complex assimilation of high δ18O material is necessary to account for the increase in melt δ18O values. Assimilation of evolved supracrustal material is also confirmed with Sr and Nd isotope analyses of clinopyroxene from the sequence. Beginning with a range of mantle-derived basalt δ18O values of 5.7‰ ("pristine" mantle) to ∼7.0‰ (heavily subduction-influenced mantle), our model results demonstrated that high-silica melts (i.e. granites) with δ18O of up to 8.5‰ can be produced through fractional crystallization alone. Lastly, we model the zircon-melt δ18O fractionations of different LLDs, emphasizing their dependence on the specific SiO2-T relationships of a given crystallizing melt. Wet, relatively cool granitic melts will have larger zircon-melt fractionations, potentially by ∼1.5‰, compared to hot, dry granites. Therefore, it is critical to constrain zircon-melt fractionations specific to a system of interest when using zircon δ18O values to calculate melt δ18O.
Education Governance in Action: Lessons from Case Studies
ERIC Educational Resources Information Center
Burns, Tracey; Köster, Florian; Fuster, Marc
2016-01-01
Governing multi-level education systems requires governance models that balance responsiveness to local diversity with the ability to ensure national objectives. This delicate equilibrium is difficult to achieve given the complexity of many education systems. Countries are therefore increasingly looking for examples of good practice and models of…
There is a need to develop modeling and data analysis tools to increase our understanding of human exposures to air pollutants beyond what can be explained by "limited" field data. Modeling simulations of complex distributions of pollutant concentrations within roadw...
Design of numerical model for thermoacoustic devices using OpenFOAM
NASA Astrophysics Data System (ADS)
Tisovsky, Tomas; Vit, Tomas
2017-09-01
Thermoacoustic devices are increasingly popular especially because of their construction simplicity and the ability to easily convert waste heat into the form of usable energy. Aim of this paper is to introduce some of the effective procedures for creating a complex mathematical model of thermoacoustic devices in OpenFOAM.
The BGR Contingency Model for Leading Change
ERIC Educational Resources Information Center
Brown, Derek R.; Gordon, Raymond; Rose, Dennis Michael
2012-01-01
The continuing failure rates of change initiatives, combined with an increasingly complex business environment, have created significant challenges for the practice of change management. High failure rates suggest that existing change models are not working, or are being incorrectly used. A different mindset to change is required. The BGR…
ERIC Educational Resources Information Center
Virk, Satyugjit; Clark, Douglas; Sengupta, Pratim
2015-01-01
Environments in which learning involves coordinating multiple external representations (MERs) can productively support learners in making sense of complex models and relationships. Educational digital games provide an increasing popular medium for engaging students in manipulating and exploring such models and relationships. This article applies…
Data management in the mission data system
NASA Technical Reports Server (NTRS)
Wagner, David A.
2005-01-01
As spacecraft evolve from simple embedded devices to become more sophisticated computing platforms with complex behaviors it is increasingly necessary to model and manage the flow of data, and to provide uniform models for managing data that promote adaptability, yet pay heed to the physical limitations of the embedded and space environments.
NASA Technical Reports Server (NTRS)
Hops, J. M.; Sherif, J. S.
1994-01-01
A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of software maintenance expected cost, long before software is delivered to users or customers. It has been estimated that, on the average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point to aid in alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that noe new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
Chen, Ying; Pham, Tuan D
2013-05-15
We apply for the first time the sample entropy (SampEn) and regularity dimension model for measuring signal complexity to quantify the structural complexity of the brain on MRI. The concept of the regularity dimension is based on the theory of chaos for studying nonlinear dynamical systems, where power laws and entropy measure are adopted to develop the regularity dimension for modeling a mathematical relationship between the frequencies with which information about signal regularity changes in various scales. The sample entropy and regularity dimension of MRI-based brain structural complexity are computed for early Alzheimer's disease (AD) elder adults and age and gender-matched non-demented controls, as well as for a wide range of ages from young people to elder adults. A significantly higher global cortical structure complexity is detected in AD individuals (p<0.001). The increase of SampEn and the regularity dimension are also found to be accompanied with aging which might indicate an age-related exacerbation of cortical structural irregularity. The provided model can be potentially used as an imaging bio-marker for early prediction of AD and age-related cognitive decline. Copyright © 2013 Elsevier B.V. All rights reserved.
A Principled Approach to the Specification of System Architectures for Space Missions
NASA Technical Reports Server (NTRS)
McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad
2015-01-01
Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key systems challenge in practice is the ability to scale processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Therefore, traditional methods are becoming increasingly inefficient to cope with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent engineering model based design environment.
NASA Astrophysics Data System (ADS)
McDonald, Karlie; Mika, Sarah; Kolbe, Tamara; Abbott, Ben; Ciocca, Francesco; Marruedo, Amaia; Hannah, David; Schmidt, Christian; Fleckenstein, Jan; Karuse, Stefan
2016-04-01
Sub-surface hydrologic processes are highly dynamic, varying spatially and temporally with strong links to the geomorphology and hydrogeologic properties of an area. This spatial and temporal complexity is a critical regulator of biogeochemical and ecological processes within the interface groundwater - surface water (GW-SW) ecohydrological interface and adjacent ecosystems. Many GW-SW models have attempted to capture this spatial and temporal complexity with varying degrees of success. The incorporation of spatial and temporal complexity within GW-SW model configuration is important to investigate interactions with transient storage and subsurface geology, infiltration and recharge, and mass balance of exchange fluxes at the GW-SW ecohydrological interface. Additionally, characterising spatial and temporal complexity in GW-SW models is essential to derive predictions using realistic environmental conditions. In this paper we conduct a systematic Web of Science meta-analysis of conceptual, hydrodynamic, and reactive and heat transport models of the GW-SW ecohydrological interface since 2004 to explore how these models handled spatial and temporal complexity. The freshwater - groundwater ecohydrological interface was the most commonly represented in publications between 2004 and 2014 with 91% of papers followed by marine 6% and estuarine systems with 3% of papers. Of the GW-SW models published since 2004, the 52% have focused on hydrodynamic processes and <15% covered more than one process (e.g. heat and reactive transport). Within the hydrodynamic subset, 25% of models focused on a vertical depth of <5m. The primary scientific and technological limitations of incorporating spatial and temporal variability into GW-SW models are identified as the inclusion of woody debris, carbon sources, subsurface geological structures and bioclogging into model parameterization. The technological limitations influence the types of models applied, such as hydrostatic coupled models and fully intrinsic saturated and unsaturated models, and the assumptions or simplifications scientists apply to investigate the GW-SW ecohydrological interface. We investigated the type of modelling approaches applied across different scales (site, reach, catchment, nested catchments) and assessed the simplifications in environmental conditions and complexity that are commonly made in model configuration. Understanding the theoretical concepts that underpin these current modelling approaches is critical for scientists to develop measures to derive predictions from realistic environmental conditions at management relevant scales and establish best-practice modelling approaches for improving the scientific understanding and management of the GW-SW interface. Additionally, the assessment of current modelling approaches informs our proposed framework for the progress of GW-SW models in the future. The framework presented aims to increase future scientific, technological and management integration and the identification of research priorities to allow spatial and temporal complexity to be better incorporated into GW-SW models.
Vibrational relaxation of I2 in complexing solvents: The role of solvent-solute attractive forces
NASA Astrophysics Data System (ADS)
Shiang, Joseph J.; Liu, Hongjun; Sension, Roseanne J.
1998-12-01
Femtosecond transient absorption studies of I2-arene complexes, with arene=hexamethylbenzene (HMB), mesitylene (MST), or m-xylene (mX), are used to investigate the effect of solvent-solute attractive forces upon the rate of vibrational relaxation in solution. Comparison of measurements on I2-MST complexes in neat mesitylene and I2-MST complexes diluted in carbontetrachloride demonstrate that binary solvent-solute attractive forces control the rate of vibrational relaxation in this prototypical model of diatomic vibrational relaxation. The data obtained for different arenes demonstrate that the rate of I2 relaxation increases with the magnitude of the I2-arene attractive interaction. I2-HMB relaxes much faster than I2 in MST or mX. The results of these experiments are discussed in terms of both isolated binary collision and instantaneous normal mode models for vibrational relaxation.
Eilmes, Andrzej; Kubisiak, Piotr
2010-01-21
Relative complexation energies for the lithium cation in acetonitrile and diethyl ether have been studied. Quantum-chemical calculations explicitly describing the solvation of Li(+) have been performed based on structures obtained from molecular dynamics simulations. The effect of an increasing number of solvent molecules beyond the first solvation shell has been found to consist in reduction of the differences in complexation energies for different coordination numbers. Explicit-solvation data have served as a benchmark to the results of polarizable continuum model (PCM) calculations. It has been demonstrated that the PCM approach can yield relative complexation energies comparable to the predictions based on molecular-level solvation, but at significantly lower computational cost. The best agreement between the explicit-solvation and the PCM results has been obtained when the van der Waals surface was adopted to build the molecular cavity.
McMahon, Michelle A; Christopher, Kimberly A
2011-08-19
As the complexity of health care delivery continues to increase, educators are challenged to determine educational best practices to prepare BSN students for the ambiguous clinical practice setting. Integrative, active, and student-centered curricular methods are encouraged to foster student ability to use clinical judgment for problem solving and informed clinical decision making. The proposed pedagogical model of progressive complexity in nursing education suggests gradually introducing students to complex and multi-contextual clinical scenarios through the utilization of case studies and problem-based learning activities, with the intention to transition nursing students into autonomous learners and well-prepared practitioners at the culmination of a nursing program. Exemplar curricular activities are suggested to potentiate student development of a transferable problem solving skill set and a flexible knowledge base to better prepare students for practice in future novel clinical experiences, which is a mutual goal for both educators and students.
Climate Modeling with a Million CPUs
NASA Astrophysics Data System (ADS)
Tobis, M.; Jackson, C. S.
2010-12-01
Michael Tobis, Ph.D. Research Scientist Associate University of Texas Institute for Geophysics Charles S. Jackson Research Scientist University of Texas Institute for Geophysics Meteorological, oceanographic, and climatological applications have been at the forefront of scientific computing since its inception. The trend toward ever larger and more capable computing installations is unabated. However, much of the increase in capacity is accompanied by an increase in parallelism and a concomitant increase in complexity. An increase of at least four additional orders of magnitude in the computational power of scientific platforms is anticipated. It is unclear how individual climate simulations can continue to make effective use of the largest platforms. Conversion of existing community codes to higher resolution, or to more complex phenomenology, or both, presents daunting design and validation challenges. Our alternative approach is to use the expected resources to run very large ensembles of simulations of modest size, rather than to await the emergence of very large simulations. We are already doing this in exploring the parameter space of existing models using the Multiple Very Fast Simulated Annealing algorithm, which was developed for seismic imaging. Our experiments have the dual intentions of tuning the model and identifying ranges of parameter uncertainty. Our approach is less strongly constrained by the dimensionality of the parameter space than are competing methods. Nevertheless, scaling up remains costly. Much could be achieved by increasing the dimensionality of the search and adding complexity to the search algorithms. Such ensemble approaches scale naturally to very large platforms. Extensions of the approach are anticipated. For example, structurally different models can be tuned to comparable effectiveness. This can provide an objective test for which there is no realistic precedent with smaller computations. We find ourselves inventing new code to manage our ensembles. Component computations involve tens to hundreds of CPUs and tens to hundreds of hours. The results of these moderately large parallel jobs influence the scheduling of subsequent jobs, and complex algorithms may be easily contemplated for this. The operating system concept of a "thread" re-emerges at a very coarse level, where each thread manages atomic computations of thousands of CPU-hours. That is, rather than multiple threads operating on a processor, at this level, multiple processors operate within a single thread. In collaboration with the Texas Advanced Computing Center, we are developing a software library at the system level, which should facilitate the development of computations involving complex strategies which invoke large numbers of moderately large multi-processor jobs. While this may have applications in other sciences, our key intent is to better characterize the coupled behavior of a very large set of climate model configurations.
Essential Requirements for Robust Signaling in Hfq Dependent Small RNA Networks
Adamson, David N.; Lim, Han N.
2011-01-01
Bacteria possess networks of small RNAs (sRNAs) that are important for modulating gene expression. At the center of many of these sRNA networks is the Hfq protein. Hfq's role is to quickly match cognate sRNAs and target mRNAs from among a large number of possible combinations and anneal them to form duplexes. Here we show using a kinetic model that Hfq can efficiently and robustly achieve this difficult task by minimizing the sequestration of sRNAs and target mRNAs in Hfq complexes. This sequestration can be reduced by two non-mutually exclusive kinetic mechanisms. The first mechanism involves heterotropic cooperativity (where sRNA and target mRNA binding to Hfq is influenced by other RNAs bound to Hfq); this cooperativity can selectively decrease singly-bound Hfq complexes and ternary complexes with non-cognate sRNA-target mRNA pairs while increasing cognate ternary complexes. The second mechanism relies on frequent RNA dissociation enabling the rapid cycling of sRNAs and target mRNAs among different Hfq complexes; this increases the probability the cognate ternary complex forms before the sRNAs and target mRNAs degrade. We further demonstrate that the performance of sRNAs in isolation is not predictive of their performance within a network. These findings highlight the importance of experimentally characterizing duplex formation in physiologically relevant contexts with multiple RNAs competing for Hfq. The model will provide a valuable framework for guiding and interpreting these experiments. PMID:21876666
A complex adaptive systems perspective of health information technology implementation.
Keshavjee, Karim; Kuziemsky, Craig; Vassanji, Karim; Ghany, Ahmad
2013-01-01
Implementing health information technology (HIT) is a challenge because of the complexity and multiple interactions that define HIT implementation. Much of the research on HIT implementation is descriptive in nature and has focused on distinct processes such as order entry or decision support. These studies fail to take into account the underlying complexity of the processes, people and settings that are typical of HIT implementations. Complex adaptive systems (CAS) is a promising field that could elucidate the complexity and non-linear interacting issues that are typical in HIT implementation. Initially we sought new models that would enable us to better understand the complex nature of HIT implementation, to proactively identify problem issues that could be a precursor to unintended consequences and to develop new models and new approaches to successful HIT implementations. Our investigation demonstrates that CAS does not provide prediction, but forces us to rethink our HIT implementation paradigms and question what we think we know. CAS provides new ways to conceptualize HIT implementation and suggests new approaches to increasing HIT implementation successes.
NASA Astrophysics Data System (ADS)
Haussaire, Jean-Matthieu; Bocquet, Marc
2016-04-01
Atmospheric chemistry models are becoming increasingly complex, with multiphasic chemistry, size-resolved particulate matter, and possibly coupled to numerical weather prediction models. In the meantime, data assimilation methods have also become more sophisticated. Hence, it will become increasingly difficult to disentangle the merits of data assimilation schemes, of models, and of their numerical implementation in a successful high-dimensional data assimilation study. That is why we believe that the increasing variety of problems encountered in the field of atmospheric chemistry data assimilation puts forward the need for simple low-order models, albeit complex enough to capture the relevant dynamics, physics and chemistry that could impact the performance of data assimilation schemes. Following this analysis, we developped a low-order coupled chemistry meteorology model named L95-GRS [1]. The advective wind is simulated by the Lorenz-95 model, while the chemistry is made of 6 reactive species and simulates ozone concentrations. With this model, we carried out data assimilation experiments to estimate the state of the system as well as the forcing parameter of the wind and the emissions of chemical compounds. This model proved to be a powerful playground giving insights on the hardships of online and offline estimation of atmospheric pollution. Building on the results on this low-order model, we test advanced data assimilation methods on a state-of-the-art chemical transport model to check if the conclusions obtained with our low-order model still stand. References [1] Haussaire, J.-M. and Bocquet, M.: A low-order coupled chemistry meteorology model for testing online and offline data assimilation schemes, Geosci. Model Dev. Discuss., 8, 7347-7394, doi:10.5194/gmdd-8-7347-2015, 2015.
Temporal-logic analysis of microglial phenotypic conversion with exposure to amyloid-β.
Anastasio, Thomas J
2015-02-01
Alzheimer Disease (AD) remains a leading killer with no adequate treatment. Ongoing research increasingly implicates the brain's immune system as a critical contributor to AD pathogenesis, but the complexity of the immune contribution poses a barrier to understanding. Here I use temporal logic to analyze a computational specification of the immune component of AD. Temporal logic is an extension of logic to propositions expressed in terms of time. It has traditionally been used to analyze computational specifications of complex engineered systems but applications to complex biological systems are now appearing. The inflammatory component of AD involves the responses of microglia to the peptide amyloid-β (Aβ), which is an inflammatory stimulus and a likely causative AD agent. Temporal-logic analysis of the model provides explanations for the puzzling findings that Aβ induces an anti-inflammatory and well as a pro-inflammatory response, and that Aβ is phagocytized by microglia in young but not in old animals. To potentially explain the first puzzle, the model suggests that interferon-γ acts as an "autocrine bridge" over which an Aβ-induced increase in pro-inflammatory cytokines leads to an increase in anti-inflammatory mediators also. To potentially explain the second puzzle, the model identifies a potential instability in signaling via insulin-like growth factor 1 that could explain the failure of old microglia to phagocytize Aβ. The model predicts that augmentation of insulin-like growth factor 1 signaling, and activation of protein kinase C in particular, could move old microglia from a neurotoxic back toward a more neuroprotective and phagocytic phenotype.
A finite element model of rigid body structures actuated by dielectric elastomer actuators
NASA Astrophysics Data System (ADS)
Simone, F.; Linnebach, P.; Rizzello, G.; Seelecke, S.
2018-06-01
This paper presents on finite element (FE) modeling and simulation of dielectric elastomer actuators (DEAs) coupled with articulated structures. DEAs have proven to represent an effective transduction technology for the realization of large deformation, low-power consuming, and fast mechatronic actuators. However, the complex dynamic behavior of the material, characterized by nonlinearities and rate-dependent phenomena, makes it difficult to accurately model and design DEA systems. The problem is further complicated in case the DEA is used to activate articulated structures, which increase both system complexity and implementation effort of numerical simulation models. In this paper, we present a model based tool which allows to effectively implement and simulate complex articulated systems actuated by DEAs. A first prototype of a compact switch actuated by DEA membranes is chosen as reference study to introduce the methodology. The commercially available FE software COMSOL is used for implementing and coupling a physics-based dynamic model of the DEA with the external structure, i.e., the switch. The model is then experimentally calibrated and validated in both quasi-static and dynamic loading conditions. Finally, preliminary results on how to use the simulation tool to optimize the design are presented.
Spiritual and Affective Responses to a Physical Church and Corresponding Virtual Model.
Murdoch, Matt; Davies, Jim
2017-11-01
Architectural and psychological theories posit that built environments have the potential to elicit complex psychological responses. However, few researchers have seriously explored this potential. Given the increasing importance and fidelity of virtual worlds, such research should explore whether virtual models of built environments are also capable of eliciting complex psychological responses. The goal of this study was to test these hypotheses, using a church, a corresponding virtual model, and an inclusive measure of state spirituality ("spiritual feelings"). Participants (n = 33) explored a physical church and corresponding virtual model, completing a measure of spiritual feelings after exploring the outside and inside of each version of the church. Using spiritual feelings after exploring the outside of the church as a baseline measure, change in state spirituality was assessed by taking the difference between spiritual feelings after exploring the inside and outside of the church (inside-outside) for both models. Although this change was greater in response to the physical church, there was no significant difference between the two models in eliciting such change in spiritual feelings. Despite the limitations of this exploratory study, these findings indicate that both built environments and corresponding virtual models are capable of evoking complex psychological responses.
2016-01-01
Muscle contractions are generated by cyclical interactions of myosin heads with actin filaments to form the actomyosin complex. To simulate actomyosin complex stable states, mathematical models usually define an energy landscape with a corresponding number of wells. The jumps between these wells are defined through rate constants. Almost all previous models assign these wells an infinite sharpness by imposing a relatively simple expression for the detailed balance, i.e., the ratio of the rate constants depends exponentially on the sole myosin elastic energy. Physically, this assumption corresponds to neglecting thermal fluctuations in the actomyosin complex stable states. By comparing three mathematical models, we examine the extent to which this hypothesis affects muscle model predictions at the single cross-bridge, single fiber, and organ levels in a ceteris paribus analysis. We show that including fluctuations in stable states allows the lever arm of the myosin to easily and dynamically explore all possible minima in the energy landscape, generating several backward and forward jumps between states during the lifetime of the actomyosin complex, whereas the infinitely sharp minima case is characterized by fewer jumps between states. Moreover, the analysis predicts that thermal fluctuations enable a more efficient contraction mechanism, in which a higher force is sustained by fewer attached cross-bridges. PMID:27626630
NASA Astrophysics Data System (ADS)
Hao, Na; Moysey, Stephen M. J.; Powell, Brian A.; Ntarlagiannis, Dimitrios
2016-12-01
Surface complexation models are widely used with batch adsorption experiments to characterize and predict surface geochemical processes in porous media. In contrast, the spectral induced polarization (SIP) method has recently been used to non-invasively monitor in situ subsurface chemical reactions in porous media, such as ion adsorption processes on mineral surfaces. Here we compare these tools for investigating surface site density changes during pH-dependent sodium adsorption on a silica gel. Continuous SIP measurements were conducted using a lab scale column packed with silica gel. A constant inflow of 0.05 M NaCl solution was introduced to the column while the influent pH was changed from 7.0 to 10.0 over the course of the experiment. The SIP measurements indicate that the pH change caused a 38.49 ± 0.30 μS cm- 1 increase in the imaginary conductivity of the silica gel. This increase is thought to result from deprotonation of silanol groups on the silica gel surface caused by the rise in pH, followed by sorption of Na+ cations. Fitting the SIP data using the mechanistic model of Leroy et al. (Leroyet al., 2008), which is based on the triple layer model of a mineral surface, we estimated an increase in the silica gel surface site density of 26.9 × 1016 sites m- 2. We independently used a potentiometric acid-base titration data for the silica gel to calibrate the triple layer model using the software FITEQL and observed a total increase in the surface site density for sodium sorption of 11.2 × 1016 sites m- 2, which is approximately 2.4 times smaller than the value estimated using the SIP model. By simulating the SIP response based on the calibrated surface complexation model, we found a moderate association between the measured and estimated imaginary conductivity (R2 = 0.65). These results suggest that the surface complexation model used here does not capture all mechanisms contributing to polarization of the silica gel captured by the SIP data.
Impacts of increasing the aerosol complexity in the Met Office global NWP model
NASA Astrophysics Data System (ADS)
Mulcahy, Jane; Walters, David; Bellouin, Nicolas; Milton, Sean
2014-05-01
Inclusion of the direct and indirect radiative effects of aerosols in high resolution global numerical weather prediction (NWP) models is being increasingly recognised as important for the improved accuracy of short-range weather forecasts. In this study the impacts of increasing the aerosol complexity in the global NWP configuration of the Met Office Unified Model (MetUM) are investigated. A hierarchy of aerosol representations are evaluated including three dimensional monthly mean speciated aerosol climatologies, fully prognostic aerosols modelled using the CLASSIC aerosol scheme and finally, initialised aerosols using assimilated aerosol fields from the GEMS project. The prognostic aerosol schemes are better able to predict the temporal and spatial variation of atmospheric aerosol optical depth, which is particularly important in cases of large sporadic aerosol events such as large dust storms or forest fires. Including the direct effect of aerosols improves model biases in outgoing longwave radiation over West Africa due to a better representation of dust. Inclusion of the indirect aerosol effects has significant impacts on the SW radiation particularly at high latitudes due to lower cloud amounts in high latitude clean air regions. This leads to improved surface radiation biases at the North Slope of Alaska ARM site. Verification of temperature and height forecasts is also improved in this region. Impacts on the global mean model precipitation and large-scale circulation fields were found to be generally small in the short range forecasts. However, the indirect aerosol effect leads to a strengthening of the low level monsoon flow over the Arabian Sea and Bay of Bengal and an increase in precipitation over Southeast Asia. This study highlights the importance of including a more realistic treatment of aerosol-cloud interactions in global NWP models and the potential for improved global environmental prediction systems through the incorporation of more complex aerosol schemes. This work is distributed under the Creative Commons Attribution 3.0 Unported License together with an author copyright. This license does not conflict with the regulations of the Crown Copyright.
A Distributed Leadership Change Process Model for Higher Education
ERIC Educational Resources Information Center
Jones, Sandra; Harvey, Marina
2017-01-01
The higher education sector operates in an increasingly complex global environment that is placing it under considerable stress and resulting in widespread change to the operating context and leadership of higher education institutions. The outcome has been the increased likelihood of conflict between academics and senior leaders, presaging the…
Formal modeling and analysis of ER-α associated Biological Regulatory Network in breast cancer.
Khalid, Samra; Hanif, Rumeza; Tareen, Samar H K; Siddiqa, Amnah; Bibi, Zurah; Ahmad, Jamil
2016-01-01
Breast cancer (BC) is one of the leading cause of death among females worldwide. The increasing incidence of BC is due to various genetic and environmental changes which lead to the disruption of cellular signaling network(s). It is a complex disease in which several interlinking signaling cascades play a crucial role in establishing a complex regulatory network. The logical modeling approach of René Thomas has been applied to analyze the behavior of estrogen receptor-alpha (ER- α ) associated Biological Regulatory Network (BRN) for a small part of complex events that leads to BC metastasis. A discrete model was constructed using the kinetic logic formalism and its set of logical parameters were obtained using the model checking technique implemented in the SMBioNet software which is consistent with biological observations. The discrete model was further enriched with continuous dynamics by converting it into an equivalent Petri Net (PN) to analyze the logical parameters of the involved entities. In-silico based discrete and continuous modeling of ER- α associated signaling network involved in BC provides information about behaviors and gene-gene interaction in detail. The dynamics of discrete model revealed, imperative behaviors represented as cyclic paths and trajectories leading to pathogenic states such as metastasis. Results suggest that the increased expressions of receptors ER- α , IGF-1R and EGFR slow down the activity of tumor suppressor genes (TSGs) such as BRCA1, p53 and Mdm2 which can lead to metastasis. Therefore, IGF-1R and EGFR are considered as important inhibitory targets to control the metastasis in BC. The in-silico approaches allow us to increase our understanding of the functional properties of living organisms. It opens new avenues of investigations of multiple inhibitory targets (ER- α , IGF-1R and EGFR) for wet lab experiments as well as provided valuable insights in the treatment of cancers such as BC.
Thermoelectric Properties of Complex Zintl Phases
NASA Astrophysics Data System (ADS)
Snyder, G. Jeffrey
2008-03-01
Complex Zintl phases make ideal thermoelectric materials because they can exhibit the necessary ``electron-crystal, phonon-glass'' properties required for high thermoelectric efficiency. Complex crystal structures can lead to high thermoelectric figure of merit (zT) by having extraordinarily low lattice thermal conductivity. A recent example is the discovery that Yb14MnSb11, a complex Zintl compound, has twice the zT as the SiGe based material currently in use at NASA. The high temperature (300K - 1300K) electronic properties of Yb14MnSb11 can be understood using models for heavily doped semiconductors. The free hole concentration, confirmed by Hall effect measurements, is set by the electron counting rules of Zintl and the valence of the transition metal (Mn^+2). Substitution of nonmagnetic Zn^+2 for the magnetic Mn^+2 reduces the spin-disorder scattering and leads to increased zT (10%). The reduction of spin-disorder scattering is consistent with the picture of Yb14MnSb11 as an underscreened Kondo lattice as derived from low temperature measurements. The hole concentration can be reduced by the substitution of Al^+3 for Mn^+2, which leads to an increase in the Seebeck coefficient and electrical resistivity consistent with models for degenerate semiconductors. This leads to further improvements (about 25%) in zT and a reduction in the temperature where the zT peaks. The peak in zT is due to the onset of minority carrier conduction and can be correlated with reduction in Seebeck coefficient, increase in electrical conductivity and increase in thermal conductivity due to bipolar thermal conduction.
Meta II: Multi-Model Language Suite for Cyber Physical Systems
2013-03-01
AVM META) projects have developed tools for designing cyber physical (or Mechatronic ) Systems . These systems are increasingly complex, take much...projects have developed tools for designing cyber physical (CPS) (or Mechatronic ) systems . Exemplified by modern amphibious and ground military...and parametric interface of Simulink models and defines associations with CyPhy components and component interfaces. 2. Embedded Systems Modeling
DNA damage and repair after high LET radiation
NASA Astrophysics Data System (ADS)
O'Neill, Peter; Cucinotta, Francis; Anderson, Jennifer
Predictions from biophysical models of interactions of radiation tracks with cellular DNA indicate that clustered DNA damage sites, defined as two or more lesions formed within one or two helical turns of the DNA by passage of a single radiation track, are formed in mammalian cells. These complex DNA damage sites are regarded as a signature of ionizing radiation exposure particularly as the likelihood of clustered damage sites arising endogenously is low. For instance, it was predicted from biophysical modelling that 30-40% of low LET-induced double strand breaks (DSB), a form of clustered damage, are complex with the yield increasing to >90% for high LET radiation, consistent with the reduced reparability of DSB with increasing ionization density of the radiation. The question arises whether the increased biological effects such as mutagenesis, carcinogenesis and lethality is in part related to DNA damage complexity and/or spatial distribution of the damage sites, which may lead to small DNA fragments. With particle radiation it is also important to consider not only delta-rays which may cause clustered damaged sites and may be highly mutagenic but the non-random spatial distribution of DSB which may lead to deletions. In this overview I will concentrate on the molecular aspects of the variation of the complexity of DNA damage on radiation quality and the challenges this complexity presents the DNA damage repair pathways. I will draw on data from micro-irradiations which indicate that the repair of DSBs by non-homologous end joining is highly regulated with pathway choice and kinetics of repair dependent on the chemical complexity of the DSB. In summary the aim is to emphasis the link between the spatial distribution of energy deposition events related to the track, the molecular products formed and the consequence of damage complexity contributing to biological effects and to present some of the outstanding molecular challenges with particle radiation.
Zhang, Xiaokai; Qin, Boqiang; Deng, Jianming; Wells, Mona
2017-10-01
As the world burden of environmental contamination increases, it is of the utmost importance to develop streamlined approaches to environmental risk assessment in order to prioritize mitigation measures. Whole-cell biosensors or bioreporters and speciation modeling have both become of increasing interest to determine the bioavailability of pollutants, as bioavailability is increasingly in use as an indicator of risk. Herein, we examine whether bioreporter results are able to reflect expectations based on chemical reactivity and speciation modeling, with the hope to extend the research into a wider framework of risk assessment. We study a specific test case concerning the bioavailability of lead (Pb) in aqueous environments containing Pb-complexing ligands. Ligands studied include ethylene diamine tetra-acetic acid (EDTA), meso-2,3 dimercaptosuccinic acid (DMSA), leucine, methionine, cysteine, glutathione, and humic acid (HA), and we also performed experiments using natural water samples from Lake Tai (Taihu), the third largest lake in China. We find that EDTA, DMSA, cysteine, glutathione, and HA amendment significantly reduced Pb bioavailability with increasing ligand concentration according to a log-sigmoid trend. Increasing dissolved organic carbon in Taihu water also had the same effect, whereas leucine and methionine had no notable effect on bioavailability at the concentrations tested. We find that bioreporter results are in accord with the reduction of aqueous Pb 2+ that we expect from the relative complexation affinities of the different ligands tested. For EDTA and HA, for which reasonably accurate ionization and complexation constants are known, speciation modeling is in agreement with bioreporter response to within the level of uncertainty recognised as reasonable by the United States Environmental Protection Agency for speciation-based risk assessment applications. These findings represent a first step toward using bioreporter technology to streamline the biological confirmation or validation of speciation modeling for use in environmental risk assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, andmore » refine first principles calculations to better describe the experimental data.« less
Complex optimization for big computational and experimental neutron datasets
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...
2016-11-07
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, andmore » refine first principles calculations to better describe the experimental data.« less
Cornejo-Donoso, Jorge; Einarsson, Baldvin; Birnir, Bjorn; Gaines, Steven D
2017-01-01
Marine Protected Areas (MPA) are important management tools shown to protect marine organisms, restore biomass, and increase fisheries yields. While MPAs have been successful in meeting these goals for many relatively sedentary species, highly mobile organisms may get few benefits from this type of spatial protection due to their frequent movement outside the protected area. The use of a large MPA can compensate for extensive movement, but testing this empirically is challenging, as it requires both large areas and sufficient time series to draw conclusions. To overcome this limitation, MPA models have been used to identify designs and predict potential outcomes, but these simulations are highly sensitive to the assumptions describing the organism's movements. Due to recent improvements in computational simulations, it is now possible to include very complex movement assumptions in MPA models (e.g. Individual Based Model). These have renewed interest in MPA simulations, which implicitly assume that increasing the detail in fish movement overcomes the sensitivity to the movement assumptions. Nevertheless, a systematic comparison of the designs and outcomes obtained under different movement assumptions has not been done. In this paper, we use an individual based model, interconnected to population and fishing fleet models, to explore the value of increasing the detail of the movement assumptions using four scenarios of increasing behavioral complexity: a) random, diffusive movement, b) aggregations, c) aggregations that respond to environmental forcing (e.g. sea surface temperature), and d) aggregations that respond to environmental forcing and are transported by currents. We then compare these models to determine how the assumptions affect MPA design, and therefore the effective protection of the stocks. Our results show that the optimal MPA size to maximize fisheries benefits increases as movement complexity increases from ~10% for the diffusive assumption to ~30% when full environment forcing was used. We also found that in cases of limited understanding of the movement dynamics of a species, simplified assumptions can be used to provide a guide for the minimum MPA size needed to effectively protect the stock. However, using oversimplified assumptions can produce suboptimal designs and lead to a density underestimation of ca. 30%; therefore, the main value of detailed movement dynamics is to provide more reliable MPA design and predicted outcomes. Large MPAs can be effective in recovering overfished stocks, protect pelagic fish and provide significant increases in fisheries yields. Our models provide a means to empirically test this spatial management tool, which theoretical evidence consistently suggests as an effective alternative to managing highly mobile pelagic stocks.
The new challenges of multiplex networks: Measures and models
NASA Astrophysics Data System (ADS)
Battiston, Federico; Nicosia, Vincenzo; Latora, Vito
2017-02-01
What do societies, the Internet, and the human brain have in common? They are all examples of complex relational systems, whose emerging behaviours are largely determined by the non-trivial networks of interactions among their constituents, namely individuals, computers, or neurons, rather than only by the properties of the units themselves. In the last two decades, network scientists have proposed models of increasing complexity to better understand real-world systems. Only recently we have realised that multiplexity, i.e. the coexistence of several types of interactions among the constituents of a complex system, is responsible for substantial qualitative and quantitative differences in the type and variety of behaviours that a complex system can exhibit. As a consequence, multilayer and multiplex networks have become a hot topic in complexity science. Here we provide an overview of some of the measures proposed so far to characterise the structure of multiplex networks, and a selection of models aiming at reproducing those structural properties and quantifying their statistical significance. Focusing on a subset of relevant topics, this brief review is a quite comprehensive introduction to the most basic tools for the analysis of multiplex networks observed in the real-world. The wide applicability of multiplex networks as a framework to model complex systems in different fields, from biology to social sciences, and the colloquial tone of the paper will make it an interesting read for researchers working on both theoretical and experimental analysis of networked systems.
Hong, Taehoon; Koo, Choongwan; Kim, Hyunjoong
2012-12-15
The number of deteriorated multi-family housing complexes in South Korea continues to rise, and consequently their electricity consumption is also increasing. This needs to be addressed as part of the nation's efforts to reduce energy consumption. The objective of this research was to develop a decision support model for determining the need to improve multi-family housing complexes. In this research, 1664 cases located in Seoul were selected for model development. The research team collected the characteristics and electricity energy consumption data of these projects in 2009-2010. The following were carried out in this research: (i) using the Decision Tree, multi-family housing complexes were clustered based on their electricity energy consumption; (ii) using Case-Based Reasoning, similar cases were retrieved from the same cluster; and (iii) using a combination of Multiple Regression Analysis, Artificial Neural Network, and Genetic Algorithm, the prediction performance of the developed model was improved. The results of this research can be used as follows: (i) as basic research data for continuously managing several energy consumption data of multi-family housing complexes; (ii) as advanced research data for predicting energy consumption based on the project characteristics; (iii) as practical research data for selecting the most optimal multi-family housing complex with the most potential in terms of energy savings; and (iv) as consistent and objective criteria for incentives and penalties. Copyright © 2012 Elsevier Ltd. All rights reserved.
Harris, Daniel L; Rovere, Alessio; Casella, Elisa; Power, Hannah; Canavesio, Remy; Collin, Antoine; Pomeroy, Andrew; Webster, Jody M; Parravicini, Valeriano
2018-02-01
Coral reefs are diverse ecosystems that support millions of people worldwide by providing coastal protection from waves. Climate change and human impacts are leading to degraded coral reefs and to rising sea levels, posing concerns for the protection of tropical coastal regions in the near future. We use a wave dissipation model calibrated with empirical wave data to calculate the future increase of back-reef wave height. We show that, in the near future, the structural complexity of coral reefs is more important than sea-level rise in determining the coastal protection provided by coral reefs from average waves. We also show that a significant increase in average wave heights could occur at present sea level if there is sustained degradation of benthic structural complexity. Our results highlight that maintaining the structural complexity of coral reefs is key to ensure coastal protection on tropical coastlines in the future.
Boros, Eszter; Srinivas, Raja; Kim, Hee -Kyung; ...
2017-04-11
Aqua ligands can undergo rapid internal rotation about the M-O bond. For magnetic resonance contrast agents, this rotation results in diminished relaxivity. Herein, we show that an intramolecular hydrogen bond to the aqua ligand can reduce this internal rotation and increase relaxivity. Molecular modeling was used to design a series of four Gd complexes capable of forming an intramolecular H-bond to the coordinated water ligand, and these complexes had anomalously high relaxivities compared to similar complexes lacking a H-bond acceptor. Molecular dynamics simulations supported the formation of a stable intramolecular H-bond, while alternative hypotheses that could explain the higher relaxivitymore » were systematically ruled out. Finally, intramolecular H-bonding represents a useful strategy to limit internal water rotational motion and increase relaxivity of Gd complexes.« less
Harris, Daniel L.; Rovere, Alessio; Casella, Elisa; Power, Hannah; Canavesio, Remy; Collin, Antoine; Pomeroy, Andrew; Webster, Jody M.; Parravicini, Valeriano
2018-01-01
Coral reefs are diverse ecosystems that support millions of people worldwide by providing coastal protection from waves. Climate change and human impacts are leading to degraded coral reefs and to rising sea levels, posing concerns for the protection of tropical coastal regions in the near future. We use a wave dissipation model calibrated with empirical wave data to calculate the future increase of back-reef wave height. We show that, in the near future, the structural complexity of coral reefs is more important than sea-level rise in determining the coastal protection provided by coral reefs from average waves. We also show that a significant increase in average wave heights could occur at present sea level if there is sustained degradation of benthic structural complexity. Our results highlight that maintaining the structural complexity of coral reefs is key to ensure coastal protection on tropical coastlines in the future. PMID:29503866
Experimentally modeling stochastic processes with less memory by the use of a quantum processor
Palsson, Matthew S.; Gu, Mile; Ho, Joseph; Wiseman, Howard M.; Pryde, Geoff J.
2017-01-01
Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process’ statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of Cq = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems. PMID:28168218
NASA Astrophysics Data System (ADS)
Georgiou, K.; Abramoff, R. Z.; Harte, J.; Riley, W. J.; Torn, M. S.
2016-12-01
As global temperatures and atmospheric CO2 concentrations continue to increase, soil microbial activity and decomposition of soil organic matter (SOM) are expected to follow suit, potentially limiting soil carbon storage. Traditional global- and ecosystem-scale models simulate SOM decomposition using linear kinetics, which are inherently unable to reproduce carbon-concentration feedbacks, such as priming of native SOM at elevated CO2 concentrations. Recent studies using nonlinear microbial models of SOM decomposition seek to capture these interactions, and several groups are currently integrating these microbial models into Earth System Models (ESMs). However, despite their widespread ability to exhibit nonlinear responses, these models vary tremendously in complexity and, consequently, dynamics. In this study, we explore, both analytically and numerically, the emergent oscillatory behavior and insensitivity of SOM stocks to carbon inputs that have been deemed `unrealistic' in recent microbial models. We discuss the sources of instability in four models of varying complexity, by sequentially reducing complexity of a detailed model that includes microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We also present an alternative representation of microbial turnover that limits population sizes and, thus, reduces oscillations. We compare these models to several long-term carbon input manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that traditional linear and nonlinear models cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures, and that modifying microbial turnover results in more realistic predictions. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in ESMs.
NASA Astrophysics Data System (ADS)
Mesick, S.; Weathers, K. W.
2017-12-01
Data complexity can be seen as a continuum from complex to simple. The term data complexity refers to data collections that are disorganized, poorly documented, and generally do not follow best data management practices. Complex data collections are challenging and expensive to manage. Simplified collections readily support automated archival processes, enhanced discovery and data access, as well as production of services that make data easier to reuse. In this session, NOAA NCEI scientific data stewards will discuss the data complexity continuum. This talk will explore data simplification concepts, methods, and tools that data managers can employ which may offer more control over data management costs and processes, while achieving policy goals for open data access and ready reuse. Topics will include guidance for data managers on best allocation of limited data management resources; models for partnering with NCEI to accomplish shared data management goals; and will demonstrate through case studies the benefits of investing in documentation, accessibility, and services to increase data value and return on investment.
The Southern Ocean in the Coupled Model Intercomparison Project phase 5
Meijers, A. J. S.
2014-01-01
The Southern Ocean is an important part of the global climate system, but its complex coupled nature makes both its present state and its response to projected future climate forcing difficult to model. Clear trends in wind, sea-ice extent and ocean properties emerged from multi-model intercomparison in the Coupled Model Intercomparison Project phase 3 (CMIP3). Here, we review recent analyses of the historical and projected wind, sea ice, circulation and bulk properties of the Southern Ocean in the updated Coupled Model Intercomparison Project phase 5 (CMIP5) ensemble. Improvements to the models include higher resolutions, more complex and better-tuned parametrizations of ocean mixing, and improved biogeochemical cycles and atmospheric chemistry. CMIP5 largely reproduces the findings of CMIP3, but with smaller inter-model spreads and biases. By the end of the twenty-first century, mid-latitude wind stresses increase and shift polewards. All water masses warm, and intermediate waters freshen, while bottom waters increase in salinity. Surface mixed layers shallow, warm and freshen, whereas sea ice decreases. The upper overturning circulation intensifies, whereas bottom water formation is reduced. Significant disagreement exists between models for the response of the Antarctic Circumpolar Current strength, for reasons that are as yet unclear. PMID:24891395
Reed, H; Leckey, Cara A C; Dick, A; Harvey, G; Dobson, J
2018-01-01
Ultrasonic damage detection and characterization is commonly used in nondestructive evaluation (NDE) of aerospace composite components. In recent years there has been an increased development of guided wave based methods. In real materials and structures, these dispersive waves result in complicated behavior in the presence of complex damage scenarios. Model-based characterization methods utilize accurate three dimensional finite element models (FEMs) of guided wave interaction with realistic damage scenarios to aid in defect identification and classification. This work describes an inverse solution for realistic composite damage characterization by comparing the wavenumber-frequency spectra of experimental and simulated ultrasonic inspections. The composite laminate material properties are first verified through a Bayesian solution (Markov chain Monte Carlo), enabling uncertainty quantification surrounding the characterization. A study is undertaken to assess the efficacy of the proposed damage model and comparative metrics between the experimental and simulated output. The FEM is then parameterized with a damage model capable of describing the typical complex damage created by impact events in composites. The damage is characterized through a transdimensional Markov chain Monte Carlo solution, enabling a flexible damage model capable of adapting to the complex damage geometry investigated here. The posterior probability distributions of the individual delamination petals as well as the overall envelope of the damage site are determined. Copyright © 2017 Elsevier B.V. All rights reserved.
Lado, Bettina; Matus, Ivan; Rodríguez, Alejandra; Inostroza, Luis; Poland, Jesse; Belzile, François; del Pozo, Alejandro; Quincke, Martín; Castro, Marina; von Zitzewitz, Jarislav
2013-01-01
In crop breeding, the interest of predicting the performance of candidate cultivars in the field has increased due to recent advances in molecular breeding technologies. However, the complexity of the wheat genome presents some challenges for applying new technologies in molecular marker identification with next-generation sequencing. We applied genotyping-by-sequencing, a recently developed method to identify single-nucleotide polymorphisms, in the genomes of 384 wheat (Triticum aestivum) genotypes that were field tested under three different water regimes in Mediterranean climatic conditions: rain-fed only, mild water stress, and fully irrigated. We identified 102,324 single-nucleotide polymorphisms in these genotypes, and the phenotypic data were used to train and test genomic selection models intended to predict yield, thousand-kernel weight, number of kernels per spike, and heading date. Phenotypic data showed marked spatial variation. Therefore, different models were tested to correct the trends observed in the field. A mixed-model using moving-means as a covariate was found to best fit the data. When we applied the genomic selection models, the accuracy of predicted traits increased with spatial adjustment. Multiple genomic selection models were tested, and a Gaussian kernel model was determined to give the highest accuracy. The best predictions between environments were obtained when data from different years were used to train the model. Our results confirm that genotyping-by-sequencing is an effective tool to obtain genome-wide information for crops with complex genomes, that these data are efficient for predicting traits, and that correction of spatial variation is a crucial ingredient to increase prediction accuracy in genomic selection models. PMID:24082033
Automated reverse engineering of nonlinear dynamical systems
Bongard, Josh; Lipson, Hod
2007-01-01
Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated “reverse engineering” approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future. PMID:17553966
Automated reverse engineering of nonlinear dynamical systems.
Bongard, Josh; Lipson, Hod
2007-06-12
Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated "reverse engineering" approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future.
On the Way to Appropriate Model Complexity
NASA Astrophysics Data System (ADS)
Höge, M.
2016-12-01
When statistical models are used to represent natural phenomena they are often too simple or too complex - this is known. But what exactly is model complexity? Among many other definitions, the complexity of a model can be conceptualized as a measure of statistical dependence between observations and parameters (Van der Linde, 2014). However, several issues remain when working with model complexity: A unique definition for model complexity is missing. Assuming a definition is accepted, how can model complexity be quantified? How can we use a quantified complexity to the better of modeling? Generally defined, "complexity is a measure of the information needed to specify the relationships between the elements of organized systems" (Bawden & Robinson, 2015). The complexity of a system changes as the knowledge about the system changes. For models this means that complexity is not a static concept: With more data or higher spatio-temporal resolution of parameters, the complexity of a model changes. There are essentially three categories into which all commonly used complexity measures can be classified: (1) An explicit representation of model complexity as "Degrees of freedom" of a model, e.g. effective number of parameters. (2) Model complexity as code length, a.k.a. "Kolmogorov complexity": The longer the shortest model code, the higher its complexity (e.g. in bits). (3) Complexity defined via information entropy of parametric or predictive uncertainty. Preliminary results show that Bayes theorem allows for incorporating all parts of the non-static concept of model complexity like data quality and quantity or parametric uncertainty. Therefore, we test how different approaches for measuring model complexity perform in comparison to a fully Bayesian model selection procedure. Ultimately, we want to find a measure that helps to assess the most appropriate model.
Schoolmaster, Donald; Stagg, Camille L.
2018-01-01
A trade-off between competitive ability and stress tolerance has been hypothesized and empirically supported to explain the zonation of species across stress gradients for a number of systems. Since stress often reduces plant productivity, one might expect a pattern of decreasing productivity across the zones of the stress gradient. However, this pattern is often not observed in coastal wetlands that show patterns of zonation along a salinity gradient. To address the potentially complex relationship between stress, zonation, and productivity in coastal wetlands, we developed a model of plant biomass as a function of resource competition and salinity stress. Analysis of the model confirms the conventional wisdom that a trade-off between competitive ability and stress tolerance is a necessary condition for zonation. It also suggests that a negative relationship between salinity and production can be overcome if (1) the supply of the limiting resource increases with greater salinity stress or (2) nutrient use efficiency increases with increasing salinity. We fit the equilibrium solution of the dynamic model to data from Louisiana coastal wetlands to test its ability to explain patterns of production across the landscape gradient and derive predictions that could be tested with independent data. We found support for a number of the model predictions, including patterns of decreasing competitive ability and increasing nutrient use efficiency across a gradient from freshwater to saline wetlands. In addition to providing a quantitative framework to support the mechanistic hypotheses of zonation, these results suggest that this simple model is a useful platform to further build upon, simulate and test mechanistic hypotheses of more complex patterns and phenomena in coastal wetlands.
Effects of additional data on Bayesian clustering.
Yamazaki, Keisuke
2017-10-01
Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Robust Fixed-Structure Controller Synthesis
NASA Technical Reports Server (NTRS)
Corrado, Joseph R.; Haddad, Wassim M.; Gupta, Kajal (Technical Monitor)
2000-01-01
The ability to develop an integrated control system design methodology for robust high performance controllers satisfying multiple design criteria and real world hardware constraints constitutes a challenging task. The increasingly stringent performance specifications required for controlling such systems necessitates a trade-off between controller complexity and robustness. The principle challenge of the minimal complexity robust control design is to arrive at a tractable control design formulation in spite of the extreme complexity of such systems. Hence, design of minimal complexitY robust controllers for systems in the face of modeling errors has been a major preoccupation of system and control theorists and practitioners for the past several decades.
Complex adaptive systems: concept analysis.
Holden, Lela M
2005-12-01
The aim of this paper is to explicate the concept of complex adaptive systems through an analysis that provides a description, antecedents, consequences, and a model case from the nursing and health care literature. Life is more than atoms and molecules--it is patterns of organization. Complexity science is the latest generation of systems thinking that investigates patterns and has emerged from the exploration of the subatomic world and quantum physics. A key component of complexity science is the concept of complex adaptive systems, and active research is found in many disciplines--from biology to economics to health care. However, the research and literature related to these appealing topics have generated confusion. A thorough explication of complex adaptive systems is needed. A modified application of the methods recommended by Walker and Avant for concept analysis was used. A complex adaptive system is a collection of individual agents with freedom to act in ways that are not always totally predictable and whose actions are interconnected. Examples include a colony of termites, the financial market, and a surgical team. It is often referred to as chaos theory, but the two are not the same. Chaos theory is actually a subset of complexity science. Complexity science offers a powerful new approach--beyond merely looking at clinical processes and the skills of healthcare professionals. The use of complex adaptive systems as a framework is increasing for a wide range of scientific applications, including nursing and healthcare management research. When nursing and other healthcare managers focus on increasing connections, diversity, and interactions they increase information flow and promote creative adaptation referred to as self-organization. Complexity science builds on the rich tradition in nursing that views patients and nursing care from a systems perspective.
Schaefer, Kristina N.; Williams, Clara E.; Roberts, David M.; McKay, Daniel J.
2018-01-01
Wnt signaling provides a paradigm for cell-cell signals that regulate embryonic development and stem cell homeostasis and are inappropriately activated in cancers. The tumor suppressors APC and Axin form the core of the multiprotein destruction complex, which targets the Wnt-effector beta-catenin for phosphorylation, ubiquitination and destruction. Based on earlier work, we hypothesize that the destruction complex is a supramolecular entity that self-assembles by Axin and APC polymerization, and that regulating assembly and stability of the destruction complex underlie its function. We tested this hypothesis in Drosophila embryos, a premier model of Wnt signaling. Combining biochemistry, genetic tools to manipulate Axin and APC2 levels, advanced imaging and molecule counting, we defined destruction complex assembly, stoichiometry, and localization in vivo, and its downregulation in response to Wnt signaling. Our findings challenge and revise current models of destruction complex function. Endogenous Axin and APC2 proteins and their antagonist Dishevelled accumulate at roughly similar levels, suggesting competition for binding may be critical. By expressing Axin:GFP at near endogenous levels we found that in the absence of Wnt signals, Axin and APC2 co-assemble into large cytoplasmic complexes containing tens to hundreds of Axin proteins. Wnt signals trigger recruitment of these to the membrane, while cytoplasmic Axin levels increase, suggesting altered assembly/disassembly. Glycogen synthase kinase3 regulates destruction complex recruitment to the membrane and release of Armadillo/beta-catenin from the destruction complex. Manipulating Axin or APC2 levels had no effect on destruction complex activity when Wnt signals were absent, but, surprisingly, had opposite effects on the destruction complex when Wnt signals were present. Elevating Axin made the complex more resistant to inactivation, while elevating APC2 levels enhanced inactivation. Our data suggest both absolute levels and the ratio of these two core components affect destruction complex function, supporting models in which competition among Axin partners determines destruction complex activity. PMID:29641560
Complex coacervation of supercharged proteins with polyelectrolytes.
Obermeyer, Allie C; Mills, Carolyn E; Dong, Xue-Hui; Flores, Romeo J; Olsen, Bradley D
2016-04-21
Complexation of proteins with polyelectrolytes or block copolymers can lead to phase separation to generate a coacervate phase or self-assembly of coacervate core micelles. However, many proteins do not coacervate at conditions near neutral pH and physiological ionic strength. Here, protein supercharging is used to systematically explore the effect of protein charge on the complex coacervation with polycations. Four model proteins were anionically supercharged to varying degrees as quantified by mass spectrometry. Proteins phase separated with strong polycations when the ratio of negatively charged residues to positively charged residues on the protein (α) was greater than 1.1-1.2. Efficient partitioning of the protein into the coacervate phase required larger α (1.5-2.0). The preferred charge ratio for coacervation was shifted away from charge symmetry for three of the four model proteins and indicated an excess of positive charge in the coacervate phase. The composition of protein and polymer in the coacervate phase was determined using fluorescently labeled components, revealing that several of the coacervates likely have both induced charging and a macromolecular charge imbalance. The model proteins were also encapsulated in complex coacervate core micelles and micelles formed when the protein charge ratio α was greater than 1.3-1.4. Small angle neutron scattering and transmission electron microscopy showed that the micelles were spherical. The stability of the coacervate phase in both the bulk and micelles improved to increased ionic strength as the net charge on the protein increased. The micelles were also stable to dehydration and elevated temperatures.
Condition-based diagnosis of mechatronic systems using a fractional calculus approach
NASA Astrophysics Data System (ADS)
Gutiérrez-Carvajal, Ricardo Enrique; Flávio de Melo, Leonimer; Maurício Rosário, João; Tenreiro Machado, J. A.
2016-07-01
While fractional calculus (FC) is as old as integer calculus, its application has been mainly restricted to mathematics. However, many real systems are better described using FC equations than with integer models. FC is a suitable tool for describing systems characterised by their fractal nature, long-term memory and chaotic behaviour. It is a promising methodology for failure analysis and modelling, since the behaviour of a failing system depends on factors that increase the model's complexity. This paper explores the proficiency of FC in modelling complex behaviour by tuning only a few parameters. This work proposes a novel two-step strategy for diagnosis, first modelling common failure conditions and, second, by comparing these models with real machine signals and using the difference to feed a computational classifier. Our proposal is validated using an electrical motor coupled with a mechanical gear reducer.
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
Development of a Three-Dimensional, Unstructured Material Response Design Tool
NASA Technical Reports Server (NTRS)
Schulz, Joseph C.; Stern, Eric C.; Muppidi, Suman; Palmer, Grant E.; Schroeder, Olivia
2017-01-01
A preliminary verification and validation of a new material response model is presented. This model, Icarus, is intended to serve as a design tool for the thermal protection systems of re-entry vehicles. Currently, the capability of the model is limited to simulating the pyrolysis of a material as a result of the radiative and convective surface heating imposed on the material from the surrounding high enthalpy gas. Since the major focus behind the development of Icarus has been model extensibility, the hope is that additional physics can be quickly added. This extensibility is critical since thermal protection systems are becoming increasing complex, e.g. woven carbon polymers. Additionally, as a three-dimensional, unstructured, finite-volume model, Icarus is capable of modeling complex geometries. In this paper, the mathematical and numerical formulation is presented followed by a discussion of the software architecture and some preliminary verification and validation studies.
Tailored Codes for Small Quantum Memories
NASA Astrophysics Data System (ADS)
Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.
2017-12-01
We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison in the observed error model in a recent seven-qubit trapped ion experiment.
Learning outcomes as a tool to assess progression.
Harden, Ronald M
2007-09-01
In the move to outcome-based education (OBE) much of the attention has focussed on the exit learning outcomes-the outcomes expected of a student at the end of a course of studies. It is important also to plan for and monitor students progression to the exit outcomes. A model is described for considering this progression through the phases of undergraduate education. Four dimensions are included-increasing breadth, increasing depth, increasing utility and increasing proficiency. The model can also be used to develop a blueprint for a more seamless link between undergraduate education, postgraduate training and continuing professional development. The progression model recognises the complexities of medical practice and medical education. It supports the move to student-centred and adaptive approaches to learning in an OBE environment.
Model annotation for synthetic biology: automating model to nucleotide sequence conversion
Misirli, Goksel; Hallinan, Jennifer S.; Yu, Tommy; Lawson, James R.; Wimalaratne, Sarala M.; Cooling, Michael T.; Wipat, Anil
2011-01-01
Motivation: The need for the automated computational design of genetic circuits is becoming increasingly apparent with the advent of ever more complex and ambitious synthetic biology projects. Currently, most circuits are designed through the assembly of models of individual parts such as promoters, ribosome binding sites and coding sequences. These low level models are combined to produce a dynamic model of a larger device that exhibits a desired behaviour. The larger model then acts as a blueprint for physical implementation at the DNA level. However, the conversion of models of complex genetic circuits into DNA sequences is a non-trivial undertaking due to the complexity of mapping the model parts to their physical manifestation. Automating this process is further hampered by the lack of computationally tractable information in most models. Results: We describe a method for automatically generating DNA sequences from dynamic models implemented in CellML and Systems Biology Markup Language (SBML). We also identify the metadata needed to annotate models to facilitate automated conversion, and propose and demonstrate a method for the markup of these models using RDF. Our algorithm has been implemented in a software tool called MoSeC. Availability: The software is available from the authors' web site http://research.ncl.ac.uk/synthetic_biology/downloads.html. Contact: anil.wipat@ncl.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21296753
Calibration of Automatically Generated Items Using Bayesian Hierarchical Modeling.
ERIC Educational Resources Information Center
Johnson, Matthew S.; Sinharay, Sandip
For complex educational assessments, there is an increasing use of "item families," which are groups of related items. However, calibration or scoring for such an assessment requires fitting models that take into account the dependence structure inherent among the items that belong to the same item family. C. Glas and W. van der Linden…
Analyzing Change in Students' Gene-to-Evolution Models in College-Level Introductory Biology
ERIC Educational Resources Information Center
Dauer, Joseph T.; Momsen, Jennifer L.; Speth, Elena Bray; Makohon-Moore, Sasha C.; Long, Tammy M.
2013-01-01
Research in contemporary biology has become increasingly complex and organized around understanding biological processes in the context of systems. To better reflect the ways of thinking required for learning about systems, we developed and implemented a pedagogical approach using box-and-arrow models (similar to concept maps) as a foundational…
Technique for ranking potential predictor layers for use in remote sensing analysis
Andrew Lister; Mike Hoppus; Rachel Riemann
2004-01-01
Spatial modeling using GIS-based predictor layers often requires that extraneous predictors be culled before conducting analysis. In some cases, using extraneous predictor layers might improve model accuracy but at the expense of increasing complexity and interpretability. In other cases, using extraneous layers can dilute the relationship between predictors and target...
Spatial scaling and multi-model inference in landscape genetics: Martes americana in northern Idaho
Tzeidle N. Wasserman; Samuel A. Cushman; Michael K. Schwartz; David O. Wallin
2010-01-01
Individual-based analyses relating landscape structure to genetic distances across complex landscapes enable rigorous evaluation of multiple alternative hypotheses linking landscape structure to gene flow. We utilize two extensions to increase the rigor of the individual-based causal modeling approach to inferring relationships between landscape patterns and gene flow...
ERIC Educational Resources Information Center
Hardeman, Wendy; Sutton, Stephen; Griffin, Simon; Johnston, Marie; White, Anthony; Wareham, Nicholas J.; Kinmonth, Ann Louise
2005-01-01
Theory-based intervention programmes to support health-related behaviour change aim to increase health impact and improve understanding of mechanisms of behaviour change. However, the science of intervention development remains at an early stage. We present a causal modelling approach to developing complex interventions for evaluation in…
Edwards, Stefan M.; Sørensen, Izel F.; Sarup, Pernille; Mackay, Trudy F. C.; Sørensen, Peter
2016-01-01
Predicting individual quantitative trait phenotypes from high-resolution genomic polymorphism data is important for personalized medicine in humans, plant and animal breeding, and adaptive evolution. However, this is difficult for populations of unrelated individuals when the number of causal variants is low relative to the total number of polymorphisms and causal variants individually have small effects on the traits. We hypothesized that mapping molecular polymorphisms to genomic features such as genes and their gene ontology categories could increase the accuracy of genomic prediction models. We developed a genomic feature best linear unbiased prediction (GFBLUP) model that implements this strategy and applied it to three quantitative traits (startle response, starvation resistance, and chill coma recovery) in the unrelated, sequenced inbred lines of the Drosophila melanogaster Genetic Reference Panel. Our results indicate that subsetting markers based on genomic features increases the predictive ability relative to the standard genomic best linear unbiased prediction (GBLUP) model. Both models use all markers, but GFBLUP allows differential weighting of the individual genetic marker relationships, whereas GBLUP weighs the genetic marker relationships equally. Simulation studies show that it is possible to further increase the accuracy of genomic prediction for complex traits using this model, provided the genomic features are enriched for causal variants. Our GFBLUP model using prior information on genomic features enriched for causal variants can increase the accuracy of genomic predictions in populations of unrelated individuals and provides a formal statistical framework for leveraging and evaluating information across multiple experimental studies to provide novel insights into the genetic architecture of complex traits. PMID:27235308
Zhang, Yong; Green, Christopher T.; Tick, Geoffrey R.
2015-01-01
This study evaluates the role of the Peclet number as affected by molecular diffusion in transient anomalous transport, which is one of the major knowledge gaps in anomalous transport, by combining Monte Carlo simulations and stochastic model analysis. Two alluvial settings containing either short- or long-connected hydrofacies are generated and used as media for flow and transport modeling. Numerical experiments show that 1) the Peclet number affects both the duration of the power-law segment of tracer breakthrough curves (BTCs) and the transition rate from anomalous to Fickian transport by determining the solute residence time for a given low-permeability layer, 2) mechanical dispersion has a limited contribution to the anomalous characteristics of late-time transport as compared to molecular diffusion due to an almost negligible velocity in floodplain deposits, and 3) the initial source dimensions only enhance the power-law tail of the BTCs at short travel distances. A tempered stable stochastic (TSS) model is then applied to analyze the modeled transport. Applications show that the time-nonlocal parameters in the TSS model relate to the Peclet number, Pe. In particular, the truncation parameter in the TSS model increases nonlinearly with a decrease in Pe due to the decrease of the mean residence time, and the capacity coefficient increases with an increase in molecular diffusion which is probably due to the increase in the number of immobile particles. The above numerical experiments and stochastic analysis therefore reveal that the Peclet number as affected by molecular diffusion controls transient anomalous transport in alluvial aquifer–aquitard complexes.
A Framework of Complex Adaptive Systems: Parents As Partners in the Neonatal Intensive Care Unit.
DʼAgata, Amy L; McGrath, Jacqueline M
2016-01-01
Advances in neonatal care are allowing for increased infant survival; however, neurodevelopmental complications continue. Using a complex adaptive system framework, a broad analysis of the network of agents most influential to vulnerable infants in the neonatal intensive care unit (NICU) is presented: parent, nurse, and organization. By exploring these interconnected relationships and the emergent behaviors, a model of care that increases parental caregiving in the NICU is proposed. Supportive parent caregiving early in an infant's NICU stay has the potential for more sensitive caregiving and enhanced opportunities for attachment, perhaps positively impacting neurodevelopment.
C++, objected-oriented programming, and astronomical data models
NASA Technical Reports Server (NTRS)
Farris, A.
1992-01-01
Contemporary astronomy is characterized by increasingly complex instruments and observational techniques, higher data collection rates, and large data archives, placing severe stress on software analysis systems. The object-oriented paradigm represents a significant new approach to software design and implementation that holds great promise for dealing with this increased complexity. The basic concepts of this approach will be characterized in contrast to more traditional procedure-oriented approaches. The fundamental features of objected-oriented programming will be discussed from a C++ programming language perspective, using examples familiar to astronomers. This discussion will focus on objects, classes and their relevance to the data type system; the principle of information hiding; and the use of inheritance to implement generalization/specialization relationships. Drawing on the object-oriented approach, features of a new database model to support astronomical data analysis will be presented.
The influence of gyroscopic forces on the dynamic behavior and flutter of rotating blades
NASA Technical Reports Server (NTRS)
Sisto, F.; Chang, A. T.
1983-01-01
The structural dynamics of a cantilever turbomachine blade mounted on a spinning and precessing rotor are investigated. Both stability and forced vibration are considered with a blade model that increases in complexity (and verisimilitude) from a spring-restrained point mass, to a uniform cantilever, to a twisted uniform cantilever turbomachine blade mounted on a spinning and precessing rotor are investigated. Both stability and forced vibration are considered with a blade model that increases in complexity (and verisimilitude) from a spring-restrained point mass, to a uniform cantilever, to a twisted uniform cantilever, to a tapered twisted cantilever of arbitrary cross-section. In every instance the formulation is from first principles using a finite element based on beam theory. Both ramp-type and periodic-type precessional angular displacements are considered. In concluding, forced vibrating and flutter are studied using the final and most sophisticated structural model. The analysis of stability is presented and a number of numerical examples are worked out.
The dynamic three-dimensional organization of the diploid yeast genome
Kim, Seungsoo; Liachko, Ivan; Brickner, Donna G; Cook, Kate; Noble, William S; Brickner, Jason H; Shendure, Jay; Dunham, Maitreya J
2017-01-01
The budding yeast Saccharomyces cerevisiae is a long-standing model for the three-dimensional organization of eukaryotic genomes. However, even in this well-studied model, it is unclear how homolog pairing in diploids or environmental conditions influence overall genome organization. Here, we performed high-throughput chromosome conformation capture on diverged Saccharomyces hybrid diploids to obtain the first global view of chromosome conformation in diploid yeasts. After controlling for the Rabl-like orientation using a polymer model, we observe significant homolog proximity that increases in saturated culture conditions. Surprisingly, we observe a localized increase in homologous interactions between the HAS1-TDA1 alleles specifically under galactose induction and saturated growth. This pairing is accompanied by relocalization to the nuclear periphery and requires Nup2, suggesting a role for nuclear pore complexes. Together, these results reveal that the diploid yeast genome has a dynamic and complex 3D organization. DOI: http://dx.doi.org/10.7554/eLife.23623.001 PMID:28537556
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Austin; Chakraborty, Sudipta; Wang, Dexin
This paper presents a cyber-physical testbed, developed to investigate the complex interactions between emerging microgrid technologies such as grid-interactive power sources, control systems, and a wide variety of communication platforms and bandwidths. The cyber-physical testbed consists of three major components for testing and validation: real time models of a distribution feeder model with microgrid assets that are integrated into the National Renewable Energy Laboratory's (NREL) power hardware-in-the-loop (PHIL) platform; real-time capable network-simulator-in-the-loop (NSIL) models; and physical hardware including inverters and a simple system controller. Several load profiles and microgrid configurations were tested to examine the effect on system performance withmore » increasing channel delays and router processing delays in the network simulator. Testing demonstrated that the controller's ability to maintain a target grid import power band was severely diminished with increasing network delays and laid the foundation for future testing of more complex cyber-physical systems.« less
Computational modelling of genome-scale metabolic networks and its application to CHO cell cultures.
Rejc, Živa; Magdevska, Lidija; Tršelič, Tilen; Osolin, Timotej; Vodopivec, Rok; Mraz, Jakob; Pavliha, Eva; Zimic, Nikolaj; Cvitanović, Tanja; Rozman, Damjana; Moškon, Miha; Mraz, Miha
2017-09-01
Genome-scale metabolic models (GEMs) have become increasingly important in recent years. Currently, GEMs are the most accurate in silico representation of the genotype-phenotype link. They allow us to study complex networks from the systems perspective. Their application may drastically reduce the amount of experimental and clinical work, improve diagnostic tools and increase our understanding of complex biological phenomena. GEMs have also demonstrated high potential for the optimisation of bio-based production of recombinant proteins. Herein, we review the basic concepts, methods, resources and software tools used for the reconstruction and application of GEMs. We overview the evolution of the modelling efforts devoted to the metabolism of Chinese Hamster Ovary (CHO) cells. We present a case study on CHO cell metabolism under different amino acid depletions. This leads us to the identification of the most influential as well as essential amino acids in selected CHO cell lines. Copyright © 2017 Elsevier Ltd. All rights reserved.
An analysis of electrical conductivity model in saturated porous media
NASA Astrophysics Data System (ADS)
Cai, J.; Wei, W.; Qin, X.; Hu, X.
2017-12-01
Electrical conductivity of saturated porous media has numerous applications in many fields. In recent years, the number of theoretical methods to model electrical conductivity of complex porous media has dramatically increased. Nevertheless, the process of modeling the spatial conductivity distributed function continues to present challenges when these models used in reservoirs, particularly in porous media with strongly heterogeneous pore-space distributions. Many experiments show a more complex distribution of electrical conductivity data than the predictions derived from the experiential model. Studies have observed anomalously-high electrical conductivity of some low-porosity (tight) formations compared to more- porous reservoir rocks, which indicates current flow in porous media is complex and difficult to predict. Moreover, the change of electrical conductivity depends not only on the pore volume fraction but also on several geometric properties of the more extensive pore network, including pore interconnection and tortuosity. In our understanding of electrical conductivity models in porous media, we study the applicability of several well-known methods/theories to electrical characteristics of porous rocks as a function of pore volume, tortuosity and interconnection, to estimate electrical conductivity based on the micro-geometrical properties of rocks. We analyze the state of the art of scientific knowledge and practice for modeling porous structural systems, with the purpose of identifying current limitations and defining a blueprint for future modeling advances. We compare conceptual descriptions of electrical current flow processes in pore space considering several distinct modeling approaches. Approaches to obtaining more reasonable electrical conductivity models are discussed. Experiments suggest more complex relationships between electrical conductivity and porosity than experiential models, particularly in low-porosity formations. However, the available theoretical models combined with simulations do provide insight to how microscale physics affects macroscale electrical conductivity in porous media.
NASA Astrophysics Data System (ADS)
Swallow, B.; Rigby, M. L.; Rougier, J.; Manning, A.; Thomson, D.; Webster, H. N.; Lunt, M. F.; O'Doherty, S.
2016-12-01
In order to understand underlying processes governing environmental and physical phenomena, a complex mathematical model is usually required. However, there is an inherent uncertainty related to the parameterisation of unresolved processes in these simulators. Here, we focus on the specific problem of accounting for uncertainty in parameter values in an atmospheric chemical transport model. Systematic errors introduced by failing to account for these uncertainties have the potential to have a large effect on resulting estimates in unknown quantities of interest. One approach that is being increasingly used to address this issue is known as emulation, in which a large number of forward runs of the simulator are carried out, in order to approximate the response of the output to changes in parameters. However, due to the complexity of some models, it is often unfeasible to run large numbers of training runs that is usually required for full statistical emulators of the environmental processes. We therefore present a simplified model reduction method for approximating uncertainties in complex environmental simulators without the need for very large numbers of training runs. We illustrate the method through an application to the Met Office's atmospheric transport model NAME. We show how our parameter estimation framework can be incorporated into a hierarchical Bayesian inversion, and demonstrate the impact on estimates of UK methane emissions, using atmospheric mole fraction data. We conclude that accounting for uncertainties in the parameterisation of complex atmospheric models is vital if systematic errors are to be minimized and all relevant uncertainties accounted for. We also note that investigations of this nature can prove extremely useful in highlighting deficiencies in the simulator that might otherwise be missed.
Kuzuya, Teiji; Katano, Yoshiaki; Nakano, Isao; Hirooka, Yoshiki; Itoh, Akihiro; Ishigami, Masatoshi; Hayashi, Kazuhiko; Honda, Takashi; Goto, Hidemi; Fujita, Yuko; Shikano, Rie; Muramatsu, Yuji; Bajotto, Gustavo; Tamura, Tomohiro; Tamura, Noriko; Shimomura, Yoshiharu
2008-08-15
The branched-chain alpha-keto acid dehydrogenase (BCKDH) complex is the most important regulatory enzyme in branched-chain amino acid (BCAA) catabolism. We examined the regulation of hepatic BCKDH complex activity in spontaneous type 2 diabetes Otsuka Long-Evans Tokushima Fatty (OLETF) rats and Zucker diabetic fatty rats. Hepatic BCKDH complex activity in these rats was significantly lower than in corresponding control rats. The amount of BCKDH complex in OLETF rats corresponded to the total activity of the complex. Activity and abundance of the bound form of BCKDH kinase, which is responsible for inactivation of the complex, showed an inverse correlation to BCKDH complex activity in OLETF rats. Dietary supplementation of 5% BCAAs for 10 weeks markedly increased BCKDH complex activity, and decreased the activity and bound form of BCKDH kinase in the rats. These results suggest that BCAA catabolism in type 2 diabetes is downregulated and enhanced by BCAA supplementation.
Virtual planning for craniomaxillofacial surgery--7 years of experience.
Adolphs, Nicolai; Haberl, Ernst-Johannes; Liu, Weichen; Keeve, Erwin; Menneking, Horst; Hoffmeister, Bodo
2014-07-01
Contemporary computer-assisted surgery systems more and more allow for virtual simulation of even complex surgical procedures with increasingly realistic predictions. Preoperative workflows are established and different commercially software solutions are available. Potential and feasibility of virtual craniomaxillofacial surgery as an additional planning tool was assessed retrospectively by comparing predictions and surgical results. Since 2006 virtual simulation has been performed in selected patient cases affected by complex craniomaxillofacial disorders (n = 8) in addition to standard surgical planning based on patient specific 3d-models. Virtual planning could be performed for all levels of the craniomaxillofacial framework within a reasonable preoperative workflow. Simulation of even complex skeletal displacements corresponded well with the real surgical result and soft tissue simulation proved to be helpful. In combination with classic 3d-models showing the underlying skeletal pathology virtual simulation improved planning and transfer of craniomaxillofacial corrections. Additional work and expenses may be justified by increased possibilities of visualisation, information, instruction and documentation in selected craniomaxillofacial procedures. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Pediatric Care Coordination: Lessons Learned and Future Priorities.
Cady, Rhonda G; Looman, Wendy S; Lindeke, Linda L; LaPlante, Bonnie; Lundeen, Barbara; Seeley, Amanda; Kautto, Mary E
2015-09-30
A fundamental component of the medical home model is care coordination. In Minnesota, this model informed design and implementation of the state's health care home (HCH) model, a key element of statewide healthcare reform legislation. Children with medical complexity (CMC) often require care from multiple specialists and community resources. Coordinating this multi-faceted care within the HCH is challenging. This article describes the need for specialized models of care coordination for CMC. Two models of care coordination for CMC were developed to address this challenge. The TeleFamilies Model of Pediatric Care Coordination uses an advanced practice registered nurse care (APRN) coordinator embedded within an established HCH. The PRoSPer Model of Pediatric Care Coordination uses a registered nurse/social worker care coordinator team embedded within a specialty care system. We describe key findings from implementation of these models, and conclude with lessons learned. Replication of the models is encouraged to increase the evidence base for care coordination for the growing population of children with medical complexities.
Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo
2018-02-01
The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, 2 different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU, multicore and GPU and space adaptivity, multicore and GPU and space adaptivity and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs, 3D simulations on a ventricular mouse mesh, ie, complex geometry, sinus-rhythm, and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165 and 498. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.
Søreide, K; Thorsen, K; Søreide, J A
2015-02-01
Mortality prediction models for patients with perforated peptic ulcer (PPU) have not yielded consistent or highly accurate results. Given the complex nature of this disease, which has many non-linear associations with outcomes, we explored artificial neural networks (ANNs) to predict the complex interactions between the risk factors of PPU and death among patients with this condition. ANN modelling using a standard feed-forward, back-propagation neural network with three layers (i.e., an input layer, a hidden layer and an output layer) was used to predict the 30-day mortality of consecutive patients from a population-based cohort undergoing surgery for PPU. A receiver-operating characteristic (ROC) analysis was used to assess model accuracy. Of the 172 patients, 168 had their data included in the model; the data of 117 (70%) were used for the training set, and the data of 51 (39%) were used for the test set. The accuracy, as evaluated by area under the ROC curve (AUC), was best for an inclusive, multifactorial ANN model (AUC 0.90, 95% CIs 0.85-0.95; p < 0.001). This model outperformed standard predictive scores, including Boey and PULP. The importance of each variable decreased as the number of factors included in the ANN model increased. The prediction of death was most accurate when using an ANN model with several univariate influences on the outcome. This finding demonstrates that PPU is a highly complex disease for which clinical prognoses are likely difficult. The incorporation of computerised learning systems might enhance clinical judgments to improve decision making and outcome prediction.
A Guide to the Literature on Learning Graphical Models
NASA Technical Reports Server (NTRS)
Buntine, Wray L.; Friedland, Peter (Technical Monitor)
1994-01-01
This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and more generally, learning probabilistic graphical models. Because many problems in artificial intelligence, statistics and neural networks can be represented as a probabilistic graphical model, this area provides a unifying perspective on learning. This paper organizes the research in this area along methodological lines of increasing complexity.
Collaborative modelling: the future of computational neuroscience?
Davison, Andrew P
2012-01-01
Given the complexity of biological neural circuits and of their component cells and synapses, building and simulating robust, well-validated, detailed models increasingly surpasses the resources of an individual researcher or small research group. In this article, I will briefly review possible solutions to this problem, argue for open, collaborative modelling as the optimal solution for advancing neuroscience knowledge, and identify potential bottlenecks and possible solutions.
Analyzing a suitable elastic geomechanical model for Vaca Muerta Formation
NASA Astrophysics Data System (ADS)
Sosa Massaro, Agustin; Espinoza, D. Nicolas; Frydman, Marcelo; Barredo, Silvia; Cuervo, Sergio
2017-11-01
Accurate geomechanical evaluation of oil and gas reservoir rocks is important to provide design parameters for drilling, completion and predict production rates. In particular, shale reservoir rocks are geologically complex and heterogeneous. Wells need to be hydraulically fractured for stimulation and, in complex tectonic environments, it is to consider that rock fabric and in situ stress, strongly influence fracture propagation geometry. This article presents a combined wellbore-laboratory characterization of the geomechanical properties of a well in El Trapial/Curamched Field, over the Vaca Muerta Formation, located in the Neuquén Basin in Argentina. The study shows the results of triaxial tests with acoustic measurements in rock plugs from outcrops and field cores, and corresponding dynamic to static correlations considering various elastic models. The models, with increasing complexity, include the Isotropic Elastic Model (IEM), the Anisotropic Elastic Model (AEM) and the Detailed Anisotropic Elastic Model (DAEM). Each model shows advantages over the others. An IEM offers a quick overview, being easy to run without much detailed data for heterogeneous and anisotropic rocks. The DAEM requires significant amounts of data, time and a multidisciplinary team to arrive to a detailed model. Finally, an AEM suits well to an anisotropic and realistic rock without the need of massive amounts of data.
Beyond a series of security nets: Applying STAMP & STPA to port security
Williams, Adam D.
2015-11-17
Port security is an increasing concern considering the significant role of ports in global commerce and today’s increasingly complex threat environment. Current approaches to port security mirror traditional models of accident causality -- ‘a series of security nets’ based on component reliability and probabilistic assumptions. Traditional port security frameworks result in isolated and inconsistent improvement strategies. Recent work in engineered safety combines the ideas of hierarchy, emergence, control and communication into a new paradigm for understanding port security as an emergent complex system property. The ‘System-Theoretic Accident Model and Process (STAMP)’ is a new model of causality based on systemsmore » and control theory. The associated analysis process -- System Theoretic Process Analysis (STPA) -- identifies specific technical or procedural security requirements designed to work in coordination with (and be traceable to) overall port objectives. This process yields port security design specifications that can mitigate (if not eliminate) port security vulnerabilities related to an emphasis on component reliability, lack of coordination between port security stakeholders or economic pressures endemic in the maritime industry. As a result, this article aims to demonstrate how STAMP’s broader view of causality and complexity can better address the dynamic and interactive behaviors of social, organizational and technical components of port security.« less
Beyond a series of security nets: Applying STAMP & STPA to port security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Adam D.
Port security is an increasing concern considering the significant role of ports in global commerce and today’s increasingly complex threat environment. Current approaches to port security mirror traditional models of accident causality -- ‘a series of security nets’ based on component reliability and probabilistic assumptions. Traditional port security frameworks result in isolated and inconsistent improvement strategies. Recent work in engineered safety combines the ideas of hierarchy, emergence, control and communication into a new paradigm for understanding port security as an emergent complex system property. The ‘System-Theoretic Accident Model and Process (STAMP)’ is a new model of causality based on systemsmore » and control theory. The associated analysis process -- System Theoretic Process Analysis (STPA) -- identifies specific technical or procedural security requirements designed to work in coordination with (and be traceable to) overall port objectives. This process yields port security design specifications that can mitigate (if not eliminate) port security vulnerabilities related to an emphasis on component reliability, lack of coordination between port security stakeholders or economic pressures endemic in the maritime industry. As a result, this article aims to demonstrate how STAMP’s broader view of causality and complexity can better address the dynamic and interactive behaviors of social, organizational and technical components of port security.« less
The Impact of Neuroimmune Alterations in Autism Spectrum Disorder
Gottfried, Carmem; Bambini-Junior, Victorio; Francis, Fiona; Riesgo, Rudimar; Savino, Wilson
2015-01-01
Autism spectrum disorder (ASD) involves a complex interplay of both genetic and environmental risk factors, with immune alterations and synaptic connection deficiency in early life. In the past decade, studies of ASD have substantially increased, in both humans and animal models. Immunological imbalance (including autoimmunity) has been proposed as a major etiological component in ASD, taking into account increased levels of pro-inflammatory cytokines observed in postmortem brain from patients, as well as autoantibody production. Also, epidemiological studies have established a correlation of ASD with family history of autoimmune diseases; associations with major histocompatibility complex haplotypes and abnormal levels of immunological markers in the blood. Moreover, the use of animal models to study ASD is providing increasing information on the relationship between the immune system and the pathophysiology of ASD. Herein, we will discuss the accumulating literature for ASD, giving special attention to the relevant aspects of factors that may be related to the neuroimmune interface in the development of ASD, including changes in neuroplasticity. PMID:26441683
Tang, Kai; Escola Casas, Monica; Ooi, Gordon T H; Kaarsholm, Kamilla M S; Bester, Kai; Andersen, Henrik R
2017-05-01
The degradation of organic micropollutants in wastewater treatment is suspected to depend on co-degradation i.e. be dependent on concentrations of substrate. This complicates predicting and modelling their fate. The effect of humic acid, as a model for complex organic substrate, was investigated in relation to the biodegradation of pharmaceuticals by suspended biofilm carriers adapted to polishing effluent water from a tertiary sewage treatment plant. Twelve out of 22 investigated pharmaceuticals were significantly biodegradable. The biodegradation rate constants of ten of those compounds were increasing with increased humic acid concentrations. At the highest humic acid concentration (30mgC/L), the biodegradation rate constants were four times higher than the biodegradation rate constants without added humic acid. This shows that the presence of complex substrate stimulates degradation via a co-metabolism-like mechanism and competitive inhibition does not occur. Increases of rate constant per mgC/L are tentatively calculated. Copyright © 2017 Elsevier GmbH. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Increasing water use efficiency (WUE) is one of the oldest goals in agricultural sciences, yet it is still not fully understood and achieved due to the complexity of soil-weather-management interactions. System models that quantify these interactions are increasingly used for optimizing crop WUE, es...
Advanced Multivariate Inversion Techniques for High Resolution 3D Geophysical Modeling
2010-09-01
crustal structures. But short periods are difficult to measure, especially in tectonically and geologically complex areas. On the other hand, gravity...East Africa Rift System Knowledge of crustal and upper mantle structure is of importance for understanding East Africa’s geodynamic evolution and for...area with less lateral heterogeneity but great tectonic complexity. To increase the effectiveness of the technique in this region, we explore gravity
[A MODEL OF COMPREHENSIVE CARE FOR COMPLEX CHRONIC PATIENT. EXPERIENCE OF A TERRITORY].
Torres, Montserrat; Fabrellas, Núria; Solà, Montserrat; Rubio Merchán, Antonia; Camañes Garcia, Neus; Berlanga, Sofía
2015-03-01
The Increase in life expectancy has brought an increase in chronic diseases. The evolution of chronic disease is the cause of several organic and systemic dysfunctions, leading to physical and mental limitations that determine the need for some aid to perform basic vital tasks. Primary health care has a key role in the monitoring of fragility, chronicity, and complexity of population. However, in order to address properly high complexity diseases it is necessary to know and coordinate the different resources existing inside the territory. THE DEVELOPMENT OF THE MODEL FOR ACTION: THE IMPLEMENTATIONS OF A FUNCTIONAL UNIT. The Primary Health Care must ensure equity, accessibility, longitudinally, and continuity of care, bearing in mind that health outcomes must be optimal. There are several health care providers in the Delta del Llobregat SAP, so it was implemented a strategic plan focused on the coordination and/or the reconciliation of all the devices involved in the assistance in order to provide comprehensive attention to the patient. The patients included in this program were to be identified as CCP (Complex chronic Patient), in an evolved and tributary phase of intensive follow-up. CONCLUSIONS. The identification ofpatients listed as CCP and at clinical risk allows a comprehensive monitoring in order to prevent exacerbations and overuse of unscheduled hospital resources.
NASA Astrophysics Data System (ADS)
Simion (Ciuciu), Ana-Maria; Aprodu, Iuliana; Dumitrașcu, Loredana; Bahrim, Gabriela Elena; Alexe, Petru; Stănciuc, Nicoleta
2015-09-01
Bovine β-lactoglobulin is able to interact with different bioactive compounds, thus being an important candidate in the development of delivery systems with improved functionality. The heat induced changes in the β-lactoglobulin-oleic acid complex were examined by means of fluorescence spectroscopy and molecular modeling techniques. Fluorescence spectroscopy results indicated a rigid protein structure in the temperature range 25-70 °C, whereas at temperatures over 75 °C, the rearrangements of the polypeptide chains led to higher exposure of hydrophobic residues. The most significant increase of the accessible surface area with temperature increase was identified in case of Tyr99 and Tyr102. The phase diagram method indicated an all or none transition between two conformations. Due to conformational changes, no contact between Ile56 or Lys60 and the fatty acid could be identified at 85 °C, but new non-bonding interaction were established with Ile12 and Val15. The results obtained in this study provide important details about thermal induced changes in the conformation of β-lactoglobulin-oleic acid complex. Significant conformational changes were registered above 75 °C, suggesting the possibility of obtaining highly functional complexes between whey proteins and natural unsaturated fatty acids.
Rare earth element scavenging in seawater
NASA Astrophysics Data System (ADS)
Byrne, Robert H.; Kim, Ki-Hyun
1990-10-01
Examinations of rare earth element (REE) adsorption in seawater, using a variety of surface-types, indicated that, for most surfaces, light rare earth elements (LREEs) are preferentially adsorbed compared to the heavy rare earths (HREEs). Exceptions to this behavior were observed only for silica phases (glass surfaces, acid-cleaned diatomaceous earth, and synthetic SiO 2). The affinity of the rare earths for surfaces can be strongly affected by thin organic coatings. Glass surfaces which acquired an organic coating through immersion in Tampa Bay exhibited adsorptive behavior typical of organic-rich, rather than glass, surfaces. Models of rare earth distributions between seawater and carboxylate-rich surfaces indicate that scavenging processes which involve such surfaces should exhibit a strong dependence on pH and carbonate complexation. Scavenging models involving carboxylate surfaces produce relative REE abundance patterns in good general agreement with observed shale-normalized REE abundances in seawater. Scavenging by carboxylate-rich surfaces should produce HREE enrichments in seawater relative to the LREEs and may produce enrichments of lanthanum relative to its immediate trivalent neighbors. Due to the origin of distribution coefficients as a difference between REE solution complexation (which increases strongly with atomic number) and surface complexation (which apparently also increases with atomic number) the relative solution abundance patterns of the REEs produced by scavenging reactions can be quite complex.
Understanding Activist Leadership Effort in the Movement Opposing Drinking and Driving
Dorius, Cassandra R.; McCarthy, John D.
2012-01-01
Why do some social movement leaders work harder than others? And, how does gender affect the patterns we uncover? Utilizing historical case study evidence of local chapters in the emerging movement opposing drinking and driving we are able to develop and test theoretical expectations about predictors of weekly effort among MADD and RID leaders. Taken together, our model explains 45 percent of the variation in leadership effort. We find bureaucratic complexity and victim support activities are more powerful predictors of effort than are individual leader characteristics, although all are important. Further analysis reveals that gender almost wholly conditions the strong effect of bureaucratic complexity on leadership effort so that increasingly complex chapter structures are associated with substantial increases in work hours for women but not men. PMID:22993454
Wetzel, Hanna N; Zhang, Tongli; Norman, Andrew B
2017-09-01
A recombinant humanized anti-cocaine monoclonal antibody (mAb), h2E2, is at an advanced stage of pre-clinical development as an immunotherapy for cocaine abuse. It is hypothesized that h2E2 binds to and sequesters cocaine in the blood. A three-compartment model of the effects of h2E2 on cocaine's distribution was constructed. The model assumes that h2E2 binds to cocaine and that the h2E2-cocaine complex does not enter the brain but distributes between the central and peripheral compartments. Free cocaine is eliminated from both the central and peripheral compartments, and h2E2 and the h2E2-cocaine complex are eliminated from the central compartment only. This model was tested against a new dataset measuring cocaine concentrations in the brain and plasma over 1h in the presence and absence of h2E2. The mAb significantly increased plasma cocaine concentrations with a concomitant significant decrease in brain concentration. Plasma concentrations declined over the 1-hour sampling period in both groups. With a set of parameters within reasonable physiological ranges, the three-compartment model was able to qualitatively and quantitatively simulate the increased plasma concentration in the presence of the antibody and the decreased peak brain concentration in the presence of antibody. Importantly, the model explained the decline in plasma concentrations over time as distribution of the cocaine-h2E2 complex into a peripheral compartment. This model will facilitate the targeting of ideal mAb PK/PD properties thus accelerating the identification of lead candidate anti-drug mAbs. Copyright © 2017 Elsevier Inc. All rights reserved.
Stoichiometric vs hydroclimatic controls on soil biogeochemical processes
NASA Astrophysics Data System (ADS)
Manzoni, Stefano; Porporato, Amilcare
2010-05-01
Soil nutrient cycles are controlled by both stoichiometric constraints (e.g., carbon to nutrient ratios) and hydroclimatic conditions (e.g., soil moisture and temperature). Both controls tend to act in a nonlinear manner and give rise to complex dynamics in soil biogeochemistry at different space-time scales. We first review the theoretical basis of soil biogeochemical models, looking for the general principles underlying these models across space-time scales and scientific disciplines. By comparing more than 250 models, we show that similar kinetic and stoichiometric laws, formulated to mechanistically represent the complex biochemical constraints to decomposition, are common to most models, providing a basis for their classification. Moreover, a historic analysis reveals that the complexity (e.g., phase space dimension, model architecture) and degree and number of nonlinearities generally increased with date, while they decreased with increasing spatial and temporal scale of interest. Soil biogeochmical dynamics may be suitable conceptualized using a number of compartments (e.g., decomposers, organic substrates, inorganic ions) interacting among each other at rates that depend (nonlinearly) on climatic drivers. As a consequence, hydroclimatic-induced fluctuations at the daily scale propagate through the various soil compartments leading to cascading effects ranging from short-term fluctuations in the smaller pools to long-lasting changes in the larger ones. Such cascading effects are known to occur in dryland ecosystems, and are increasingly being recongnized to control the long-term carbon and nutrient balances in more mesic ecosystems. We also show that separating biochemical from climatic impacts on organic matter decomposition results in universal curves describing data of plant residue decomposition and nutrient mineralization across the globe. Future extensions to larger spatial scales and managed ecosystems are also briefly outlined. It is critical that future modeling efforts carefully account for the scale-dependence of their mathematical formulations, especially when applied to a wide range of scales.
NASA Astrophysics Data System (ADS)
Freytag, B.; Liljegren, S.; Höfner, S.
2017-04-01
Context. Observations of asymptotic giant branch (AGB) stars with increasing spatial resolution reveal new layers of complexity of atmospheric processes on a variety of scales. Aims: To analyze the physical mechanisms that cause asymmetries and surface structures in observed images, we use detailed 3D dynamical simulations of AGB stars; these simulations self-consistently describe convection and pulsations. Methods: We used the CO5BOLD radiation-hydrodynamics code to produce an exploratory grid of global "star-in-a-box" models of the outer convective envelope and the inner atmosphere of AGB stars to study convection, pulsations, and shock waves and their dependence on stellar and numerical parameters. Results: The model dynamics are governed by the interaction of long-lasting giant convection cells, short-lived surface granules, and strong, radial, fundamental-mode pulsations. Radial pulsations and shorter wavelength, traveling, acoustic waves induce shocks on various scales in the atmosphere. Convection, waves, and shocks all contribute to the dynamical pressure and, thus, to an increase of the stellar radius and to a levitation of material into layers where dust can form. Consequently, the resulting relation of pulsation period and stellar radius is shifted toward larger radii compared to that of non-linear 1D models. The dependence of pulsation period on luminosity agrees well with observed relations. The interaction of the pulsation mode with the non-stationary convective flow causes occasional amplitude changes and phase shifts. The regularity of the pulsations decreases with decreasing gravity as the relative size of convection cells increases. The model stars do not have a well-defined surface. Instead, the light is emitted from a very extended inhomogeneous atmosphere with a complex dynamic pattern of high-contrast features. Conclusions: Our models self-consistently describe convection, convectively generated acoustic noise, fundamental-mode radial pulsations, and atmospheric shocks of various scales, which give rise to complex changing structures in the atmospheres of AGB stars.
Zhang, Ziyang; Li, Haiyan; Liu, Huijuan
2018-03-01
In order to study the influences of functionalized groups onto the adsorption of tetracycline, we prepared a series of amino and amino-Fe 3+ complex mesoporous silica adsorbents with diverse content of amino and Fe 3+ groups (named N,N-SBA15 and Fe-N,N-SBA15). The resulting mesoporous silica adsorbents were fully characterized by X-ray powder diffraction (XRD), Fourier transform infrared spectrometer (FTIR) and N 2 adsorption/desorption isotherms. Furthermore, the effects of functionalized groups on the removal of TC were investigated. The results showed that the periodic ordered structure of SBA-15 was maintained after modification of amino/Fe 3+ groups. The functionalized amino groups decreased the adsorption capacity while the coordinated Fe 3+ increased the adsorption capacity. The adsorption kinetics of TC fitted pseudo-second-order model well and the equilibrium was achieved quickly. The adsorption isotherms fitted the Langmuir model well and with the Fe 3+ content increased from 3.93% to 8.26%, the Q max of the adsorbents increased from 102 to 188mmol/kg. The solution pH affected the adsorption of TC onto amino complex adsorbents slightly while influenced the adsorption onto Fe-amine complex adsorbents greatly. The adsorption of TC on SBA15 and N,N-SBA15 may be related to the formation of outer-sphere surface complexes, while the adsorption of TC onto Fe-N,N-SBA15 was mainly attributed to the inner-sphere surface complexes. This study could offer potential materials that have excellent adsorption behavior for environmental remediation and suggested useful information for the preparing other adsorbents in environmental applications. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Schmidt, M.; Martinez, C. E.
2017-12-01
Adsorption of biomolecule rich supramolecular complexes onto mineral surfaces plays an important role in the development of organo-mineral associations in soils. In this study, a series of supramolecular complexes of a model nucleic acid (deoxyribonucleic acid (DNA)) and protein (bovine serum albumin (BSA)) are synthesized, characterized and exposed to goethite to probe their adsorption behavior. To synthesize DNA/BSA complexes, a fixed DNA concentration (0.1 mg/mL) was mixed with a range of BSA concentrations (0.025-0.5 mg/mL) in 5 mM KCl at pH=5.0. Circular dichroism spectroscopy demonstrates strong, cooperative, Hill-type binding between DNA and BSA (Ka= 4.74 x 105 M-1) with DNA saturation achieved when BSA concentration reaches 0.4 mg/mL. Dynamic light scattering measurements of DNA/BSA complexes suggest binding accompanies disruption of DNA-DNA intermolecular electrostatic repulsion, resulting in a decrease of the DNA slow relaxation mode with increasing amount of BSA. Zeta potential measurements show increasing amounts of BSA lead to a reduction of negative charge on DNA/BSA complexes, in line with light scattering results. In situ attenuated total reflectance Fourier transform infrared spectroscopic studies of adsorption of DNA/BSA complexes onto goethite show that complexation of BSA with DNA appears to hinder direct coordination of DNA backbone phosphodiester groups with goethite, relative to DNA by itself. Furthermore, increasing amount of BSA (up to 0.4 mg/mL) in DNA/BSA complexes enhances DNA adsorption, possibly as a result of reduced repulsion between adsorbed DNA helices. When BSA concentration exceeds 0.4 mg/mL, a decrease in adsorbed DNA is observed. We hypothesize that this discrepancy in behavior between systems with BSA concentrations below and above saturation of DNA is caused by initial fast adsorption of loosely associated BSA on goethite, restricting access to goethite surface sites. Overall, these results highlight the impact of solution interaction between biomolecules on subsequent behavior at mineral surfaces. This work represents a bridge between model experiments with individual biomolecules and more complex natural systems, yielding a fundamental viewpoint of the formation of organo-mineral associations in soils.
Atmospheric Carbon Dioxide and the Global Carbon Cycle: The Key Uncertainties
DOE R&D Accomplishments Database
Peng, T. H.; Post, W. M.; DeAngelis, D. L.; Dale, V. H.; Farrell, M. P.
1987-12-01
The biogeochemical cycling of carbon between its sources and sinks determines the rate of increase in atmospheric CO{sub 2} concentrations. The observed increase in atmospheric CO{sub 2} content is less than the estimated release from fossil fuel consumption and deforestation. This discrepancy can be explained by interactions between the atmosphere and other global carbon reservoirs such as the oceans, and the terrestrial biosphere including soils. Undoubtedly, the oceans have been the most important sinks for CO{sub 2} produced by man. But, the physical, chemical, and biological processes of oceans are complex and, therefore, credible estimates of CO{sub 2} uptake can probably only come from mathematical models. Unfortunately, one- and two-dimensional ocean models do not allow for enough CO{sub 2} uptake to accurately account for known releases. Thus, they produce higher concentrations of atmospheric CO{sub 2} than was historically the case. More complex three-dimensional models, while currently being developed, may make better use of existing tracer data than do one- and two-dimensional models and will also incorporate climate feedback effects to provide a more realistic view of ocean dynamics and CO{sub 2} fluxes. The instability of current models to estimate accurately oceanic uptake of CO{sub 2} creates one of the key uncertainties in predictions of atmospheric CO{sub 2} increases and climate responses over the next 100 to 200 years.
Kroll, Thomas; Hadt, Ryan G.; Wilson, Samuel A.; ...
2014-12-04
Axial Cu–S(Met) bonds in electron transfer (ET) active sites are generally found to lower their reduction potentials. An axial S(Met) bond is also present in cytochrome c (cyt c) and is generally thought to increase the reduction potential. The highly covalent nature of the porphyrin environment in heme proteins precludes using many spectroscopic approaches to directly study the Fe site to experimentally quantify this bond. Alternatively, L-edge X-ray absorption spectroscopy (XAS) enables one to directly focus on the 3d-orbitals in a highly covalent environment and has previously been successfully applied to porphyrin model complexes. However, this technique cannot be extendedmore » to metalloproteins in solution. Here, we use metal K-edge XAS to obtain L-edge like data through 1s2p resonance inelastic X-ray scattering (RIXS). It has been applied here to a bis-imidazole porphyrin model complex and cyt c. The RIXS data on the model complex are directly correlated to L-edge XAS data to develop the complementary nature of these two spectroscopic methods. Comparison between the bis-imidazole model complex and cyt c in ferrous and ferric oxidation states show quantitative differences that reflect differences in axial ligand covalency. The data reveal an increased covalency for the S(Met) relative to N(His) axial ligand and a higher degree of covalency for the ferric states relative to the ferrous states. These results are reproduced by DFT calculations, which are used to evaluate the thermodynamics of the Fe–S(Met) bond and its dependence on redox state. Furthermore, these results provide insight into a number of previous chemical and physical results on cyt c.« less
Comparison of in situ uranium KD values with a laboratory determined surface complexation model
Curtis, G.P.; Fox, P.; Kohler, M.; Davis, J.A.
2004-01-01
Reactive solute transport simulations in groundwater require a large number of parameters to describe hydrologic and chemical reaction processes. Appropriate methods for determining chemical reaction parameters required for reactive solute transport simulations are still under investigation. This work compares U(VI) distribution coefficients (i.e. KD values) measured under field conditions with KD values calculated from a surface complexation model developed in the laboratory. Field studies were conducted in an alluvial aquifer at a former U mill tailings site near the town of Naturita, CO, USA, by suspending approximately 10 g samples of Naturita aquifer background sediments (NABS) in 17-5.1-cm diameter wells for periods of 3 to 15 months. Adsorbed U(VI) on these samples was determined by extraction with a pH 9.45 NaHCO3/Na2CO3 solution. In wells where the chemical conditions in groundwater were nearly constant, adsorbed U concentrations for samples taken after 3 months of exposure to groundwater were indistinguishable from samples taken after 15 months. Measured in situ K D values calculated from the measurements of adsorbed and dissolved U(VI) ranged from 0.50 to 10.6 mL/g and the KD values decreased with increasing groundwater alkalinity, consistent with increased formation of soluble U(VI)-carbonate complexes at higher alkalinities. The in situ K D values were compared with KD values predicted from a surface complexation model (SCM) developed under laboratory conditions in a separate study. A good agreement between the predicted and measured in situ KD values was observed. The demonstration that the laboratory derived SCM can predict U(VI) adsorption in the field provides a critical independent test of a submodel used in a reactive transport model. ?? 2004 Elsevier Ltd. All rights reserved.
Kim, Soo-Jin; Toshimoto, Kota; Yao, Yoshiaki; Yoshikado, Takashi; Sugiyama, Yuichi
2017-09-01
Quantitative analysis of transporter- and enzyme-mediated complex drug-drug interactions (DDIs) is challenging. Repaglinide (RPG) is transported into the liver by OATP1B1 and then is metabolized by CYP2C8 and CYP3A4. The purpose of this study was to describe the complex DDIs of RPG quantitatively based on unified physiologically based pharmacokinetic (PBPK) models using in vitro K i values for OATP1B1, CYP3A4, and CYP2C8. Cyclosporin A (CsA) or gemfibrozil (GEM) increased the blood concentrations of RPG. The time profiles of RPG and the inhibitors were analyzed by PBPK models, considering the inhibition of OATP1B1 and CYP3A4 by CsA or OATP1B1 inhibition by GEM and its glucuronide and the mechanism-based inhibition of CYP2C8 by GEM glucuronide. RPG-CsA interaction was closely predicted using a reported in vitro K i,OATP1B1 value in the presence of CsA preincubation. RPG-GEM interaction was underestimated compared with observed data, but the simulation was improved with the increase of f m,CYP2C8 . These results based on in vitro K i values for transport and metabolism suggest the possibility of a bottom-up approach with in vitro inhibition data for the prediction of complex DDIs using unified PBPK models and in vitro f m value of a substrate for multiple enzymes should be considered carefully for the prediction. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Heerdt, G N J Ter; Schep, S A; Janse, J H; Ouboter, M
2007-01-01
In order to set ecological goals and determine measures for the European Water Framework Directive, the effects of climate change on lake ecosystems should be estimated. It is thought that the complexity of lake ecosystems makes this effect inherently unpredictable. However, models that deal with this complexity are available and well calibrated and tested. In this study we use the ecosystem model PCLake to demonstrate how climate change might affect the ecological status of a shallow peaty lake in 2050. With the model PCLake, combined with a long-term water and nutrient balance, it is possible to describe adequately the present status of the lake. Simulations of future scenarios with increasing precipitation, evaporation and temperature, showed that climate change will lead to higher nutrient loadings. At the same time, it will lead to lower critical loadings. Together this might cause the lake to shift easier from a clear water to a turbid state. The amount of algae, expressed as the concentration Chl-a, will increase, as a consequence turbidity will increase. The outcome of this study; increasing stability of the turbid state of the lake, and thus the need for more drastic measures, is consistent with some earlier studies.
The influence of weather on Golden Eagle migration in northwestern Montana
Yates, R.E.; McClelland, B.R.; Mcclelland, P.T.; Key, C.H.; Bennetts, R.E.
2001-01-01
We analyzed the influence of 17 weather factors on migrating Golden Eagles (Aquila chrysaetos) near the Continental Divide in Glacier National Park, Montana, U.S.A. Local weather measurements were recorded at automated stations on the flanks of two peaks within the migration path. During a total of 506 hr of observation, the yearly number of Golden Eagles in autumn counts (1994-96) averaged 1973; spring counts (1995 and 1996) averaged 605 eagles. Mean passage rates (eagles/hr) were 16.5 in autumn and 8.2 in spring. Maximum rates were 137 in autumn and 67 in spring. Using generalized linear modeling, we tested for the effects of weather factors on the number of eagles counted. In the autumn model, the number of eagles increased with increasing air temperature, rising barometric pressure, decreasing relative humidity, and interactions among those factors. In the spring model, the number of eagles increased with increasing wind speed, barometric pressure, and the interaction between these factors. Our data suggest that a complex interaction among weather factors influenced the number of eagles passing on a given day. We hypothesize that in complex landscapes with high topographic relief, such as Glacier National Park, numerous weather factors produce different daily combinations to which migrating eagles respond opportunistically. ?? 2001 The Raptor Research Foundation, Inc.
Essential role for Abi1 in embryonic survival and WAVE2 complex integrity.
Dubielecka, Patrycja M; Ladwein, Kathrin I; Xiong, Xiaoling; Migeotte, Isabelle; Chorzalska, Anna; Anderson, Kathryn V; Sawicki, Janet A; Rottner, Klemens; Stradal, Theresia E; Kotula, Leszek
2011-04-26
Abl interactor 1 (Abi1) plays a critical function in actin cytoskeleton dynamics through participation in the WAVE2 complex. To gain a better understanding of the specific role of Abi1, we generated a conditional Abi1-KO mouse model and MEFs lacking Abi1 expression. Abi1-KO cells displayed defective regulation of the actin cytoskeleton, and this dysregulation was ascribed to altered activity of the WAVE2 complex. Changes in motility of Abi1-KO cells were manifested by a decreased migration rate and distance but increased directional persistence. Although these phenotypes did not correlate with peripheral ruffling, which was unaffected, Abi1-KO cells exhibited decreased dorsal ruffling. Western blotting analysis of Abi1-KO cell lysates indicated reduced levels of the WAVE complex components WAVE1 and WAVE2, Nap1, and Sra-1/PIR121. Although relative Abi2 levels were more than doubled in Abi1-KO cells, the absolute Abi2 expression in these cells amounted only to a fifth of Abi1 levels in the control cell line. This finding suggests that the presence of Abi1 is critical for the integrity and stability of WAVE complex and that Abi2 levels are not sufficiently increased to compensate fully for the loss of Abi1 in KO cells and to restore the integrity and function of the WAVE complex. The essential function of Abi1 in WAVE complexes and their regulation might explain the observed embryonic lethality of Abi1-deficient embryos, which survived until approximately embryonic day 11.5 and displayed malformations in the developing heart and brain. Cells lacking Abi1 and the conditional Abi1-KO mouse will serve as critical models for defining Abi1 function.
Essential role for Abi1 in embryonic survival and WAVE2 complex integrity
Dubielecka, Patrycja M.; Ladwein, Kathrin I.; Xiong, Xiaoling; Migeotte, Isabelle; Chorzalska, Anna; Anderson, Kathryn V.; Sawicki, Janet A.; Rottner, Klemens; Stradal, Theresia E.; Kotula, Leszek
2011-01-01
Abl interactor 1 (Abi1) plays a critical function in actin cytoskeleton dynamics through participation in the WAVE2 complex. To gain a better understanding of the specific role of Abi1, we generated a conditional Abi1-KO mouse model and MEFs lacking Abi1 expression. Abi1-KO cells displayed defective regulation of the actin cytoskeleton, and this dysregulation was ascribed to altered activity of the WAVE2 complex. Changes in motility of Abi1-KO cells were manifested by a decreased migration rate and distance but increased directional persistence. Although these phenotypes did not correlate with peripheral ruffling, which was unaffected, Abi1-KO cells exhibited decreased dorsal ruffling. Western blotting analysis of Abi1-KO cell lysates indicated reduced levels of the WAVE complex components WAVE1 and WAVE2, Nap1, and Sra-1/PIR121. Although relative Abi2 levels were more than doubled in Abi1-KO cells, the absolute Abi2 expression in these cells amounted only to a fifth of Abi1 levels in the control cell line. This finding suggests that the presence of Abi1 is critical for the integrity and stability of WAVE complex and that Abi2 levels are not sufficiently increased to compensate fully for the loss of Abi1 in KO cells and to restore the integrity and function of the WAVE complex. The essential function of Abi1 in WAVE complexes and their regulation might explain the observed embryonic lethality of Abi1-deficient embryos, which survived until approximately embryonic day 11.5 and displayed malformations in the developing heart and brain. Cells lacking Abi1 and the conditional Abi1-KO mouse will serve as critical models for defining Abi1 function. PMID:21482783
Interactive Visualizations of Complex Seismic Data and Models
NASA Astrophysics Data System (ADS)
Chai, C.; Ammon, C. J.; Maceira, M.; Herrmann, R. B.
2016-12-01
The volume and complexity of seismic data and models have increased dramatically thanks to dense seismic station deployments and advances in data modeling and processing. Seismic observations such as receiver functions and surface-wave dispersion are multidimensional: latitude, longitude, time, amplitude and latitude, longitude, period, and velocity. Three-dimensional seismic velocity models are characterized with three spatial dimensions and one additional dimension for the speed. In these circumstances, exploring the data and models and assessing the data fits is a challenge. A few professional packages are available to visualize these complex data and models. However, most of these packages rely on expensive commercial software or require a substantial time investment to master, and even when that effort is complete, communicating the results to others remains a problem. A traditional approach during the model interpretation stage is to examine data fits and model features using a large number of static displays. Publications include a few key slices or cross-sections of these high-dimensional data, but this prevents others from directly exploring the model and corresponding data fits. In this presentation, we share interactive visualization examples of complex seismic data and models that are based on open-source tools and are easy to implement. Model and data are linked in an intuitive and informative web-browser based display that can be used to explore the model and the features in the data that influence various aspects of the model. We encode the model and data into HTML files and present high-dimensional information using two approaches. The first uses a Python package to pack both data and interactive plots in a single file. The second approach uses JavaScript, CSS, and HTML to build a dynamic webpage for seismic data visualization. The tools have proven useful and led to deeper insight into 3D seismic models and the data that were used to construct them. Such easy-to-use interactive displays are essential in teaching environments - user-friendly interactivity allows students to explore large, complex data sets and models at their own pace, enabling a more accessible learning experience.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.
Solid-phase iron concentrations and generalized composite surface complexation models were used to evaluate procedures in determining uranium sorption on oxidized aquifer material at a proposed U in situ recovery (ISR) site. At the proposed Dewey Burdock ISR site in South Dakota, USA, oxidized aquifer material occurs downgradient of the U ore zones. Solid-phase Fe concentrations did not explain our batch sorption test results,though total extracted Fe appeared to be positively correlated with overall measured U sorption. Batch sorption test results were used to develop generalized composite surface complexation models that incorporated the full genericsorption potential of each sample, without detailedmore » mineralogiccharacterization. The resultant models provide U sorption parameters (site densities and equilibrium constants) for reactive transport modeling. The generalized composite surface complexation sorption models were calibrated to batch sorption data from three oxidized core samples using inverse modeling, and gave larger sorption parameters than just U sorption on the measured solidphase Fe. These larger sorption parameters can significantly influence reactive transport modeling, potentially increasing U attenuation. Because of the limited number of calibration points, inverse modeling required the reduction of estimated parameters by fixing two parameters. The best-fit models used fixed values for equilibrium constants, with the sorption site densities being estimated by the inversion process. While these inverse routines did provide best-fit sorption parameters, local minima and correlated parameters might require further evaluation. Despite our limited number of proxy samples, the procedures presented provide a valuable methodology to consider for sites where metal sorption parameters are required. Furthermore, these sorption parameters can be used in reactive transport modeling to assess downgradient metal attenuation, especially when no other calibration data are available, such as at proposed U ISR sites.« less
Johnson, Raymond H.; Truax, Ryan A.; Lankford, David A.; ...
2016-02-03
Solid-phase iron concentrations and generalized composite surface complexation models were used to evaluate procedures in determining uranium sorption on oxidized aquifer material at a proposed U in situ recovery (ISR) site. At the proposed Dewey Burdock ISR site in South Dakota, USA, oxidized aquifer material occurs downgradient of the U ore zones. Solid-phase Fe concentrations did not explain our batch sorption test results,though total extracted Fe appeared to be positively correlated with overall measured U sorption. Batch sorption test results were used to develop generalized composite surface complexation models that incorporated the full genericsorption potential of each sample, without detailedmore » mineralogiccharacterization. The resultant models provide U sorption parameters (site densities and equilibrium constants) for reactive transport modeling. The generalized composite surface complexation sorption models were calibrated to batch sorption data from three oxidized core samples using inverse modeling, and gave larger sorption parameters than just U sorption on the measured solidphase Fe. These larger sorption parameters can significantly influence reactive transport modeling, potentially increasing U attenuation. Because of the limited number of calibration points, inverse modeling required the reduction of estimated parameters by fixing two parameters. The best-fit models used fixed values for equilibrium constants, with the sorption site densities being estimated by the inversion process. While these inverse routines did provide best-fit sorption parameters, local minima and correlated parameters might require further evaluation. Despite our limited number of proxy samples, the procedures presented provide a valuable methodology to consider for sites where metal sorption parameters are required. Furthermore, these sorption parameters can be used in reactive transport modeling to assess downgradient metal attenuation, especially when no other calibration data are available, such as at proposed U ISR sites.« less
Kim, Yong Sun; Choi, Hyeong Ho; Cho, Young Nam; Park, Yong Jae; Lee, Jong B; Yang, King H; King, Albert I
2005-11-01
Although biomechanical studies on the knee-thigh-hip (KTH) complex have been extensive, interactions between the KTH and various vehicular interior design parameters in frontal automotive crashes for newer models have not been reported in the open literature to the best of our knowledge. A 3D finite element (FE) model of a 50(th) percentile male KTH complex, which includes explicit representations of the iliac wing, acetabulum, pubic rami, sacrum, articular cartilage, femoral head, femoral neck, femoral condyles, patella, and patella tendon, has been developed to simulate injuries such as fracture of the patella, femoral neck, acetabulum, and pubic rami of the KTH complex. Model results compared favorably against regional component test data including a three-point bending test of the femur, axial loading of the isolated knee-patella, axial loading of the KTH complex, axial loading of the femoral head, and lateral loading of the isolated pelvis. The model was further integrated into a Wayne State University upper torso model and validated against data obtained from whole body sled tests. The model was validated against these experimental data over a range of impact speeds, impactor masses and boundary conditions. Using Design Of Experiment (DOE) methods based on Taguchi's approach and the developed FE model of the whole body, including the KTH complex, eight vehicular interior design parameters, namely the load limiter force, seat belt elongation, pretensioner inlet amount, knee-knee bolster distance, knee bolster angle, knee bolster stiffness, toe board angle and impact speed, each with either two or three design levels, were simulated to predict their respective effects on the potential of KTH injury in frontal impacts. Simulation results proposed best design levels for vehicular interior design parameters to reduce the injury potential of the KTH complex due to frontal automotive crashes. This study is limited by the fact that prediction of bony fracture was based on an element elimination method available in the LS-DYNA code. No validation study was conducted to determine if this method is suitable when simulating fractures of biological tissues. More work is still needed to further validate the FE model of the KTH complex to increase its reliability in the assessment of various impact loading conditions associated with vehicular crash scenarios.
Numerical Modeling of Geomorphic Change on Sandy Coasts as a Function of Changing Wave Climate
NASA Astrophysics Data System (ADS)
Adams, P. N.; McNamara, D.; Murray, A. B.; Lovering, J.
2009-12-01
Climate change is expected to affect sandy coast geomorphology through two principal mechanisms: (1) sea level rise, which affects cross-shore sediment transport tending to drive shoreline retreat, and (2) alteration of statistical distributions in ocean storm wave climate (deep water wave height, period, and direction), which affects longshore sediment transport gradients that result in shoreline erosion and accretion. To address potential climate change-driven effects on longshore sediment transport gradients, we are developing techniques to link various numerical models of wave transformation with several different longshore sediment transport formulae in accordance with the Community Surface Dynamics Modeling System (CSDMS) project. Results of the various wave transformation models are compared to field observations of cross-shelf wave transformation along the North Florida Atlantic coast for purposes of model verification and calibration. Initial comparisons between wave-transformation methods (assumption of shore-parallel contours, simple wave ray tracing, and the SWAN spectral wave model) on artificially constructed continental shelves reveal an increasing discrepancy of results for increasing complexity of shelf bathymetry. When the more advanced SWAN spectral wave model is coupled with a simple CERC-type formulation of longshore sediment transport and applied to a real coast with complex offshore shoals (Cape Canaveral region of the North Florida Atlantic Coast), the patterns of erosion and accretion agree with results of the simplest wave-propagation models for some wave conditions, but disagree in others. Model simulations in which wave height and period are held constant show that locations of divergence and convergence of sediment flux shift with deep water wave-approach angle in ways that would not always be predicted using less sophisticated wave propagation models. Thus, predicting long-term local shoreline change on actual coastlines featuring complex bathymetry requires the extra computational effort to run the more advanced model over a wide range of wave conditions.
NASA Astrophysics Data System (ADS)
French, Jon; Payo, Andres; Murray, Brad; Orford, Julian; Eliot, Matt; Cowell, Peter
2016-03-01
Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called 'reduced complexity' models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. However, there is little consensus on exactly what constitutes a reduced complexity model and the term itself is both misleading and, arguably, unhelpful. Accordingly, we synthesise a set of requirements for what might be termed 'appropriate complexity modelling' of quantitative coastal morphological change at scales commensurate with contemporary management and policy-making requirements: 1) The system being studied must be bounded with reference to the time and space scales at which behaviours of interest emerge and/or scientific or management problems arise; 2) model complexity and comprehensiveness must be appropriate to the problem at hand; 3) modellers should seek a priori insights into what kind of behaviours are likely to be evident at the scale of interest and the extent to which the behavioural validity of a model may be constrained by its underlying assumptions and its comprehensiveness; 4) informed by qualitative insights into likely dynamic behaviour, models should then be formulated with a view to resolving critical state changes; and 5) meso-scale modelling of coastal morphological change should reflect critically on the role of modelling and its relation to the observable world.
Role of humic acid on oral drug delivery of an antiepileptic drug.
Mirza, Mohd Aamir; Agarwal, Suraj Prakash; Rahman, Md Akhlaquer; Rauf, Abdur; Ahmad, Niyaz; Alam, Aftab; Iqbal, Zeenat
2011-03-01
Humic acid (HA) is omnipresent in natural organic matter that is a macromolecular, negatively charged polyelectrolyte that contains a hydrophobic core. It is also present in a significant amount in Shilajit (used frequently in traditional medicines), which is used in this study as a source of extraction. HA is evaluated for the oral drug delivery of carbamazepine (CBZ). HA is used in this study to increase the dissolution, intestinal permeation, and pharmacodynamic response of CBZ (bio pharmaceutics classification system (BCS) II) by the technique of complexation and other related mechanism reported with humic substances. Different complexation techniques were explored in this study for the entrapment of CBZ, which was authenticated by molecular modeling and conformational analysis. These were further characterized using differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FT-IR), and X-ray diffraction (XRD). Solubility analysis and dissolution release profile were carried out to access the in vitro parameters. For ex vivo studies, rat gut intestinal permeability was done. And finally pharmacodynamic evaluation (maximal electroshock method) was carried out for optimized complexes. Molecular modeling approach and instrumental analysis (DSC, XRD, and FT-IR) confirmed the entrapment of CBZ inside the complexing agent. Increased solubility (∼1742%), sustained release (∼78%), better permeability (∼3.5 times), and enhanced pharmacodynamic responses conferred the best to 1:2 freeze dried (FD) and then 1:2 kneading (KD) complexes compared with pure CBZ. Now it could be concluded that HA may be tried as a complexing agent for antiepileptic drug and other classes of low water-soluble drug.
O’Hagan, Rónán C.; Heyer, Joerg
2011-01-01
KRAS is a potent oncogene and is mutated in about 30% of all human cancers. However, the biological context of KRAS-dependent oncogenesis is poorly understood. Genetically engineered mouse models of cancer provide invaluable tools to study the oncogenic process, and insights from KRAS-driven models have significantly increased our understanding of the genetic, cellular, and tissue contexts in which KRAS is competent for oncogenesis. Moreover, variation among tumors arising in mouse models can provide insight into the mechanisms underlying response or resistance to therapy in KRAS-dependent cancers. Hence, it is essential that models of KRAS-driven cancers accurately reflect the genetics of human tumors and recapitulate the complex tumor-stromal intercommunication that is manifest in human cancers. Here, we highlight the progress made in modeling KRAS-dependent cancers and the impact that these models have had on our understanding of cancer biology. In particular, the development of models that recapitulate the complex biology of human cancers enables translational insights into mechanisms of therapeutic intervention in KRAS-dependent cancers. PMID:21779503
Molecular modeling of the neurophysin I/oxytocin complex
NASA Astrophysics Data System (ADS)
Kazmierkiewicz, R.; Czaplewski, C.; Lammek, B.; Ciarkowski, J.
1997-01-01
Neurophysins I and II (NPI and NPII) act in the neurosecretory granules as carrier proteinsfor the neurophyseal hormones oxytocin (OT) and vasopressin (VP), respectively. The NPI/OTfunctional unit, believed to be an (NPI/OT)2 heterotetramer, was modeled using low-resolution structure information, viz. the Cα carbon atom coordinates of the homologousNPII/dipeptide complex (file 1BN2 in the Brookhaven Protein Databank) as a template. Itsall-atom representation was obtained using standard modeling tools available within theINSIGHT/Biopolymer modules supplied by Biosym Technologies Inc. A conformation of theNPI-bound OT, similar to that recently proposed in a transfer NOE experiment, was dockedinto the ligand-binding site by a superposition of its Cys1-Tyr2 fragment onto the equivalentportion of the dipeptide in the template. The starting complex for the initial refinements wasprepared by two alternative strategies, termed Model I and Model II, each ending with a˜100 ps molecular dynamics (MD) simulation in water using the AMBER 4.1 force field. The freehomodimer NPI2 was obtained by removal of the two OT subunits from their sites, followedby a similar structure refinement. The use of Model I, consisting of a constrained simulatedannealing, resulted in a structure remarkably similar to both the NPII/dipeptide complex anda recently published solid-state structure of the NPII/OT complex. Thus, Model I isrecommended as the method of choice for the preparation of the starting all-atom data forMD. The MD simulations indicate that, both in the homodimer and in the heterotetramer, the310-helices demonstrate an increased mobility relative to the remaining body of the protein.Also, the C-terminal domains in the NPI2 homodimer are more mobile than the N-terminalones. Finally, a distinct intermonomer interaction is identified, concentrated around its mostprominent, although not unique, contribution provided by an H-bond from Ser25Oγ in one NPI unit to Glu81 Oɛ in the other unit. This interaction is present in the heterotetramer(NPI/OT)2 and absent or weak in the NPI2 homodimer. We speculate that this interaction,along with the increased mobility of the 310-helices and the carboxy domains, may contributeto the allosteric communication between ligand binding and NPI dimerization.
NASA Astrophysics Data System (ADS)
Allen, J. Icarus; Holt, Jason T.; Blackford, Jerry; Proctor, Roger
2007-12-01
Marine systems models are becoming increasingly complex and sophisticated, but far too little attention has been paid to model errors and the extent to which model outputs actually relate to ecosystem processes. Here we describe the application of summary error statistics to a complex 3D model (POLCOMS-ERSEM) run for the period 1988-1989 in the southern North Sea utilising information from the North Sea Project, which collected a wealth of observational data. We demonstrate that to understand model data misfit and the mechanisms creating errors, we need to use a hierarchy of techniques, including simple correlations, model bias, model efficiency, binary discriminator analysis and the distribution of model errors to assess model errors spatially and temporally. We also demonstrate that a linear cost function is an inappropriate measure of misfit. This analysis indicates that the model has some skill for all variables analysed. A summary plot of model performance indicates that model performance deteriorates as we move through the ecosystem from the physics, to the nutrients and plankton.
Rubinstein, Alexander I; Sabirianov, Renat F; Namavar, Fereydoon
2016-10-14
The rapid development of nanoscience and nanotechnology has raised many fundamental questions that significantly impede progress in these fields. In particular, understanding the physicochemical processes at the interface in aqueous solvents requires the development and application of efficient and accurate methods. In the present work we evaluate the electrostatic contribution to the energy of model protein-ceramic complex formation in an aqueous solvent. We apply a non-local (NL) electrostatic approach that accounts for the effects of the short-range structure of the solvent on the electrostatic interactions of the interfacial systems. In this approach the aqueous solvent is considered as a non-ionic liquid, with the rigid and strongly correlated dipoles of the water molecules. We have found that an ordered interfacial aqueous solvent layer at the protein- and ceramic-solvent interfaces reduces the charging energy of both the ceramic and the protein in the solvent, and significantly increases the electrostatic contribution to their association into a complex. This contribution in the presented NL approach was found to be significantly shifted with respect to the classical model at any dielectric constant value of the ceramics. This implies a significant increase of the adsorption energy in the protein-ceramic complex formation for any ceramic material. We show that for several biocompatible ceramics (for example HfO2, ZrO2, and Ta2O5) the above effect predicts electrostatically induced protein-ceramic complex formation. However, in the framework of the classical continuum electrostatic model (the aqueous solvent as a uniform dielectric medium with a high dielectric constant ∼80) the above ceramics cannot be considered as suitable for electrostatically induced complex formation. Our results also show that the protein-ceramic electrostatic interactions can be strong enough to compensate for the unfavorable desolvation effect in the process of protein-ceramic complex formation.
NASA Astrophysics Data System (ADS)
Rubinstein, Alexander I.; Sabirianov, Renat F.; Namavar, Fereydoon
2016-10-01
The rapid development of nanoscience and nanotechnology has raised many fundamental questions that significantly impede progress in these fields. In particular, understanding the physicochemical processes at the interface in aqueous solvents requires the development and application of efficient and accurate methods. In the present work we evaluate the electrostatic contribution to the energy of model protein-ceramic complex formation in an aqueous solvent. We apply a non-local (NL) electrostatic approach that accounts for the effects of the short-range structure of the solvent on the electrostatic interactions of the interfacial systems. In this approach the aqueous solvent is considered as a non-ionic liquid, with the rigid and strongly correlated dipoles of the water molecules. We have found that an ordered interfacial aqueous solvent layer at the protein- and ceramic-solvent interfaces reduces the charging energy of both the ceramic and the protein in the solvent, and significantly increases the electrostatic contribution to their association into a complex. This contribution in the presented NL approach was found to be significantly shifted with respect to the classical model at any dielectric constant value of the ceramics. This implies a significant increase of the adsorption energy in the protein-ceramic complex formation for any ceramic material. We show that for several biocompatible ceramics (for example HfO2, ZrO2, and Ta2O5) the above effect predicts electrostatically induced protein-ceramic complex formation. However, in the framework of the classical continuum electrostatic model (the aqueous solvent as a uniform dielectric medium with a high dielectric constant ˜80) the above ceramics cannot be considered as suitable for electrostatically induced complex formation. Our results also show that the protein-ceramic electrostatic interactions can be strong enough to compensate for the unfavorable desolvation effect in the process of protein-ceramic complex formation.
Qamhieh, Khawla; Nylander, Tommy; Black, Camilla F; Attard, George S; Dias, Rita S; Ainalem, Marie-Louise
2014-07-14
This study deals with the build-up of biomaterials consisting of biopolymers, namely DNA, and soft particles, poly(amido amine) (PAMAM) dendrimers, and how to model their interactions. We adopted and applied an analytical model to provide further insight into the complexation between DNA (4331 bp) and positively charged PAMAM dendrimers of generations 1, 2, 4, 6 and 8, previously studied experimentally. The theoretical models applied describe the DNA as a semiflexible polyelectrolyte that interacts with dendrimers considered as either hard (impenetrable) spheres or as penetrable and soft spheres. We found that the number of DNA turns around one dendrimer, thus forming a complex, increases with the dendrimer size or generation. The DNA penetration required for the complex to become charge neutral depends on dendrimer generation, where lower generation dendrimers require little penetration to give charge neutral complexes. High generation dendrimers display charge inversion for all considered dendrimer sizes and degrees of penetration. Consistent with the morphologies observed experimentally for dendrimer/DNA aggregates, where highly ordered rods and toroids are found for low generation dendrimers, the DNA wraps less than one turn around the dendrimer. Disordered globular structures appear for high generation dendrimers, where the DNA wraps several turns around the dendrimer. Particularly noteworthy is that the dendrimer generation 4 complexes, where the DNA wraps about one turn around the dendrimers, are borderline cases and can form all types of morphologies. The net-charges of the aggregate have been estimated using zeta potential measurements and are discussed within the theoretical framework.
Tinamit: Making coupled system dynamics models accessible to stakeholders
NASA Astrophysics Data System (ADS)
Malard, Julien; Inam Baig, Azhar; Rojas Díaz, Marcela; Hassanzadeh, Elmira; Adamowski, Jan; Tuy, Héctor; Melgar-Quiñonez, Hugo
2017-04-01
Model coupling is increasingly used as a method of combining the best of two models when representing socio-environmental systems, though barriers to successful model adoption by stakeholders are particularly present with the use of coupled models, due to their high complexity and typically low implementation flexibility. Coupled system dynamics - physically-based modelling is a promising method to improve stakeholder participation in environmental modelling while retaining a high level of complexity for physical process representation, as the system dynamics components are readily understandable and can be built by stakeholders themselves. However, this method is not without limitations in practice, including 1) inflexible and complicated coupling methods, 2) difficult model maintenance after the end of the project, and 3) a wide variety of end-user cultures and languages. We have developed the open-source Python-language software tool Tinamit to overcome some of these limitations to the adoption of stakeholder-based coupled system dynamics - physically-based modelling. The software is unique in 1) its inclusion of both a graphical user interface (GUI) and a library of available commands (API) that allow users with little or no coding abilities to rapidly, effectively, and flexibly couple models, 2) its multilingual support for the GUI, allowing users to couple models in their preferred language (and to add new languages as necessary for their community work), and 3) its modular structure allowing for very easy model coupling and modification without the direct use of code, and to which programming-savvy users can easily add support for new types of physically-based models. We discuss how the use of Tinamit for model coupling can greatly increase the accessibility of coupled models to stakeholders, using an example of a stakeholder-built system dynamics model of soil salinity issues in Pakistan coupled with the physically-based soil salinity and water flow model SAHYSMOD. Different socioeconomic and environmental policies for soil salinity remediation are tested within the coupled model, allowing for the identification of the most efficient actions from an environmental and a farmer economy standpoint while taking into account the complex feedbacks between socioeconomics and the physical environment.
Practical ethical theory for nurses responding to complexity in care.
Fairchild, Roseanne Moody
2010-05-01
In the context of health care system complexity, nurses need responsive leadership and organizational support to maintain intrinsic motivation, moral sensitivity and a caring stance in the delivery of patient care. The current complexity of nurses' work environment promotes decreases in work motivation and moral satisfaction, thus creating motivational and ethical dissonance in practice. These and other work-related factors increase emotional stress and burnout for nurses, prompting both new and seasoned nurse professionals to leave their current position, or even the profession. This article presents a theoretical conceptual model for professional nurses to review and make sense of the ethical reasoning skills needed to maintain a caring stance in relation to the competing values that must coexist among nurses, health care administrators, patients and families in the context of the complex health care work environments in which nurses are expected to practice. A model, Nurses' Ethical Reasoning Skills, is presented as a framework for nurses' thinking through and problem solving ethical issues in clinical practice in the context of complexity in health care.
An example of complex modelling in dentistry using Markov chain Monte Carlo (MCMC) simulation.
Helfenstein, Ulrich; Menghini, Giorgio; Steiner, Marcel; Murati, Francesca
2002-09-01
In the usual regression setting one regression line is computed for a whole data set. In a more complex situation, each person may be observed for example at several points in time and thus a regression line might be calculated for each person. Additional complexities, such as various forms of errors in covariables may make a straightforward statistical evaluation difficult or even impossible. During recent years methods have been developed allowing convenient analysis of problems where the data and the corresponding models show these and many other forms of complexity. The methodology makes use of a Bayesian approach and Markov chain Monte Carlo (MCMC) simulations. The methods allow the construction of increasingly elaborate models by building them up from local sub-models. The essential structure of the models can be represented visually by directed acyclic graphs (DAG). This attractive property allows communication and discussion of the essential structure and the substantial meaning of a complex model without needing algebra. After presentation of the statistical methods an example from dentistry is presented in order to demonstrate their application and use. The dataset of the example had a complex structure; each of a set of children was followed up over several years. The number of new fillings in permanent teeth had been recorded at several ages. The dependent variables were markedly different from the normal distribution and could not be transformed to normality. In addition, explanatory variables were assumed to be measured with different forms of error. Illustration of how the corresponding models can be estimated conveniently via MCMC simulation, in particular, 'Gibbs sampling', using the freely available software BUGS is presented. In addition, how the measurement error may influence the estimates of the corresponding coefficients is explored. It is demonstrated that the effect of the independent variable on the dependent variable may be markedly underestimated if the measurement error is not taken into account ('regression dilution bias'). Markov chain Monte Carlo methods may be of great value to dentists in allowing analysis of data sets which exhibit a wide range of different forms of complexity.
NASA Astrophysics Data System (ADS)
Fang, Jin-Qing; Li, Yong
2010-02-01
A large unified hybrid network model with a variable speed growth (LUHNM-VSG) is proposed as third model of the unified hybrid network theoretical framework (UHNTF). A hybrid growth ratio vg of deterministic linking number to random linking number and variable speed growth index α are introduced in it. The main effects of vg and α on topological transition features of the LUHNM-VSG are revealed. For comparison with the other models, we construct a type of the network complexity pyramid with seven levels, in which from the bottom level-1 to the top level-7 of the pyramid simplicity-universality is increasing but complexity-diversity is decreasing. The transition relations between them depend on matching of four hybrid ratios (dr, fd, gr, vg). Thus the most of network models can be investigated in the unification way via four hybrid ratios (dr, fd, gr, vg). The LUHNM-VSG as the level-1 of the pyramid is much better and closer to description of real-world networks as well as has potential application.
Supersonic projectile models for asynchronous shooter localization
NASA Astrophysics Data System (ADS)
Kozick, Richard J.; Whipps, Gene T.; Ash, Joshua N.
2011-06-01
In this work we consider the localization of a gunshot using a distributed sensor network measuring time differences of arrival between a firearm's muzzle blast and the shockwave induced by a supersonic bullet. This so-called MB-SW approach is desirable because time synchronization is not required between the sensors, however it suffers from increased computational complexity and requires knowledge of the bullet's velocity at all points along its trajectory. While the actual velocity profile of a particular gunshot is unknown, one may use a parameterized model for the velocity profile and simultaneously fit the model and localize the shooter. In this paper we study efficient solutions for the localization problem and identify deceleration models that trade off localization accuracy and computational complexity. We also develop a statistical analysis that includes bias due to mismatch between the true and actual deceleration models and covariance due to additive noise.
Rights and Intentions in Value Modeling
NASA Astrophysics Data System (ADS)
Johannesson, Paul; Bergholtz, Maria
In order to manage increasingly complex business and IT environments, organizations need effective instruments for representing and understanding this complexity. Essential among these instruments are enterprise models, i.e. computational representations of the structure, processes, information, resources, and intentions of organizations. One important class of enterprise models are value models, which focus on the business motivations and intentions behind business processes and describe them in terms of high level notions like actors, resources, and value exchanges. The essence of these value exchanges is often taken to be an ownership transfer. However, some value exchanges cannot be analyzed in this way, e.g. the use of a service does not influence ownership. The goal of this chapter is to offer an analysis of the notion of value exchanges, based on Hohfeld's classification of rights, and to propose notation and practical modeling guidelines that make use of this analysis.
Salústio, P J; Feio, G; Figueirinhas, J L; Pinto, J F; Cabral Marques, H M
2009-02-01
The work aims to prove the complexation of two model drugs (ibuprofen, IB and indomethacin, IN) by beta-cyclodextrin (betaCD), and the effect of water in such a process, and makes a comparison of their complexation yields. Two methods were considered: kneading of a binary mixture of the drug, betaCD, and inclusion of either IB or IN in aqueous solutions of betaCD. In the latter method water was removed by air stream, spray-drying and freeze-drying. To prove the formation of complexes in final products, optical microscopy, UV spectroscopy, IR spectroscopy, DSC, X-ray and NMR were considered. Each powder was added to an acidic solution (pH=2) to quantify the concentration of the drug inside betaCD cavity. Other media (pH=5 and 7) were used to prove the existence of drug not complexed in each powder, as the drugs solubility increases with the pH. It was observed that complexation occurred in all powders, and that the fraction of drug inside the betaCD did not depend neither on the method of complexation nor on the processes of drying considered.
Pereira, Vanessa Helena; Gama, Maria Carolina Traina; Sousa, Filipe Antônio Barros; Lewis, Theodore Gyle; Gobatto, Claudio Alexandre; Manchado - Gobatto, Fúlvia Barros
2015-01-01
The aims of the present study were analyze the fatigue process at distinct intensity efforts and to investigate its occurrence as interactions at distinct body changes during exercise, using complex network models. For this, participants were submitted to four different running intensities until exhaustion, accomplished in a non-motorized treadmill using a tethered system. The intensities were selected according to critical power model. Mechanical (force, peak power, mean power, velocity and work) and physiological related parameters (heart rate, blood lactate, time until peak blood lactate concentration (lactate time), lean mass, anaerobic and aerobic capacities) and IPAQ score were obtained during exercises and it was used to construction of four complex network models. Such models have both, theoretical and mathematical value, and enables us to perceive new insights that go beyond conventional analysis. From these, we ranked the influences of each node at the fatigue process. Our results shows that nodes, links and network metrics are sensibility according to increase of efforts intensities, been the velocity a key factor to exercise maintenance at models/intensities 1 and 2 (higher time efforts) and force and power at models 3 and 4, highlighting mechanical variables in the exhaustion occurrence and even training prescription applications. PMID:25994386
Drosophila as an In Vivo Model for Human Neurodegenerative Disease
McGurk, Leeanne; Berson, Amit; Bonini, Nancy M.
2015-01-01
With the increase in the ageing population, neurodegenerative disease is devastating to families and poses a huge burden on society. The brain and spinal cord are extraordinarily complex: they consist of a highly organized network of neuronal and support cells that communicate in a highly specialized manner. One approach to tackling problems of such complexity is to address the scientific questions in simpler, yet analogous, systems. The fruit fly, Drosophila melanogaster, has been proven tremendously valuable as a model organism, enabling many major discoveries in neuroscientific disease research. The plethora of genetic tools available in Drosophila allows for exquisite targeted manipulation of the genome. Due to its relatively short lifespan, complex questions of brain function can be addressed more rapidly than in other model organisms, such as the mouse. Here we discuss features of the fly as a model for human neurodegenerative disease. There are many distinct fly models for a range of neurodegenerative diseases; we focus on select studies from models of polyglutamine disease and amyotrophic lateral sclerosis that illustrate the type and range of insights that can be gleaned. In discussion of these models, we underscore strengths of the fly in providing understanding into mechanisms and pathways, as a foundation for translational and therapeutic research. PMID:26447127
Determination of effective loss factors in reduced SEA models
NASA Astrophysics Data System (ADS)
Chimeno Manguán, M.; Fernández de las Heras, M. J.; Roibás Millán, E.; Simón Hidalgo, F.
2017-01-01
The definition of Statistical Energy Analysis (SEA) models for large complex structures is highly conditioned by the classification of the structure elements into a set of coupled subsystems and the subsequent determination of the loss factors representing both the internal damping and the coupling between subsystems. The accurate definition of the complete system can lead to excessively large models as the size and complexity increases. This fact can also rise practical issues for the experimental determination of the loss factors. This work presents a formulation of reduced SEA models for incomplete systems defined by a set of effective loss factors. This reduced SEA model provides a feasible number of subsystems for the application of the Power Injection Method (PIM). For structures of high complexity, their components accessibility can be restricted, for instance internal equipments or panels. For these cases the use of PIM to carry out an experimental SEA analysis is not possible. New methods are presented for this case in combination with the reduced SEA models. These methods allow defining some of the model loss factors that could not be obtained through PIM. The methods are validated with a numerical analysis case and they are also applied to an actual spacecraft structure with accessibility restrictions: a solar wing in folded configuration.
Drosophila as an In Vivo Model for Human Neurodegenerative Disease.
McGurk, Leeanne; Berson, Amit; Bonini, Nancy M
2015-10-01
With the increase in the ageing population, neurodegenerative disease is devastating to families and poses a huge burden on society. The brain and spinal cord are extraordinarily complex: they consist of a highly organized network of neuronal and support cells that communicate in a highly specialized manner. One approach to tackling problems of such complexity is to address the scientific questions in simpler, yet analogous, systems. The fruit fly, Drosophila melanogaster, has been proven tremendously valuable as a model organism, enabling many major discoveries in neuroscientific disease research. The plethora of genetic tools available in Drosophila allows for exquisite targeted manipulation of the genome. Due to its relatively short lifespan, complex questions of brain function can be addressed more rapidly than in other model organisms, such as the mouse. Here we discuss features of the fly as a model for human neurodegenerative disease. There are many distinct fly models for a range of neurodegenerative diseases; we focus on select studies from models of polyglutamine disease and amyotrophic lateral sclerosis that illustrate the type and range of insights that can be gleaned. In discussion of these models, we underscore strengths of the fly in providing understanding into mechanisms and pathways, as a foundation for translational and therapeutic research. Copyright © 2015 by the Genetics Society of America.
A framework for scalable parameter estimation of gene circuit models using structural information.
Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin
2013-07-01
Systematic and scalable parameter estimation is a key to construct complex gene regulatory models and to ultimately facilitate an integrative systems biology approach to quantitatively understand the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied for modeling of gene circuits, our results suggest that the use of more tailored approaches to use domain-specific information may be a key to reverse engineering of complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.