Modelling the evolution of complex conductivity during calcite precipitation on glass beads
NASA Astrophysics Data System (ADS)
Leroy, Philippe; Li, Shuai; Jougnot, Damien; Revil, André; Wu, Yuxin
2017-04-01
When pH and alkalinity increase, calcite frequently precipitates and hence modifies the petrophysical properties of porous media. The complex conductivity method can be used to directly monitor calcite precipitation in porous media because it is sensitive to the evolution of the mineralogy, pore structure and its connectivity. We have developed a mechanistic grain polarization model considering the electrochemical polarization of the Stern and diffuse layers surrounding calcite particles. Our complex conductivity model depends on the surface charge density of the Stern layer and on the electrical potential at the onset of the diffuse layer, which are computed using a basic Stern model of the calcite/water interface. The complex conductivity measurements of Wu et al. on a column packed with glass beads where calcite precipitation occurs are reproduced by our surface complexation and complex conductivity models. The evolution of the size and shape of calcite particles during the calcite precipitation experiment is estimated by our complex conductivity model. At the early stage of the calcite precipitation experiment, modelled particle sizes increase and calcite particles flatten with time because calcite crystals nucleate at the surface of glass beads and grow into larger calcite grains. At the later stage of the calcite precipitation experiment, modelled sizes and cementation exponents of calcite particles decrease with time because large calcite grains aggregate over multiple glass beads and only small calcite crystals polarize.
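For readers who want to experiment numerically: spectral induced polarization data of this kind are often summarized with the phenomenological Cole-Cole model rather than the mechanistic Stern-layer model developed in the paper. The sketch below evaluates a Cole-Cole complex conductivity spectrum; the parameter values are illustrative assumptions, not values from the study.

```python
import numpy as np

def cole_cole(omega, sigma0, m, tau, c):
    """Complex conductivity sigma*(omega) under the Cole-Cole model.

    sigma0: DC conductivity (S/m), m: chargeability, tau: relaxation
    time (s), c: shape exponent. Values below are illustrative only.
    """
    return sigma0 * (1.0 + (m / (1.0 - m)) * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

omega = 2 * np.pi * np.logspace(-2, 3, 50)            # 0.01 Hz to 1 kHz
sigma = cole_cole(omega, sigma0=0.01, m=0.05, tau=0.1, c=0.5)
print(sigma.real[:3], sigma.imag[:3])                 # real and quadrature parts
```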
Drewes, Rich; Zou, Quan; Goodman, Philip H
2009-01-01
Neuroscience modeling experiments often involve multiple complex neural network and cell model variants, complex input stimuli and input protocols, followed by complex data analysis. Coordinating all this complexity becomes a central difficulty for the experimenter. The Python programming language, along with its extensive library packages, has emerged as a leading "glue" tool for managing all sorts of complex programmatic tasks. This paper describes a toolkit called Brainlab, written in Python, that leverages Python's strengths for the task of managing the general complexity of neuroscience modeling experiments. Brainlab was also designed to overcome the major difficulties of working with the NCS (NeoCortical Simulator) environment in particular. Brainlab is an integrated model-building, experimentation, and data analysis environment for the powerful parallel spiking neural network simulator system NCS.
A model of clutter for complex, multivariate geospatial displays.
Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L
2009-02-01
A novel model of measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data to model predictions. The first experiment resulted in a .76 correlation between ratings and C3. The second experiment resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.
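The C3 algorithm itself is not reproduced in the abstract; as a loose illustration of the general idea (color clustering as a basis for a clutter score), the sketch below clusters an image's pixel colors and uses the entropy of cluster occupancy as a crude clutter proxy. The function and parameter choices are ours, not the published C3 model.

```python
import numpy as np
from sklearn.cluster import KMeans

def clutter_proxy(image, n_colors=8):
    """Crude clutter proxy: cluster pixel colors, then score clutter as the
    entropy of cluster occupancy (more, evenly used colors = more clutter).
    Illustrative only; not the published C3 algorithm."""
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)
    p = np.bincount(km.labels_, minlength=n_colors) / pixels.shape[0]
    return -np.sum(p[p > 0] * np.log2(p[p > 0]))

img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)   # stand-in image
print(round(clutter_proxy(img), 2))
```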
Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops
NASA Astrophysics Data System (ADS)
Rahman, Aminur; Jordan, Ian; Blackmore, Denis
2018-01-01
It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.
Emulator-assisted data assimilation in complex models
NASA Astrophysics Data System (ADS)
Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas
2016-09-01
Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty of the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and to suggest a strategy to alleviate this problem through the localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
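The Lorenz-95 (often written Lorenz-96) system mentioned above is a standard chaotic testbed for data assimilation. A minimal integration sketch, which could serve to generate truth runs and emulator training data (the forcing and dimension are the conventional choices, not settings from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz95(t, x, forcing=8.0):
    """Lorenz-95/96 tendencies with cyclic boundary conditions:
    dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

rng = np.random.default_rng(0)
n = 40
x0 = 8.0 + 0.01 * rng.standard_normal(n)     # small perturbation of the fixed point
sol = solve_ivp(lorenz95, (0.0, 10.0), x0, max_step=0.05)
print(sol.y.shape)                           # trajectories usable as training data
```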
Using machine learning tools to model complex toxic interactions with limited sampling regimes.
Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W
2013-03-19
A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by only one or a few stressors in natural systems. Laboratory experiments, which practical considerations limit to a few stressors at a few levels, are therefore difficult to link to real-world conditions. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
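A minimal sketch of the two-step procedure described above, with a hypothetical 4-dimensional stressor hyperspace and a synthetic nonlinear response standing in for measured endpoints (the response function is ours, purely for illustration):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Step 1: randomly sample the stressor hyperspace
# (here 4 hypothetical stressor concentrations scaled to [0, 1]).
X = rng.uniform(0.0, 1.0, size=(200, 4))

# Hypothetical biological endpoint with a nonlinear interaction term,
# standing in for responses measured under the sampled conditions.
y = X[:, 0] * X[:, 1] + np.sin(3 * X[:, 2]) - 0.5 * X[:, 3] \
    + 0.05 * rng.standard_normal(200)

# Step 2: extract the interaction model with an artificial neural network.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X, y)
print(round(net.score(X, y), 3))   # in-sample fit of the surrogate
```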
A business process modeling experience in a complex information system re-engineering.
Bernonville, Stéphanie; Vantourout, Corinne; Fendeler, Geneviève; Beuscart, Régis
2013-01-01
This article aims to share a business process modeling experience in a re-engineering project of a medical records department in a 2,965-bed hospital. It presents the modeling strategy, an extract of the results and the feedback experience.
NASA Astrophysics Data System (ADS)
Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi
2016-08-01
The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data, including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortices triggered by urban buildings well, and that the flow patterns in urban street canyons and building clusters can also be represented. Given the complex shapes of buildings and their distributions, the simulation discrepancies from the measurements are usually caused by the simplification of building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields of a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.
Tian, Xin; Li, Zengyuan; Chen, Erxue; Liu, Qinhuo; Yan, Guangjian; Wang, Jindi; Niu, Zheng; Zhao, Shaojie; Li, Xin; Pang, Yong; Su, Zhongbo; van der Tol, Christiaan; Liu, Qingwang; Wu, Chaoyang; Xiao, Qing; Yang, Le; Mu, Xihan; Bo, Yanchen; Qu, Yonghua; Zhou, Hongmin; Gao, Shuai; Chai, Linna; Huang, Huaguo; Fan, Wenjie; Li, Shihua; Bai, Junhua; Jiang, Lingmei; Zhou, Ji
2015-01-01
The Complicate Observations and Multi-Parameter Land Information Constructions on Allied Telemetry Experiment (COMPLICATE) comprises a network of remote sensing experiments designed to enhance the dynamic analysis and modeling of remotely sensed information for complex land surfaces. Two types of experimental campaigns were established under the framework of COMPLICATE. The first was designed for continuous and elaborate experiments. The experimental strategy helps enhance our understanding of the radiative and scattering mechanisms of soil and vegetation and modeling of remotely sensed information for complex land surfaces. To validate the methodologies and models for dynamic analyses of remote sensing for complex land surfaces, the second campaign consisted of simultaneous satellite-borne, airborne, and ground-based experiments. During field campaigns, several continuous and intensive observations were obtained. Measurements were undertaken to answer key scientific issues, as follows: 1) Determine the characteristics of spatial heterogeneity and the radiative and scattering mechanisms of remote sensing on complex land surfaces. 2) Determine the mechanisms of spatial and temporal scale extensions for remote sensing on complex land surfaces. 3) Determine synergistic inversion mechanisms for soil and vegetation parameters using multi-mode remote sensing on complex land surfaces. Here, we introduce the background, the objectives, the experimental designs, the observations and measurements, and the overall advances of COMPLICATE. Through the implementation of COMPLICATE, we expect to contribute to quantitative remote sensing science and Earth observation techniques over the next several years. PMID:26332035
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes, it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model.
NASA Technical Reports Server (NTRS)
Befrui, Bizhan A.
1995-01-01
This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.
Calibration of 3D ALE finite element model from experiments on friction stir welding of lap joints
NASA Astrophysics Data System (ADS)
Fourment, Lionel; Gastebois, Sabrina; Dubourg, Laurent
2016-10-01
In order to support the design of a process as complex as Friction Stir Welding (FSW) for the aeronautic industry, numerical simulation software requires (1) an efficient and accurate Finite Element (F.E.) formulation that allows predicting welding defects, (2) proper modeling of the thermo-mechanical complexity of the FSW process and (3) calibration of the F.E. model against accurate measurements from FSW experiments. This work uses a parallel ALE formulation developed in the Forge® F.E. code to model the different possible defects (flashes and worm holes), while pin and shoulder threads are modeled by a new friction law at the tool/material interface. The FSW experiments require a complex tool with a scroll on the shoulder, which is instrumented to provide sensitive thermal data close to the joint. Calibration of the unknown material thermal coefficients, constitutive equation parameters and friction model from the measured forces, torques and temperatures is carried out using two F.E. models, Eulerian and ALE, to reach a satisfactory agreement assessed by the proper sensitivity of the simulation to process parameters.
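The calibration step described above is essentially an inverse problem. A hedged sketch with a deliberately toy forward model in place of the Forge F.E. simulations (all the placeholder physics and parameter names are our assumptions) shows the typical least-squares structure:

```python
import numpy as np
from scipy.optimize import least_squares

np.random.seed(0)

def forward(params, rpm):
    """Toy forward model: predicted temperature and torque as a function of
    a friction coefficient mu and a thermal contact conductance h.
    Placeholder physics, not the F.E. model from the paper."""
    mu, h = params
    temp = 300.0 + 40.0 * mu * rpm / (1.0 + 0.002 * h)
    torque = 50.0 * mu / (1.0 + 0.01 * rpm)
    return np.concatenate([temp, torque])

rpm = np.array([400.0, 800.0, 1200.0])
# Synthetic "measurements" from known parameters plus 2% noise.
measured = forward([0.3, 1500.0], rpm) * (1 + 0.02 * np.random.randn(6))

def residuals(params):
    return forward(params, rpm) - measured

fit = least_squares(residuals, x0=[0.5, 1000.0],
                    bounds=([0.05, 100.0], [1.0, 5000.0]))
print(fit.x)   # calibrated friction and thermal parameters
```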
Long, Katrina M; McDermott, Fiona; Meadows, Graham N
2018-06-20
The healthcare system has proved a challenging environment for innovation, especially in the area of health services management and research. This is often attributed to the complexity of the healthcare sector, characterized by intersecting biological, social and political systems spread across geographically disparate areas. To help make sense of this complexity, researchers are turning towards new methods and frameworks, including simulation modeling and complexity theory. Herein, we describe our experiences implementing and evaluating a health services innovation in the form of simulation modeling. We explore the strengths and limitations of complexity theory in evaluating health service interventions, using our experiences as examples. We then argue for the potential of pragmatism as an epistemic foundation for the methodological pluralism currently found in complexity research. We discuss the similarities between complexity theory and pragmatism, and close by revisiting our experiences putting pragmatic complexity theory into practice. We found the commonalities between pragmatism and complexity theory to be striking. These included a sensitivity to research context, a focus on applied research, and the valuing of different forms of knowledge. We found that, in practice, a pragmatic complexity theory approach provided more flexibility to respond to the rapidly changing context of health services implementation and evaluation. However, this approach requires a redefinition of implementation success, away from pre-determined outcomes and process fidelity, to one that embraces the continual learning, evolution, and emergence that characterized our project.
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth; Richardson, Lea
2004-01-01
Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.
Stratospheric General Circulation with Chemistry Model (SGCCM)
NASA Technical Reports Server (NTRS)
Rood, Richard B.; Douglass, Anne R.; Geller, Marvin A.; Kaye, Jack A.; Nielsen, J. Eric; Rosenfield, Joan E.; Stolarski, Richard S.
1990-01-01
In the past two years constituent transport and chemistry experiments have been performed using both simple single constituent models and more complex reservoir species models. Winds for these experiments have been taken from the data assimilation effort, Stratospheric Data Analysis System (STRATAN).
Chisholm, Anna; Nelson, Pauline A; Pearce, Christina J; Keyworth, Chris; Griffiths, Christopher E M; Cordingley, Lis; Bundy, Christine
2016-02-01
Individuals' illness representations, including beliefs about psoriasis (a complex immune-mediated condition), and their emotional responses to the condition guide self-management behaviour. It is also plausible that health care providers' illness representations guide their own management of psoriasis. Patients commonly report poor health care experiences related to psoriasis, and the role of health care providers' beliefs, emotions, as well as their knowledge, experiences and behaviours ('personal models') in this is unexplored. This study aimed to explore health care providers' personal models of psoriasis. Qualitative analysis of 23 semi-structured interviews with health care professionals providing care for psoriasis patients was performed. Purposive sampling achieved maximum variation regarding participant discipline, level of experience, gender and age. The self-regulatory/common sense model informed data collection and initial data analysis. Principles of framework analysis were used to generate predetermined and emergent key issues related to practitioners' personal models. Three types of personal model emerged. Sophisticated-Linear Model: 70% of practitioners recognized psoriasis as a complex condition but managed it as a skin condition. Mixed Model: 17% of practitioners recognized/managed some elements of psoriasis as complex and some as a skin condition. Sophisticated-Sophisticated Model: 13% recognized and managed psoriasis as a complex condition. Across the data set, five themes emerged illustrating key patterns underpinning these different models, including (1) Recognising complexity, (2) Putting skin first, (3) Taking on the complexities of psoriasis with the patient, (4) Aiming for clearance, and (5) Affective experiences within psoriasis consultations. Health care providers recognized psoriasis as a complex condition but commonly reported managing it as a simple skin condition. Providers' beliefs and management approaches varied in the extent to which they were consistent with one another, and their emotional experiences during consultations may vary depending upon their personal model. The findings could inform future dermatology training programmes by highlighting the role of health care providers' illness representations in clinical management of the condition. What is already known on this subject? Health behaviour is predicted by underlying beliefs and emotions associated with an illness and its treatment. Few studies have examined health care providers' beliefs and emotions about the illnesses they manage in clinical practice. Many patients are dissatisfied with dermatology consultations and wish to be treated holistically. What does this study add? Qualitative exploration of health care providers' beliefs/emotions revealed their personal models of psoriasis. Providers' personal models of psoriasis vary in coherence and are often skin rather than whole person focused. Further investigation of health care providers' models of psoriasis and their impact on health outcomes is needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jernigan, Dann A.; Blanchat, Thomas K.
It is necessary to improve understanding and develop temporally- and spatially-resolved integral scale validation data of the heat flux incident to a complex object, in addition to measuring the thermal response of said object located within the fire plume, for the validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.
Vinnakota, Kalyan C; Beard, Daniel A; Dash, Ranjan K
2009-01-01
Identification of a complex biochemical system model requires appropriate experimental data. Models constructed on the basis of data from the literature often contain parameters that are not identifiable with high sensitivity and therefore require additional experimental data to identify those parameters. Here we report the application of a local sensitivity analysis to design experiments that will improve the identifiability of previously unidentifiable model parameters in a model of mitochondrial oxidative phosphorylation and the tricarboxylic acid cycle. Experiments were designed based on measurable biochemical reactants in a dilute suspension of purified cardiac mitochondria with experimentally feasible perturbations to this system. Experimental perturbations and variables yielding the most parameters above a 5% sensitivity level are presented and discussed.
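A minimal illustration of local sensitivity analysis of the kind described, using a toy two-parameter model in place of the mitochondrial model (the model function and the use of the 5% threshold are illustrative stand-ins):

```python
import numpy as np

def model(params, t):
    """Toy stand-in for the biochemical model: two-exponential output."""
    k1, k2 = params
    return np.exp(-k1 * t) + 0.5 * np.exp(-k2 * t)

def local_sensitivities(params, t, rel_step=0.01):
    """Normalized local sensitivities by central finite differences:
    response per fractional change in each parameter."""
    base = model(params, t)
    sens = []
    for i, p in enumerate(params):
        up, dn = list(params), list(params)
        up[i] = p * (1 + rel_step)
        dn[i] = p * (1 - rel_step)
        dy = model(up, t) - model(dn, t)
        sens.append((dy / (2 * rel_step)) / base)
    return np.array(sens)

t = np.linspace(0.1, 10.0, 50)
S = local_sensitivities([1.0, 0.1], t)
print(np.abs(S).max(axis=1) > 0.05)   # parameters above a 5% sensitivity level
```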
In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...
NASA Astrophysics Data System (ADS)
Lange, Rolf
1989-07-01
The three-dimensional, diagnostic, particle-in-cell transport and diffusion model MATHEW/ADPIC is used to test its transferability from one site in complex terrain to another with different characteristics, under stable nighttime drainage flow conditions. The two sites were subject to extensive drainage flow tracer experiments under the multilaboratory Atmospheric Studies in Complex Terrain (ASCOT) program: the first being a valley in the Geysers geothermal region of northern California, and the second a canyon in western Colorado. The domain in each case is approximately 10 × 10 km. Results from the 1980 Geysers model evaluation are only quoted, while the 1984 Brush Creek model evaluation is described in detail. Results from comparing computed with measured concentrations from a variety of tracer releases indicate that 52% of the 4531 samples from five experiments in Brush Creek and 50% of the 831 samples from four experiments in the Geysers agreed within a factor of 5. When an angular 10° uncertainty, consistent with anemometer reliability limits in complex terrain, was applied to the model results, model performance improved such that 78% of the samples compared within a factor of 5 for Brush Creek and 77% for the Geysers. Across the range of other concentration-ratio factors, the results indicate that the model is satisfactorily transferable without tuning it to a specific site.
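The factor-of-N agreement statistic used above is a standard dispersion-model skill score and is straightforward to compute; a small sketch (our own helper, not code from the study):

```python
import numpy as np

def within_factor(pred, obs, factor=5.0):
    """Fraction of paired samples where prediction and observation agree
    within the given factor (e.g. FAC5 for factor=5)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    ok = (pred > 0) & (obs > 0)              # ratios only defined for positives
    ratio = pred[ok] / obs[ok]
    return np.mean((ratio >= 1.0 / factor) & (ratio <= factor))

pred = np.array([1.2, 0.3, 30.0, 4.0, 0.5])  # illustrative concentrations
obs = np.array([1.0, 0.8, 5.0, 3.0, 0.4])
print(within_factor(pred, obs))              # 0.8 -> 80% within a factor of 5
```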
Urban Modification of Convection and Rainfall in Complex Terrain
NASA Astrophysics Data System (ADS)
Freitag, B. M.; Nair, U. S.; Niyogi, D.
2018-03-01
Despite a globally growing proportion of cities located in regions of complex terrain, interactions between urbanization and complex terrain and their meteorological impacts are not well understood. We utilize numerical model simulations and satellite data products to investigate such impacts over San Miguel de Tucumán, Argentina. Numerical modeling experiments show urbanization results in 20-30% less precipitation downwind of the city and an eastward shift in precipitation upwind. Our experiments show that changes in surface energy, boundary layer dynamics, and thermodynamics induced by urbanization interact synergistically with the persistent forcing of atmospheric flow by complex terrain. With urbanization increasing in mountainous regions, land-atmosphere feedbacks can exaggerate meteorological forcings leading to weather impacts that require important considerations for sustainable development of urban regions within complex terrain.
Maximally Expressive Modeling of Operations Tasks
NASA Technical Reports Server (NTRS)
Jaap, John; Richardson, Lea; Davis, Elizabeth
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.
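As an illustration of what such a model container might look like in code (a hypothetical schema of our own devising, not the schema actually used for Space Station operations):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One schedulable activity with its requirements and dependencies."""
    name: str
    duration_min: int                                   # nominal duration, minutes
    resources: dict = field(default_factory=dict)       # e.g. {"crew": 1, "power_W": 150}
    predecessors: list = field(default_factory=list)    # names of prerequisite tasks

@dataclass
class Model:
    """A container of related tasks: the unit a scheduler would ingest."""
    name: str
    tasks: list = field(default_factory=list)

# Hypothetical experiment model with setup/run/teardown ordering.
exp = Model("protein_crystal_growth", tasks=[
    Task("setup", 30, {"crew": 1}),
    Task("run", 240, {"power_W": 200}, predecessors=["setup"]),
    Task("teardown", 20, {"crew": 1}, predecessors=["run"]),
])
print(len(exp.tasks), exp.tasks[1].predecessors)
```

A "maximally expressive" schema would extend such containers with the flexibilities the text mentions, for example alternative resource sets or relaxable orderings, so the scheduler can trade them off against scarce station resources.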
Schlosser, Florian; Moskaleva, Lyudmila V; Kremleva, Alena; Krüger, Sven; Rösch, Notker
2010-06-28
With a relativistic all-electron density functional method, we studied two anionic uranium(VI) carbonate complexes that are important for uranium speciation and transport in aqueous medium: the mononuclear tris(carbonato) complex [UO2(CO3)3]^4- and the trinuclear hexa(carbonato) complex [(UO2)3(CO3)6]^6-. Focusing on the structures in solution, we applied for the first time a full solvation treatment to these complexes. We approximated short-range effects by explicit aqua ligands and described long-range electrostatic interactions via a polarizable continuum model. Structures and vibrational frequencies of "gas-phase" models with explicit aqua ligands agree best with experiment. This is accidental because the continuum model of the solvent to some extent overestimates the electrostatic interactions of these highly anionic systems with the bulk solvent. The calculated free energy change when three mononuclear complexes associate into the trinuclear complex agrees well with experiment and supports the formation of the latter species upon acidification of a uranyl carbonate solution.
Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Landman, Drew
2015-01-01
Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run-efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.
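A hedged sketch of the Response Surface Methods workflow: fit a full quadratic regression to a randomized design. The factors, ranges, and response function below are invented placeholders, not the wind tunnel data:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Hypothetical randomized design: angle of attack (deg), tilt angle (deg),
# throttle fraction.
X = rng.uniform([-10, 0, 0], [20, 90, 1], size=(60, 3))

# Placeholder lift-coefficient response with curvature and an interaction.
y = (0.08 * X[:, 0] + 0.01 * X[:, 1] - 0.0001 * X[:, 1] ** 2
     + 0.5 * X[:, 2] + 0.002 * X[:, 0] * X[:, 2]
     + 0.02 * rng.standard_normal(60))

# Full quadratic response surface, the usual RSM regression form.
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)
print(round(rsm.score(X, y), 3))
```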
A novel BA complex network model on color template matching.
Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on two fundamental rules of BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing evolving process of template's color distribution. And then the template's BA complex network model can be used to select important color pixels which have much larger effects than other color pixels in matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD based matching and SAD based matching. Experiments show the performance of color template matching results can be improved based on the proposed algorithm. To the best of our knowledge, this is the first study about how to model the color space of images using a proper complex network model and apply the complex network model to template matching.
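The two BA rules (growth and preferential attachment) are easy to reproduce; the short sketch below uses networkx to grow a scale-free network and rank nodes by degree, analogous to selecting the "important" color pixels described above. The mapping from template colors to network nodes is the paper's contribution and is not reproduced here.

```python
import networkx as nx

# Grow a Barabasi-Albert scale-free network: each new node attaches to
# m existing nodes with probability proportional to their degree.
G = nx.barabasi_albert_graph(n=500, m=2, seed=42)

# Rank nodes by degree; in the paper's scheme, high-degree nodes would
# correspond to the color pixels that dominate the matching process.
degree = dict(G.degree())
important = sorted(degree, key=degree.get, reverse=True)[:20]
print(important[:5], max(degree.values()))
```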
Real-time biomimetic Central Pattern Generators in an FPGA for hybrid experiments
Ambroise, Matthieu; Levi, Timothée; Joucla, Sébastien; Yvert, Blaise; Saïghi, Sylvain
2013-01-01
This investigation of the leech heartbeat neural network system led to the development of a low-resource, real-time, biomimetic digital hardware for use in hybrid experiments. The leech heartbeat neural network is one of the simplest central pattern generators (CPG). In biology, CPGs provide the rhythmic bursts of spikes that form the basis for all muscle contraction orders (heartbeat) and locomotion (walking, running, etc.). The leech neural network system was previously investigated and its CPG formalized in the Hodgkin–Huxley (HH) neural model, the most complex devised to date. However, the resources required for a neural model are proportional to its complexity. In response to this issue, this article describes a biomimetic implementation of a network of 240 CPGs in an FPGA (Field Programmable Gate Array), using a simple model (Izhikevich), and proposes a new synapse model: the activity-dependent depression synapse. The network implementation architecture operates on a single computation core. This digital system works in real-time, requires few resources, and has the same bursting activity behavior as the complex model. The implementation of this CPG was initially validated by comparing it with a simulation of the complex model. Its activity was then matched with pharmacological data from the rat spinal cord activity. This digital system opens the way for future hybrid experiments and represents an important step toward hybridization of biological tissue and artificial neural networks. This CPG network is also likely to be useful for mimicking the locomotion activity of various animals and developing hybrid experiments for neuroprosthesis development. PMID:24319408
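The Izhikevich model adopted for the implementation is compactly defined by two equations and a reset rule. A minimal Euler-integration sketch in software (not the FPGA implementation; the parameter set is the standard published bursting/chattering regime):

```python
def izhikevich(a=0.02, b=0.2, c=-50.0, d=2.0, i_ext=10.0,
               t_max=500.0, dt=0.25):
    """Euler simulation of an Izhikevich neuron:
    v' = 0.04 v^2 + 5v + 140 - u + I,  u' = a(bv - u),
    with reset v <- c, u <- u + d when v >= 30 mV.
    These a, b, c, d values give intrinsic bursting, the regime
    relevant for CPG-like rhythms."""
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:                  # spike cut-off and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

print(len(izhikevich()), "spikes in 500 ms")
```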
Complex Dynamics in Nonequilibrium Economics and Chemistry
NASA Astrophysics Data System (ADS)
Wen, Kehong
Complex dynamics provides a new approach in dealing with economic complexity. We study interactively the empirical and theoretical aspects of business cycles. The way of exploring complexity is similar to that in the study of an oscillatory chemical system (BZ system), a model system for complex behavior. We contribute by qualitatively simulating the complex periodic patterns observed in controlled BZ experiments, narrowing the gap between modeling and experiment. The gap between theory and reality is much wider in economics, which involves the study of human expectations and decisions, the essential difference from the natural sciences. Our empirical and theoretical studies make substantial progress in closing this gap. With help from a new development in nonequilibrium physics, i.e., complex spectral theory, we advance our technique for detecting characteristic time scales in empirical economic data. We obtain correlation resonances, which give oscillating modes with decays for correlation decomposition, from different time series including the S&P 500, M2, crude oil spot prices, and GNP. The time scales found are strikingly compatible with business experiences and other studies in business cycles. They reveal the non-Markovian nature of coherent markets. The resonances enhance the evidence of economic chaos obtained by using other tests. The evolving multi-humped distributions produced by the moving-time-window technique reveal the nonequilibrium nature of economic behavior. They reproduce the American economic history of booms and busts. The studies seem to provide a way out of the debate on chaos versus noise and unify the cyclical and stochastic approaches in explaining business fluctuations. Based on these findings and a new expectation formulation, we construct a business cycle model that gives patterns qualitatively compatible with those found empirically. The soft-bouncing oscillator model provides a better alternative than the harmonic oscillator or the random walk model as the building block in business cycle theory. The mathematical structure of the model (delay differential equation) is studied analytically and numerically. The research paves the way toward sensible economic forecasting.
Slow dynamics in glasses: A comparison between theory and experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, J. C.
Minimalist theories of complex systems are broadly of two kinds: mean field and axiomatic. So far, all theories of complex properties absent from simple systems and intrinsic to glasses are axiomatic. Stretched Exponential Relaxation (SER) is the prototypical complex temporal property of glasses, discovered by Kohlrausch 150 years ago, and now observed almost universally in microscopically homogeneous, complex nonequilibrium materials, including luminescent electronic Coulomb glasses. A critical comparison of alternative axiomatic theories with both numerical simulations and experiments strongly favors channeled dynamical trap models over static percolative or energy landscape models. The topics discussed cover those reported since the author's review article in 1996, with an emphasis on parallels between channel bifurcation in electronic and molecular relaxation.
Modelling and Simulation as a Recognizing Method in Education
ERIC Educational Resources Information Center
Stoffa, Veronika
2004-01-01
Computer animation and simulation models of complex processes and events, used as a method of instruction, can be an effective didactic device. Gaining deeper knowledge about the modelled objects helps in planning simulation experiments oriented toward the processes and events under study. Animation experiments realized on multimedia computers can aid easier…
Leder, Helmut
2017-01-01
Visual complexity is relevant for many areas ranging from improving usability of technical displays or websites up to understanding aesthetic experiences. Therefore, many attempts have been made to relate objective properties of images to perceived complexity in artworks and other images. It has been argued that visual complexity is a multidimensional construct mainly consisting of two dimensions: A quantitative dimension that increases complexity through number of elements, and a structural dimension representing order negatively related to complexity. The objective of this work is to study human perception of visual complexity utilizing two large independent sets of abstract patterns. A wide range of computational measures of complexity was calculated, further combined using linear models as well as machine learning (random forests), and compared with data from human evaluations. Our results confirm the adequacy of existing two-factor models of perceived visual complexity consisting of a quantitative and a structural factor (in our case mirror symmetry) for both of our stimulus sets. In addition, a non-linear transformation of mirror symmetry giving more influence to small deviations from symmetry greatly increased explained variance. Thus, we again demonstrate the multidimensional nature of human complexity perception and present comprehensive quantitative models of the visual complexity of abstract patterns, which might be useful for future experiments and applications. PMID:29099832
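A hedged sketch of the modeling pipeline described above: compute a quantitative feature (element density) and a structural feature (deviation from mirror symmetry) for synthetic binary patterns, then fit a random forest to complexity ratings. Both features and the rating function below are our stand-ins, not the paper's measures or data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

def features(pattern):
    """Two illustrative predictors: element density (quantitative factor)
    and deviation from left-right mirror symmetry (structural factor)."""
    density = pattern.mean()
    asym = np.mean(pattern != np.fliplr(pattern))
    return [density, asym]

# Synthetic binary 16x16 patterns with varying fill rates.
patterns = rng.random((300, 16, 16)) < rng.uniform(0.1, 0.9, (300, 1, 1))
X = np.array([features(p) for p in patterns])

# Hypothetical complexity ratings rising with density and asymmetry.
ratings = 2.0 * X[:, 0] + 3.0 * X[:, 1] + 0.1 * rng.standard_normal(300)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ratings)
print(round(model.score(X, ratings), 2))
```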
Sociality influences cultural complexity.
Muthukrishna, Michael; Shulman, Ben W; Vasilescu, Vlad; Henrich, Joseph
2014-01-07
Archaeological and ethnohistorical evidence suggests a link between a population's size and structure, and the diversity or sophistication of its toolkits or technologies. Addressing these patterns, several evolutionary models predict that both the size and social interconnectedness of populations can contribute to the complexity of its cultural repertoire. Some models also predict that a sudden loss of sociality or of population will result in subsequent losses of useful skills/technologies. Here, we test these predictions with two experiments that permit learners to access either one or five models (teachers). Experiment 1 demonstrates that naive participants who could observe five models, integrate this information and generate increasingly effective skills (using an image editing tool) over 10 laboratory generations, whereas those with access to only one model show no improvement. Experiment 2, which began with a generation of trained experts, shows how learners with access to only one model lose skills (in knot-tying) more rapidly than those with access to five models. In the final generation of both experiments, all participants with access to five models demonstrate superior skills to those with access to only one model. These results support theoretical predictions linking sociality to cumulative cultural evolution.
Genetic algorithm learning in a New Keynesian macroeconomic setup.
Hommes, Cars; Makarewicz, Tomasz; Massaro, Domenico; Smits, Tom
2017-01-01
In order to understand heterogeneous behavior amongst agents, empirical data from Learning-to-Forecast (LtF) experiments can be used to construct learning models. This paper follows up on Assenza et al. (2013) by using a Genetic Algorithms (GA) model to replicate the results from their LtF experiment. In this GA model, individuals optimize an adaptive, a trend following and an anchor coefficient in a population of general prediction heuristics. We replicate experimental treatments in a New-Keynesian environment with increasing complexity and use Monte Carlo simulations to investigate how well the model explains the experimental data. We find that the evolutionary learning model is able to replicate the three different types of behavior, i.e. convergence to steady state, stable oscillations and dampened oscillations in the treatments using one GA model. Heterogeneous behavior can thus be explained by an adaptive, anchor and trend extrapolating component and the GA model can be used to explain heterogeneous behavior in LtF experiments with different types of complexity.
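A minimal GA sketch in the spirit of the paper: evolve (adaptive, trend, anchor) coefficients of a general prediction heuristic against a price series. The heuristic form, bounds, and GA settings below are our simplifications, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(7)
prices = 10 + np.cumsum(0.2 * rng.standard_normal(100))  # stand-in price series

def mse(coeffs, p):
    """Forecast error of a heuristic with adaptive (a), trend (t) and
    anchor (c) coefficients; the rule form is our simplification."""
    a, t, c = coeffs
    f = np.empty_like(p)
    f[:2] = p[:2]
    for i in range(2, len(p)):
        anchor = c * p[:i].mean() + (1 - c) * p[i - 1]
        f[i] = a * anchor + (1 - a) * f[i - 1] + t * (p[i - 1] - p[i - 2])
    return np.mean((f[2:] - p[2:]) ** 2)

# Population of coefficient triples (adaptive in [0,1], trend in [-1,1],
# anchor in [0,1]); truncation selection plus Gaussian mutation.
pop = rng.uniform([0, -1, 0], [1, 1, 1], size=(30, 3))
for generation in range(40):
    fitness = np.array([mse(ind, prices) for ind in pop])
    parents = pop[np.argsort(fitness)[:15]]
    children = parents + 0.05 * rng.standard_normal(parents.shape)
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(ind, prices) for ind in pop])]
print(best.round(2))   # evolved adaptive, trend and anchor coefficients
```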
Tutoring and Multi-Agent Systems: Modeling from Experiences
ERIC Educational Resources Information Center
Bennane, Abdellah
2010-01-01
Tutoring systems become complex and are offering varieties of pedagogical software as course modules, exercises, simulators, systems online or offline, for single user or multi-user. This complexity motivates new forms and approaches to the design and the modelling. Studies and research in this field introduce emergent concepts that allow the…
Trends in modeling Biomedical Complex Systems
Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Petro
2009-01-01
In this paper we provide an introduction to the techniques for multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which is becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts in mathematical modeling methodologies and statistical inference, bioinformatics and standards tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068
Configurations of base-pair complexes in solutions. [nucleotide chemistry
NASA Technical Reports Server (NTRS)
Egan, J. T.; Nir, S.; Rein, R.; Macelroy, R.
1978-01-01
A theoretical search for the most stable conformations (i.e., stacked or hydrogen bonded) of the base pairs A-U and G-C in water, CCl4, and CHCl3 solutions is presented. The calculations of free energies indicate a significant role of the solvent in determining the conformations of the base-pair complexes. The application of the continuum method yields preferred conformations in good agreement with experiment. Results of the calculations with this method emphasize the importance of both the electrostatic interactions between the two bases in a complex, and the dipolar interaction of the complex with the entire medium. In calculations with the solvation shell method, the last term, i.e., dipolar interaction of the complex with the entire medium, was added. With this modification the prediction of the solvation shell model agrees both with the continuum model and with experiment, i.e., in water the stacked conformation of the bases is preferred.
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
NASA Astrophysics Data System (ADS)
Svoray, Tal; Assouline, Shmuel; Katul, Gabriel
2015-11-01
Current literature provides a large number of publications about ecohydrological processes and their effect on the biota in drylands. Given the limited laboratory and field experiments in such systems, many of these publications are based on mathematical models of varying complexity. The underlying implicit assumption is that the data set used to evaluate these models covers the parameter space of conditions that characterize drylands and that the models represent the actual processes with acceptable certainty. However, a question arises: to what extent are these mathematical models valid when confronted with observed ecosystem complexity? This Introduction reviews the 16 papers that comprise the Special Section on Eco-hydrology of Semiarid Environments: Confronting Mathematical Models with Ecosystem Complexity. The subjects studied in these papers include rainfall regime, infiltration and preferential flow, evaporation and evapotranspiration, annual net primary production, dispersal and invasion, and vegetation greening. The findings in the papers published in this Special Section show that innovative mathematical modeling approaches can represent actual field measurements. Hence, there are strong grounds for suggesting that mathematical models can contribute to greater understanding of ecosystem complexity through characterization of space-time dynamics of biomass and water storage as well as their multiscale interactions. However, the generality of the models and their low-dimensional representation of many processes may also be a "curse" that results in failures when particulars of an ecosystem are required. It is envisaged that the search for a unifying "general" model, while seductive, may remain elusive in the foreseeable future. It is for this reason that improving the merger between experiments and models of various degrees of complexity continues to shape the future research agenda.
NASA Astrophysics Data System (ADS)
Rautenbach, V.; Çöltekin, A.; Coetzee, S.
2015-08-01
In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they 'travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.
Shen, Weifeng; Jiang, Libing; Zhang, Mao; Ma, Yuefeng; Jiang, Guanyu; He, Xiaojun
2014-01-01
To systematically review research methods for mass casualty incidents (MCI) and to introduce the concept and characteristics of complexity science and the artificial systems, computational experiments and parallel execution (ACP) method. We searched the PubMed, Web of Knowledge, China Wanfang and China Biology Medicine (CBM) databases for relevant studies. Searches were performed without year or language restrictions and used combinations of the following key words: "mass casualty incident", "MCI", "research method", "complexity science", "ACP", "approach", "science", "model", "system" and "response". Articles were retrieved using the above keywords, and only those involving research methods for MCI were enrolled. Research methods for MCI have increased markedly over the past few decades. At present, the dominant research methods for MCI are the theory-based approach, the empirical approach, evidence-based science, mathematical modeling and computer simulation, simulation experiments, experimental methods, the scenario approach and complexity science. This article provides an overview of the development of research methodology for MCI. The progress of routine research approaches and of complexity science is briefly presented. Furthermore, the authors conclude that the reductionism underlying the exact sciences is not suitable for complex MCI systems, and that the only feasible alternative is complexity science. Finally, the authors review how the ACP method, which combines artificial systems, computational experiments and parallel execution, provides a new way to address research on complex MCI.
Hahs-Vaughn, Debbie L; McWayne, Christine M; Bulotsky-Shearer, Rebecca J; Wen, Xiaoli; Faria, Ann-Marie
2011-06-01
Complex survey data are collected by means other than simple random samples. This creates two analytical issues: nonindependence and unequal selection probability. Failing to address these issues results in underestimated standard errors and biased parameter estimates. Using data from the nationally representative Head Start Family and Child Experiences Survey (FACES; 1997 and 2000 cohorts), three diverse multilevel models are presented that illustrate differences in results depending on whether the complex sampling issues are addressed or ignored. Limitations of using complex survey data are reported, along with recommendations for reporting complex sample results. © The Author(s) 2011
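As a minimal illustration of the nonindependence issue (a synthetic sketch, not the FACES analysis; all numbers are invented), the following Python snippet simulates cluster-correlated data and compares the naive standard error, which assumes simple random sampling, with a cluster-robust estimate:

    import numpy as np

    rng = np.random.default_rng(0)
    n_clusters, per_cluster = 50, 20      # e.g. classrooms and children
    between_sd, within_sd = 1.0, 1.0      # between- and within-cluster spread

    # Cluster-correlated observations: shared cluster effect + individual noise.
    cluster_effects = rng.normal(0.0, between_sd, n_clusters)
    y = (np.repeat(cluster_effects, per_cluster)
         + rng.normal(0.0, within_sd, n_clusters * per_cluster))

    # Naive SE treats all observations as independent -> too small.
    se_naive = y.std(ddof=1) / np.sqrt(y.size)

    # Cluster-robust SE treats each cluster mean as one independent draw.
    cluster_means = y.reshape(n_clusters, per_cluster).mean(axis=1)
    se_robust = cluster_means.std(ddof=1) / np.sqrt(n_clusters)

    print(f"naive SE = {se_naive:.3f}, cluster-robust SE = {se_robust:.3f}")

Under these settings the naive standard error understates the cluster-robust one by roughly a factor of three, which is exactly the kind of bias the authors warn about.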
Complex terrain experiments in the New European Wind Atlas.
Mann, J; Angelou, N; Arnqvist, J; Callies, D; Cantero, E; Arroyo, R Chávez; Courtney, M; Cuxart, J; Dellwik, E; Gottschall, J; Ivanell, S; Kühn, P; Lea, G; Matos, J C; Palma, J M L M; Pauscher, L; Peña, A; Rodrigo, J Sanz; Söderberg, S; Vasiljevic, N; Rodrigues, C Veiga
2017-04-13
The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiments, some of which are nearly completed while others are in the planning stage. All experiments focus on the flow properties that are relevant for wind turbines, so the main focus is the mean flow and the turbulence at heights between 40 and 300 m. Also extreme winds, wind shear and veer, and diurnal and seasonal variations of the wind are of interest. Common to all the experiments is the use of Doppler lidar systems to supplement, and in some cases completely replace, meteorological towers. Many of the lidars will be equipped with scan heads that will allow for arbitrary scan patterns by several synchronized systems. Two pilot experiments, one in Portugal and one in Germany, show the value of using multiple synchronized scanning lidars, both in terms of the accuracy of the measurements and the atmospheric physical processes that can be studied. The experimental data will be used for validation of atmospheric flow models and will by the end of the project be freely available. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Authors.
Complex terrain experiments in the New European Wind Atlas
Angelou, N.; Callies, D.; Cantero, E.; Arroyo, R. Chávez; Courtney, M.; Cuxart, J.; Dellwik, E.; Gottschall, J.; Ivanell, S.; Kühn, P.; Lea, G.; Matos, J. C.; Palma, J. M. L. M.; Peña, A.; Rodrigo, J. Sanz; Söderberg, S.; Vasiljevic, N.; Rodrigues, C. Veiga
2017-01-01
The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiments, some of which are nearly completed while others are in the planning stage. All experiments focus on the flow properties that are relevant for wind turbines, so the main focus is the mean flow and the turbulence at heights between 40 and 300 m. Also extreme winds, wind shear and veer, and diurnal and seasonal variations of the wind are of interest. Common to all the experiments is the use of Doppler lidar systems to supplement, and in some cases completely replace, meteorological towers. Many of the lidars will be equipped with scan heads that will allow for arbitrary scan patterns by several synchronized systems. Two pilot experiments, one in Portugal and one in Germany, show the value of using multiple synchronized scanning lidars, both in terms of the accuracy of the measurements and the atmospheric physical processes that can be studied. The experimental data will be used for validation of atmospheric flow models and will by the end of the project be freely available. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265025
ERIC Educational Resources Information Center
Sulz, Lauren; Gibbons, Sandra; Naylor, Patti-Jean; Wharf Higgins, Joan
2016-01-01
Background: Comprehensive School Health models offer a promising strategy to elicit changes in student health behaviours. To maximise the effect of such models, the active involvement of teachers and students in the change process is recommended. Objective: The goal of this project was to gain insight into the experiences and motivations of…
ERIC Educational Resources Information Center
Carroll, Susanne E.
1995-01-01
Criticizes the computer modelling experiments conducted by Sokolik and Smith (1992), which involved the learning of French gender attribution using connectionist architecture. The article argues that the experiments greatly oversimplified the complexity of gender learning, in that they were designed in such a way that knowledge that must be…
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
Models of chromatin spatial organisation in the cell nucleus
NASA Astrophysics Data System (ADS)
Nicodemi, Mario
2014-03-01
In the cell nucleus, chromosomes have a complex architecture serving vital functional purposes. Recent experiments have started unveiling the interaction map of DNA sites genome-wide, revealing different levels of organisation at different scales. The principles which orchestrate such a complex 3D structure, though, remain mysterious. I will give an overview of the scenario emerging from classical polymer physics models of the general aspects of chromatin spatial organisation. The available experimental data, which can be rationalised in a single framework, support a picture where chromatin is a complex mixture of differently folded regions, self-organised across spatial scales according to basic physical mechanisms. I will also discuss applications to specific DNA loci, e.g. the HoxB locus, where models informed with biological details, and tested against targeted experiments, can help identify the determinants of folding.
Tokunaga, Taisuke; Yatabe, Takeshi; Matsumoto, Takahiro; Ando, Tatsuya; Yoon, Ki-Seok; Ogo, Seiji
2017-01-01
We report a mechanistic investigation of catalytic H2 evolution from formic acid in water using a formate-bridged dinuclear Ru complex as a formate hydrogen lyase model. The mechanistic study is based on isotope-labeling experiments involving a hydrogen isotope exchange reaction.
The Complex Action Recognition via the Correlated Topic Model
Tu, Hong-bin; Xia, Li-min; Wang, Zheng-wu
2014-01-01
Human complex action recognition is an important research area within action recognition. Among the various obstacles to human complex action recognition, one of the most challenging is dealing with self-occlusion, where one body part occludes another. This paper presents a new method for human complex action recognition based on optical flow and the correlated topic model (CTM). Firstly, a Markov random field is used to represent the occlusion relationship between human body parts in terms of an occlusion state variable. Secondly, structure from motion (SFM) is used to reconstruct the missing data of point trajectories. Then, key frames are extracted based on motion features from optical flow, and width-to-height ratios are extracted from the human silhouette. Finally, the correlated topic model (CTM) is used to classify actions. Experiments were performed on the KTH, Weizmann, and UIUC action datasets to test and evaluate the proposed method. The comparative results showed that the proposed method was more effective than the compared methods. PMID:24574920
Near-optimal experimental design for model selection in systems biology.
Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M
2013-10-15
Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method achieves the best constant approximation factor attainable in polynomial time, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. The toolbox 'NearOED' is available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).
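The abstract does not spell out the algorithm's internals, but the near-optimality guarantee it cites is characteristic of greedy maximization of a submodular information criterion. A generic sketch of that strategy (not the NearOED implementation), using a log-determinant, D-optimality-style gain under an assumed linear-Gaussian model over candidate readout/time-point combinations:

    import numpy as np

    def greedy_design(X, budget):
        """Greedily pick `budget` rows of X (candidate measurements) that
        maximize a log-det information gain; submodularity of this gain is
        what makes the greedy selection provably near-optimal."""
        n, p = X.shape
        chosen = []
        for _ in range(budget):
            gains = []
            for i in range(n):
                if i in chosen:
                    gains.append(-np.inf)
                    continue
                S = X[chosen + [i]]
                gains.append(np.linalg.slogdet(np.eye(p) + S.T @ S)[1])
            chosen.append(int(np.argmax(gains)))
        return chosen

    rng = np.random.default_rng(1)
    candidates = rng.normal(size=(40, 5))  # 40 hypothetical readout/time pairs
    print(greedy_design(candidates, budget=6))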
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, Amy B.; Stauffer, Philip H.; Reed, Donald T.
The primary objective of the experimental effort described here is to aid in understanding the complex nature of liquid, vapor, and solid transport occurring around heated nuclear waste in bedded salt. In order to gain confidence in the predictive capability of numerical models, experimental validation must be performed to ensure that (a) hydrological and physiochemical parameters and (b) processes are correctly simulated. The experiments proposed here are designed to study aspects of the system that have not been satisfactorily quantified in prior work. In addition to exploring the complex coupled physical processes in support of numerical model validation, lessons learned from these experiments will facilitate preparations for larger-scale experiments that may utilize similar instrumentation techniques.
Inductive reasoning about causally transmitted properties.
Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D; Tenenbaum, Joshua B
2008-11-01
Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates' context-sensitive use of taxonomic and food web knowledge to guide reasoning about causal transmission and shows good qualitative agreement between model predictions and human inferences. A second experiment demonstrates strong quantitative and qualitative fits to inferences about a more complex artificial food web. A third experiment investigates human reasoning about complex novel food webs where species have known taxonomic relations. Results demonstrate a double-dissociation between the predictions of our causal model and a related taxonomic model [Kemp, C., & Tenenbaum, J. B. (2003). Learning domain structures. In Proceedings of the 25th annual conference of the cognitive science society]: the causal model predicts human inferences about diseases but not genes, while the taxonomic model predicts human inferences about genes but not diseases. We contrast our framework with previous models of category-based induction and previous formal instantiations of intuitive theories, and outline challenges in developing a complete model of context-sensitive reasoning.
Hands-on Force Spectroscopy: Weird Springs and Protein Folding
ERIC Educational Resources Information Center
Euler, Manfred
2008-01-01
A force spectroscopy model experiment is presented using a low-cost tensile apparatus described earlier. Force-extension measurements of twisted rubber bands are obtained. They exhibit a complex nonlinear elastic behaviour that resembles atomic force spectroscopy investigations of molecules of titin, a muscle protein. The model experiments open up…
Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments
Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...
2016-06-13
We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data in which the dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could capture both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the responses of the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.
Buyel, Johannes Felix; Fischer, Rainer
2014-01-31
Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
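As an illustration of the DoE setup stage (the factor names below are hypothetical, not the authors' actual factor set), a two-level full factorial over three candidate factors can be enumerated directly:

    from itertools import product

    # Hypothetical two-level factors for a transient-expression experiment.
    factors = {
        "promoter":          ["single 35S", "double 35S"],
        "plant_age_days":    [35, 42],
        "incubation_temp_C": [22, 25],
    }

    # Full factorial: one run per combination of factor levels (2**3 = 8 runs).
    runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
    for i, run in enumerate(runs, 1):
        print(f"run {i}: {run}")

Software-guided DoE tools typically start from such an enumeration and then prune it to a fractional design when the full factorial is too large, which is one form of the complexity reduction the authors describe.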
Social learning modulates the lateralization of emotional valence.
Shamay-Tsoory, Simone G; Lavidor, Michal; Aharon-Peretz, Judith
2008-08-01
Although neuropsychological studies of lateralization of emotion have emphasized valence (positive vs. negative) or type (basic vs. complex) dimensions, the interaction between the two dimensions has yet to be elucidated. The purpose of the current study was to test the hypothesis that recognition of basic emotions is processed preferentially by the right prefrontal cortex (PFC), whereas recognition of complex social emotions is processed preferentially by the left PFC. Experiment 1 assessed the ability of healthy controls and patients with right and left PFC lesions to recognize basic and complex emotions. Experiment 2 modeled the patient's data of Experiment 1 on healthy participants under lateralized displays of the emotional stimuli. Both experiments support the Type as well as the Valence Hypotheses. However, our findings indicate that the Valence Hypothesis holds for basic but less so for complex emotions. It is suggested that, since social learning overrules the basic preference of valence in the hemispheres, the processing of complex emotions in the hemispheres is less affected by valence.
NASA Astrophysics Data System (ADS)
Koestner, Stefan
2009-09-01
With the increasing size and degree of complexity of today's experiments in high energy physics, the amount of work and complexity required to integrate a complete subdetector into an experiment control system is often underestimated. We report here on the layered software structure and protocols used by the LHCb experiment to control its detectors and readout boards. The experiment control system of LHCb is based on the commercial SCADA system PVSS II. Readout boards which are outside the radiation area are accessed via embedded credit-card-sized PCs which are connected to a large local area network. The SPECS protocol is used for control of the front-end electronics. Finite state machines are introduced to facilitate the control of a large number of electronic devices and to model the whole experiment at the level of an expert system.
General Blending Models for Data From Mixture Experiments
Brown, L.; Donev, A. N.; Bissett, A. C.
2015-01-01
We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
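For concreteness, the quadratic Scheffé model y = Σ b_i x_i + Σ_{i<j} b_ij x_i x_j (no intercept, mixture fractions summing to 1) can be fitted by ordinary least squares; a sketch on invented three-component data:

    import numpy as np

    def scheffe_quadratic(X):
        """Model matrix for the quadratic Scheffé mixture model:
        linear terms x_i plus all pairwise blending terms x_i * x_j."""
        k = X.shape[1]
        cols = [X[:, i] for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
        return np.column_stack(cols)

    # Simplex-centroid design for three components (fractions sum to 1).
    X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
                  [1/3, 1/3, 1/3]], dtype=float)
    y = np.array([10.0, 12.0, 8.0, 14.5, 9.2, 11.8, 12.5])  # invented responses

    b, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)
    print("linear terms:", np.round(b[:3], 2), "blending terms:", np.round(b[3:], 2))

Broadly, Becker-type extensions swap in different blending functions for the pairwise products while the least-squares fitting step stays the same; the unified class proposed by the authors covers both.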
Dong, Yadong; Sun, Yongqi; Qin, Chao
2018-01-01
The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.
Barton, C Michael; Ullah, Isaac I; Bergin, Sean
2010-11-28
The evolution of Mediterranean landscapes during the Holocene has been increasingly governed by the complex interactions of water and human land use. Different land-use practices change the amount of water flowing across the surface and infiltrating the soil, and change water's ability to move surface sediments. Conversely, water amplifies the impacts of human land use and extends the ecological footprint of human activities far beyond the borders of towns and fields. Advances in computational modelling offer new tools to study the complex feedbacks between land use, land cover, topography and surface water. The Mediterranean Landscape Dynamics project (MedLand) is building a modelling laboratory where experiments can be carried out on the long-term impacts of agropastoral land use, and whose results can be tested against the archaeological record. These computational experiments are providing new insights into the socio-ecological consequences of human decisions at varying temporal and spatial scales.
NASA Astrophysics Data System (ADS)
Long, Yoann; Charbouillot, Tiffany; Brigante, Marcello; Mailhot, Gilles; Delort, Anne-Marie; Chaumerliac, Nadine; Deguillaume, Laurent
2013-10-01
Currently, cloud chemistry models are including more detailed and explicit multiphase mechanisms based on laboratory experiments that determine such values as kinetic constants, stability constants of complexes and hydration constants. However, these models are still subject to many uncertainties related to the aqueous chemical mechanisms they use. Particularly, the role of oxidants such as iron and hydrogen peroxide in the oxidative capacity of the cloud aqueous phase has typically not been validated against laboratory experimental data. To fill this gap, we adapted the M2C2 model (Model of Multiphase Cloud Chemistry) to simulate irradiation experiments on synthetic aqueous solutions under controlled conditions (e.g., pH, temperature, light intensity) and for actual cloud water samples. Various chemical compounds that purportedly contribute to the oxidative budget in cloud water (i.e., iron, oxidants, such as hydrogen peroxide: H2O2) were considered. Organic compounds (oxalic, formic and acetic acids) were taken into account as target species because they have the potential to form iron complexes and are good indicators of the oxidative capacity of the cloud aqueous phase via their oxidation in this medium. The range of concentrations for all of the chemical compounds evaluated was representative of in situ measurements. Numerical outputs were compared with experimental data that consisted of a time evolution of the concentrations of the target species. The chemical mechanism in the model describing the “oxidative engine” of the HxOy/iron (HxOy = H2O2, HO2•/O2•- and HO•) chemical system was consistent with laboratory measurements. Thus, the degradation of the carboxylic acids evaluated was closely reproduced by the model. However, photolysis of the Fe(C2O4)+ complex needs to be considered in cloud chemistry models for polluted conditions (i.e., acidic pH) to correctly reproduce oxalic acid degradation. We also show that iron and formic acid lead to a stable complex whose photoreactivity has currently not been investigated. The updated aqueous chemical mechanism was compared with data from irradiation experiments using natural cloud water. The new reactions considered in the model (i.e., iron complex formation with oxalic and formic acids) correctly reproduced the experimental observations.
The use of experimental design to find the operating maximum power point of PEM fuel cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crăciunescu, Aurelian; Pătularu, Laurenţiu; Ciumbulea, Gloria
2015-03-10
Proton Exchange Membrane (PEM) fuel cells are difficult to model due to their complex nonlinear nature. In this paper, the development of a PEM fuel cell mathematical model based on the Design of Experiments methodology is described. The Design of Experiments approach provides a very efficient way to obtain a mathematical model for the studied multivariable system with only a few experiments. The obtained results can be used for optimization and control of PEM fuel cell systems.
Interaction of the sea breeze with a river breeze in an area of complex coastal heating
NASA Technical Reports Server (NTRS)
Zhong, Shiyuan; Takle, Eugene S.; Leone, John M., Jr.
1991-01-01
The interaction of the sea-breeze circulation with a river-breeze circulation in an area of complex coastal heating (east coast of Florida) was studied using a 3D finite-element mesoscale model. The model simulations are compared with temperature and wind fields observed on a typical fall day during the Kennedy Space Center Atmospheric Boundary Layer Experiment. The results from numerical experiments designed to isolate the effect of the river breeze indicate that the convergence in the sea-breeze front is suppressed when it passes over the cooler surface of the rivers.
NASA Astrophysics Data System (ADS)
Zerkle, Ronald D.; Prakash, Chander
1995-03-01
This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its relatively low computational cost. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.
NASA Technical Reports Server (NTRS)
Zerkle, Ronald D.; Prakash, Chander
1995-01-01
This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its relatively low computational cost. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
A category adjustment approach to memory for spatial location in natural scenes.
Holden, Mark P; Curby, Kim M; Newcombe, Nora S; Shipley, Thomas F
2010-05-01
Memories for spatial locations often show systematic errors toward the central value of the surrounding region. This bias has been explained using a Bayesian model in which fine-grained and categorical information are combined (Huttenlocher, Hedges, & Duncan, 1991). However, experiments testing this model have largely used locations contained in simple geometric shapes. Use of this paradigm raises 2 issues. First, do results generalize to the complex natural world? Second, what types of information might be used to segment complex spaces into constituent categories? Experiment 1 addressed the 1st question by showing a bias toward prototypical values in memory for spatial locations in complex natural scenes. Experiment 2 addressed the 2nd question by manipulating the availability of basic visual cues (using color negatives) or of semantic information about the scene (using inverted images). Error patterns suggest that both perceptual and conceptual information are involved in segmentation. The possible neurological foundations of location memory of this kind are discussed. PsycINFO Database Record (c) 2010 APA, all rights reserved.
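The Bayesian combination at the heart of the category adjustment model reduces, for Gaussian memory and category distributions, to a precision-weighted average of the fine-grained trace and the category prototype; a minimal sketch (the values are illustrative):

    def category_adjusted_estimate(memory, prototype, sigma_mem, sigma_cat):
        """Precision-weighted blend of a noisy fine-grained memory and a
        category prototype (Huttenlocher, Hedges, & Duncan, 1991 style)."""
        w = sigma_cat ** 2 / (sigma_mem ** 2 + sigma_cat ** 2)  # weight on memory
        return w * memory + (1 - w) * prototype

    # The noisier the memory trace, the stronger the pull toward the prototype.
    print(category_adjusted_estimate(2.0, 5.0, sigma_mem=1.0, sigma_cat=2.0))  # 2.6
    print(category_adjusted_estimate(2.0, 5.0, sigma_mem=2.0, sigma_cat=1.0))  # 4.4

The bias toward prototypical values in Experiment 1 corresponds to the second case: an uncertain fine-grained memory shifts estimates toward the category center.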
The advantage of flexible neuronal tunings in neural network models for motor learning
Marongelli, Ellisha N.; Thoroughman, Kurt A.
2013-01-01
Human motor adaptation to novel environments is often modeled by a basis function network that transforms desired movement properties into estimated forces. This network employs a layer of nodes that have fixed broad tunings that generalize across the input domain. Learning is achieved by updating the weights of these nodes in response to training experience. This conventional model is unable to account for rapid flexibility observed in human spatial generalization during motor adaptation. However, added plasticity in the widths of the basis function tunings can achieve this flexibility, and several neurophysiological experiments have revealed flexibility in tunings of sensorimotor neurons. We found a model, Locally Weighted Projection Regression (LWPR), which uniquely possesses the structure of a basis function network in which both the weights and tuning widths of the nodes are updated incrementally during adaptation. We presented this LWPR model with training functions of different spatial complexities and monitored incremental updates to receptive field widths. An inverse pattern of dependence of receptive field adaptation on experienced error became evident, underlying both a relationship between generalization and complexity, and a unique behavior in which generalization always narrows after a sudden switch in environmental complexity. These results implicate a model that is flexible in both basis function widths and weights, like LWPR, as a viable alternative model for human motor adaptation that can account for previously observed plasticity in spatial generalization. This theory can be tested by using the behaviors observed in our experiments as novel hypotheses in human studies. PMID:23888141
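LWPR itself updates receptive fields through incremental weighted regressions; the core idea that both weights and widths adapt can be conveyed by a much simpler Gaussian basis function network trained by stochastic gradient descent (a sketch, with illustrative learning rates and a sinusoidal stand-in for the training function):

    import numpy as np

    rng = np.random.default_rng(0)
    centers = np.linspace(-1, 1, 10)   # fixed node centers
    w = np.zeros(10)                   # output weights (adapted)
    s = np.full(10, 0.3)               # tuning widths (also adapted)
    lr_w, lr_s = 0.2, 0.02
    target = lambda x: np.sin(3 * x)   # stand-in training function

    for _ in range(5000):
        x = rng.uniform(-1, 1)
        a = np.exp(-(x - centers) ** 2 / (2 * s ** 2))   # node activations
        err = target(x) - w @ a
        # Gradient steps on both the weights and the receptive field widths.
        w += lr_w * err * a
        s += lr_s * err * w * a * (x - centers) ** 2 / s ** 3
        s = np.clip(s, 0.05, 1.0)      # keep receptive fields within bounds

    print("adapted widths:", np.round(s, 2))

In a conventional fixed-width network only the first update rule would run; allowing the second is what lets generalization narrow or broaden with experienced error, as the authors observe in LWPR.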
Developing an Animal Model of Human Amnesia: The Role of the Hippocampus
ERIC Educational Resources Information Center
Kesner, Raymond P.; Goodrich-Hunsaker, Naomi J.
2010-01-01
This review summarizes a series of experiments aimed at answering the question whether the hippocampus in rats can serve as an animal model of amnesia. It is recognized that a comparison of the functions of the rat hippocampus with human hippocampus is difficult, because of differences in methodology, differences in complexity of life experiences,…
Complexity in Soil Systems: What Does It Mean and How Should We Proceed?
NASA Astrophysics Data System (ADS)
Faybishenko, B.; Molz, F. J.; Brodie, E.; Hubbard, S. S.
2015-12-01
The complex soil systems approach is needed fundamentally for the development of integrated, interdisciplinary methods to measure and quantify the physical, chemical and biological processes taking place in soil, and to determine the role of fine-scale heterogeneities. This presentation reviews concepts and observations concerning complexity and complex systems theory, including terminology, emergent complexity and simplicity, self-organization, and a general approach to the study of complex systems using Weaver's (1948) concept of "organized complexity." These concepts are used to provide understanding of complex soil systems, and to develop experimental and mathematical approaches to soil microbiological processes. The results of numerical simulations, observations and experiments are presented that indicate the presence of deterministic chaotic dynamics in soil microbial systems. So what are the implications for scientists who wish to develop mathematical models in the area of organized complexity or to perform experiments to help clarify an aspect of an organized complex system? Modelers have to deal with coupled systems having at least three dependent variables, and they have to forgo making linear approximations to nonlinear phenomena. The analogous rule for experimentalists is that they need to perform experiments that involve measurement of at least three interacting entities (variables depending on time, space, and each other); these entities could be, for example, microbes in soil penetrated by roots. If a process being studied in a soil affects the soil properties, as biofilm formation does, then this effect has to be measured and included. The mathematical implications of this viewpoint are examined, and results of numerical solutions to a system of equations demonstrating deterministic chaotic behavior are also discussed using time series and 3D strange attractors.
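A concrete instance of the 'at least three interacting variables, no linear approximations' rule is the Lorenz system, whose three coupled nonlinear equations suffice for deterministic chaos (an illustration of the modeling rule, not the soil model from the presentation):

    import numpy as np

    def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    state = np.array([1.0, 1.0, 1.0])
    trajectory = np.empty((20000, 3))
    for i in range(20000):                 # forward-Euler time series
        state = lorenz_step(state)
        trajectory[i] = state

    # Plotting trajectory[:, 0] vs. trajectory[:, 2] traces the strange attractor.
    print("x range:", trajectory[:, 0].min().round(2), trajectory[:, 0].max().round(2))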
NASA Astrophysics Data System (ADS)
Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; Davis, J. A.
2018-01-01
The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect). A series of U(VI) - Na-montmorillonite batch adsorption experiments was conducted as a function of pH, with variable U(VI), Ca, and dissolved carbonate concentrations. Based on the experimental data, a new type of surface complexation model (SCM) was developed for montmorillonite, that specifically accounts for the spillover effect using the edge surface speciation model by Tournassat et al. (2016a). The SCM allows for a prediction of U(VI) adsorption under varying chemical conditions with a minimum number of fitting parameters, not only for our own experimental results, but also for a number of published data sets. The model agreed well with many of these datasets without introducing a second site type or including the formation of ternary U(VI)-carbonato surface complexes. The model predictions were greatly impacted by utilizing analytical measurements of dissolved inorganic carbon (DIC) concentrations in individual sample solutions rather than assuming solution equilibration with a specific partial pressure of CO2, even when the gas phase was laboratory air. Because of strong aqueous U(VI)-carbonate solution complexes, the measurement of DIC concentrations was even important for systems set up in the 'absence' of CO2, due to low levels of CO2 contamination during the experiment.
NASA Astrophysics Data System (ADS)
Nigmatullin, Raoul R.; Maione, Guido; Lino, Paolo; Saponaro, Fabrizio; Zhang, Wei
2017-01-01
In this paper, we suggest a general theory that makes it possible to describe experiments associated with reproducible or quasi-reproducible data reflecting the dynamical and self-similar properties of a wide class of complex systems. By a complex system we understand a system for which a model based on microscopic principles and suppositions about the nature of the matter is absent. This microscopic model is usually determined as "the best fit" model. The behavior of the complex system relative to a control variable (time, frequency, wavelength, etc.) can be described in terms of the so-called intermediate model (IM). One can prove that the fitting parameters of the IM are associated with the amplitude-frequency response of a segment of the Prony series. The segment of the Prony series, including the set of decomposition coefficients A_k and the set of exponential functions exp(λ_k t) (with k = 1,2,…,K), is limited by the final mode K. The exponential functions of this decomposition depend on time and are found by the original algorithm described in the paper. This approach serves as a logical continuation of the results obtained earlier in the paper [Nigmatullin RR, Zhang W and Striccoli D. General theory of experiment containing reproducible data: The reduction to an ideal experiment. Commun Nonlinear Sci Numer Simul, 27, (2015), pp 175-192] for reproducible experiments and includes the previous results as a partial case. In this paper, we consider a more complex case, when the available data form short samplings or exhibit some instability during the process of measurement. We give justified evidence and conditions proving the validity of this theory for the description of a wide class of complex systems in terms of a reduced set of fitting parameters belonging to the segment of the Prony series. The elimination of uncontrollable factors expressed in the form of the apparatus function is discussed. To illustrate how to apply the theory and take advantage of its benefits, we consider experimental data associated with typical working conditions of the injection system in a common rail diesel engine. In particular, the flow rate of the injected fuel is considered at different reference rail pressures. The measured data are treated by the proposed algorithm to verify their adherence to the proposed general theory. The obtained results demonstrate the effectiveness of the proposed theory.
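Once the exponents λ_k are known, the decomposition coefficients of the Prony segment F(t) ≈ Σ_{k=1..K} A_k exp(λ_k t) follow from linear least squares; a sketch on synthetic data (the paper's algorithm for locating the λ_k themselves is more elaborate):

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0, 5, 200)

    # Synthetic "measured" signal: two decaying modes plus noise.
    y = 2.0 * np.exp(-0.7 * t) + 0.5 * np.exp(-3.0 * t) + rng.normal(0, 0.01, t.size)

    lam = np.array([-0.7, -3.0])           # exponents, assumed already identified
    basis = np.exp(np.outer(t, lam))       # columns exp(lam_k * t)
    A, *_ = np.linalg.lstsq(basis, y, rcond=None)
    print("recovered coefficients A_k:", np.round(A, 3))   # ~ [2.0, 0.5]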
Models for Liquid Impact Onboard Sloshsat FLEVO
NASA Technical Reports Server (NTRS)
Vreeburg, Jan P. B.; Chato, David J.
2000-01-01
Orbital experiments on the behavior of liquid in spacecraft are planned. The Sloshsat free-flyer is described. Preparation of the experiments, and later evaluation, are supported by models of varying complexity. The characteristics of the models are discussed. Particular attention is given to the momentum transfer between the liquid and the spacecraft, in connection with the liquid impact that may occur at the end of a reorientation maneuver of the spacecraft.
ERIC Educational Resources Information Center
Thompson, Jennifer Jo; Conaway, Evan; Dolan, Erin L.
2016-01-01
Recent calls for reform in undergraduate biology education have emphasized integrating research experiences into the learning experiences of all undergraduates. Contemporary science research increasingly demands collaboration across disciplines and institutions to investigate complex research questions, providing new contexts and models for…
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is designed from role cohesion and coupling, and is applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective at streamlining the process than the methods of related studies.
A dynamic auditory-cognitive system supports speech-in-noise perception in older adults
Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina
2013-01-01
Understanding speech in noise is one of the most complex activities encountered in everyday life, relying on peripheral hearing, central auditory processing, and cognition. These abilities decline with age, and so older adults are often frustrated by a reduced ability to communicate effectively in noisy environments. Many studies have examined these factors independently; in the last decade, however, the idea of the auditory-cognitive system has emerged, recognizing the need to consider the processing of complex sounds in the context of dynamic neural circuits. Here, we use structural equation modeling to evaluate interacting contributions of peripheral hearing, central processing, cognitive ability, and life experiences to understanding speech in noise. We recruited 120 older adults (ages 55 to 79) and evaluated their peripheral hearing status, cognitive skills, and central processing. We also collected demographic measures of life experiences, such as physical activity, intellectual engagement, and musical training. In our model, central processing and cognitive function predicted a significant proportion of variance in the ability to understand speech in noise. To a lesser extent, life experience predicted hearing-in-noise ability through modulation of brainstem function. Peripheral hearing levels did not significantly contribute to the model. Previous musical experience modulated the relative contributions of cognitive ability and lifestyle factors to hearing in noise. Our models demonstrate the complex interactions required to hear in noise and the importance of targeting cognitive function, lifestyle, and central auditory processing in the management of individuals who are having difficulty hearing in noise. PMID:23541911
Probing eukaryotic cell mechanics via mesoscopic simulations
NASA Astrophysics Data System (ADS)
Pivkin, Igor V.; Lykov, Kirill; Nematbakhsh, Yasaman; Shang, Menglin; Lim, Chwee Teck
2017-11-01
We developed a new mesoscopic particle-based eukaryotic cell model which takes into account the cell membrane, cytoskeleton and nucleus. Breast epithelial cells were used in our studies. To estimate the viscoelastic properties of cells and to calibrate the computational model, we performed micropipette aspiration experiments. The model was then validated using data from microfluidic experiments. Using the validated model, we probed the contributions of sub-cellular components to whole-cell mechanics in micropipette aspiration and microfluidics experiments. We believe that the new model will allow the study in silico of numerous problems in the context of cell biomechanics in flows in complex domains, such as capillary networks and microfluidic devices.
Forecasting USAF JP-8 Fuel Needs
2009-03-01
…with more simple and easy-to-implement methods versus complex ones. When we consider long-term forecasts, 5 years in this case, our multiple regression model outperforms ANN modeling within the specified…
DEM Calibration Approach: design of experiment
NASA Astrophysics Data System (ADS)
Boikov, A. V.; Savelev, R. V.; Payor, V. A.
2018-05-01
The problem of DEM model calibration is considered in the article. It is proposed to divide model input parameters into those that require iterative calibration and those that can be measured directly. A new method for model calibration, based on the design of experiments for the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand. The results are processed with computer vision algorithms. Approximating functions are obtained and the error of the implemented hardware-software system is estimated. The prospects of the obtained results are discussed.
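A sketch of the approximating-function step (the parameter, the response, and all numbers are invented for illustration): fit a polynomial response surface mapping one iteratively calibrated DEM parameter to a measured bulk response, then invert it to calibrate:

    import numpy as np

    # Stand measurements: DEM rolling friction vs. observed angle of repose.
    friction = np.array([0.05, 0.10, 0.20, 0.30, 0.40])   # model input parameter
    angle    = np.array([21.0, 24.5, 29.0, 32.5, 34.8])   # degrees (invented)

    # Approximating function: quadratic response surface over the design points.
    surface = np.poly1d(np.polyfit(friction, angle, deg=2))

    # Calibration: the friction value whose prediction matches the target angle.
    target_angle = 31.0
    grid = np.linspace(friction.min(), friction.max(), 1000)
    best = grid[np.argmin(np.abs(surface(grid) - target_angle))]
    print(f"calibrated rolling friction ~ {best:.3f}")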
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romander, C. M.; Cagliostro, D. J.
Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-sec hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, an upper internals structure (UIS), and, in the more complex models SM 4 and SM 5, a Ni 200 thermal liner and core support structure. Water simulated the liquid sodium coolant and a low-density explosive simulated the HCDA loads.
Surface complexation modeling of proton and Cd adsorption onto an algal cell wall.
Kaulbach, Emily S; Szymanowski, Jennifer E S; Fein, Jeremy B
2005-06-01
This study quantifies Cd adsorption onto the cell wall of the algal species Pseudokirchneriella subcapitata by applying a surface complexation approach to model the observed adsorption behavior. We use potentiometric titrations to determine deprotonation constants and site concentrations for the functional groups on the algal cell wall. Adsorption and desorption kinetics experiments illustrate that adsorption of Cd onto the cell wall is rapid and reversible, except under low pH conditions. Adsorption experiments conducted as a function of pH and total Cd concentration yield the stoichiometry and site-specific stability constants for the important Cd-algal surface complexes. We model the acid/base properties of the algal cell wall by invoking four discrete surface functional group types, with pKa values of 3.9 +/- 0.3, 5.4 +/- 0.1, 7.6 +/- 0.3, and 9.6 +/- 0.4. The results of the Cd adsorption experiments indicate that the first, third, and fourth sites contribute to Cd adsorption under the experimental conditions, with calculated log stability constant values of 4.1 +/- 0.5, 5.4 +/- 0.5, and 6.1 +/- 0.4, respectively. Our results suggest that the stabilities of the Cd-surface complexes are high enough for algal adsorption to affect the fate and transport of Cd under some conditions and that on a per gram basis, algae and bacteria exhibit broadly similar extents of Cd adsorption.
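For a discrete site with acidity constant pKa, the deprotonated fraction at a given pH follows directly from the mass-action law as 1/(1 + 10^(pKa - pH)); a sketch evaluating the four reported site types (site concentrations are omitted, so this shows per-site speciation only):

    def deprotonated_fraction(pH, pKa):
        # Mass action for >SH <=> >S- + H+ gives 1 / (1 + 10**(pKa - pH)).
        return 1.0 / (1.0 + 10.0 ** (pKa - pH))

    site_pKas = [3.9, 5.4, 7.6, 9.6]   # the four discrete algal site types
    for pH in (4.0, 6.0, 8.0):
        fracs = [round(deprotonated_fraction(pH, p), 2) for p in site_pKas]
        print(f"pH {pH}: deprotonated fractions {fracs}")

Only deprotonated sites are available to complex Cd, which is consistent with the finding that different sites dominate adsorption over different parts of the experimental pH range.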
Pessêgo, Márcia; Basílio, Nuno; Muñiz, M Carmen; García-Río, Luis
2016-07-06
Counterion competitive complexation is a background process that is usually ignored when ionic hosts are used. Consequently, guest binding constants are strongly affected by the design of the titration experiments, in such a way that the results depend on the guest concentration and on the presence of added salts, usually buffers. In the present manuscript we show that these experimental difficulties can be overcome simply by taking the counterion competitive complexation into account. Moreover, a single titration allows us to obtain not only the true binding constants but also the stoichiometry of the complex, showing the formation of 1:1:1 (host : guest : counterion) complexes. The detection of high-stoichiometry complexes is not restricted to a single titration experiment; it also extends to displacement assays, where both competitive and competitive-cooperative complexation models are taken into consideration.
A general mechanism for competitor-induced dissociation of molecular complexes
Paramanathan, Thayaparan; Reeves, Daniel; Friedman, Larry J.; Kondev, Jane; Gelles, Jeff
2014-01-01
The kinetic stability of non-covalent macromolecular complexes controls many biological phenomena. Here we find that physical models of complex dissociation predict that competitor molecules will in general accelerate the breakdown of isolated bimolecular complexes by occluding rapid rebinding of the two binding partners. This prediction is largely independent of molecular details. We confirm the prediction with single-molecule fluorescence experiments on a well-characterized DNA strand dissociation reaction. Contrary to common assumptions, competitor-induced acceleration of dissociation can occur in biologically relevant competitor concentration ranges and does not necessarily imply ternary association of the competitor with the bimolecular complex. Thus, occlusion of complex rebinding may play a significant role in a variety of biomolecular processes. The results also show that single-molecule colocalization experiments can accurately measure dissociation rates despite their limited spatiotemporal resolution. PMID:25342513
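The occlusion mechanism can be captured in a few lines: after a microscopic dissociation event the partners either rebind or separate, and a competitor that caps free binding partners adds a separation channel, raising the apparent dissociation rate (a branching-ratio sketch; all rate constants are arbitrary illustrative numbers, not the paper's measurements):

    def apparent_koff(k_off, k_rebind, k_escape, k_comp, competitor):
        """Effective dissociation rate when rapid rebinding competes with
        diffusive escape and with capping by a competitor."""
        loss = k_escape + k_comp * competitor   # channels that end the pair
        return k_off * loss / (k_rebind + loss)

    for c in (0.0, 1.0, 10.0, 100.0):           # competitor concentration (a.u.)
        rate = apparent_koff(k_off=1.0, k_rebind=9.0, k_escape=1.0,
                             k_comp=0.1, competitor=c)
        print(f"[competitor] = {c:>5}: apparent k_off = {rate:.3f}")

The apparent rate rises smoothly from k_off/(1 + k_rebind/k_escape) toward the intrinsic k_off as competitor is added, with no ternary complex ever forming.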
NASA Astrophysics Data System (ADS)
Georgiou, K.; Abramoff, R. Z.; Harte, J.; Riley, W. J.; Torn, M. S.
2016-12-01
As global temperatures and atmospheric CO2 concentrations continue to increase, soil microbial activity and decomposition of soil organic matter (SOM) are expected to follow suit, potentially limiting soil carbon storage. Traditional global- and ecosystem-scale models simulate SOM decomposition using linear kinetics, which are inherently unable to reproduce carbon-concentration feedbacks, such as priming of native SOM at elevated CO2 concentrations. Recent studies using nonlinear microbial models of SOM decomposition seek to capture these interactions, and several groups are currently integrating these microbial models into Earth System Models (ESMs). However, despite their widespread ability to exhibit nonlinear responses, these models vary tremendously in complexity and, consequently, dynamics. In this study, we explore, both analytically and numerically, the emergent oscillatory behavior and insensitivity of SOM stocks to carbon inputs that have been deemed 'unrealistic' in recent microbial models. We discuss the sources of instability in four models of varying complexity, by sequentially reducing the complexity of a detailed model that includes microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We also present an alternative representation of microbial turnover that limits population sizes and, thus, reduces oscillations. We compare these models to several long-term carbon input manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that traditional linear and nonlinear models cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures, and that modifying microbial turnover results in more realistic predictions. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in ESMs.
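The oscillations at issue appear already in the simplest two-pool nonlinear microbial model, with substrate S consumed by biomass B through Michaelis-Menten kinetics; a forward-Euler sketch (parameter values are illustrative, not from any calibrated ESM):

    import numpy as np

    # dS/dt = I - Vmax*B*S/(Km+S);  dB/dt = eps*Vmax*B*S/(Km+S) - m*B
    I, Vmax, Km, eps, m = 1.0, 2.0, 50.0, 0.5, 0.02
    S, B = 100.0, 2.0
    dt, steps = 0.1, 200_000

    S_trace = np.empty(steps)
    for i in range(steps):
        uptake = Vmax * B * S / (Km + S)
        S += dt * (I - uptake)
        B += dt * (eps * uptake - m * B)
        S_trace[i] = S

    # Early swings are large; late ones decay toward the steady state.
    print("early S range:", S_trace[:50_000].min().round(2), S_trace[:50_000].max().round(2))
    print("late  S range:", S_trace[-50_000:].min().round(2), S_trace[-50_000:].max().round(2))

Note that at steady state the biomass equals eps*I/m while the substrate pool is Km*m/(eps*Vmax - m), independent of the input I; that insensitivity of SOM stocks to carbon inputs is precisely the behavior the DIRT comparisons are used to test.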
Flexible Space-Filling Designs for Complex System Simulations
2013-06-01
…interior of the experimental region and cannot fit higher-order models. We present a genetic algorithm that constructs space-filling designs with minimal correlations. Keywords: Computer Experiments, Design of Experiments, Genetic Algorithm, Latin Hypercube, Response Surface Methodology, Nearly Orthogonal.
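A sketch of the core move in such algorithms: start from a random Latin hypercube and apply column-wise swap mutations, keeping the swaps that reduce the maximum pairwise column correlation (a stochastic hill-climber standing in for the report's full genetic algorithm):

    import numpy as np

    rng = np.random.default_rng(3)
    n_runs, n_factors = 17, 5

    def max_abs_corr(d):
        c = np.corrcoef(d.T)
        return np.abs(c[np.triu_indices_from(c, k=1)]).max()

    # Random Latin hypercube: each column is a permutation of the run levels.
    design = np.column_stack([rng.permutation(n_runs) for _ in range(n_factors)])
    score = max_abs_corr(design)

    for _ in range(20_000):
        col = rng.integers(n_factors)
        i, j = rng.integers(n_runs, size=2)
        design[[i, j], col] = design[[j, i], col]   # swap two levels in a column
        new = max_abs_corr(design)
        if new <= score:
            score = new                             # keep an improving swap...
        else:
            design[[i, j], col] = design[[j, i], col]  # ...otherwise undo it

    print(f"max |pairwise column correlation| = {score:.4f}")

Swapping within a column preserves the Latin hypercube's one-dimensional space-filling property, so the search only ever trades off the correlation structure.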
NASA Technical Reports Server (NTRS)
McNeill, Justin
1995-01-01
The Multimission Image Processing Subsystem (MIPS) at the Jet Propulsion Laboratory (JPL) has managed transitions of application software sets from one operating system and hardware platform to multiple operating systems and hardware platforms. As part of these transitions, cost estimates were generated from the personal experience of in-house developers and managers to calculate the total effort required for such projects. Productivity measures have been collected for two such transitions, one very large and the other relatively small in terms of source lines of code. These estimates used a cost estimation model similar to the Software Engineering Laboratory (SEL) Effort Estimation Model. Experience in transitioning software within JPL MIPS has uncovered a high incidence of interface complexity. Interfaces, both internal and external to individual software applications, have contributed to software transition project complexity, and thus to scheduling difficulties and larger-than-anticipated design work on software to be ported.
Modeling protein complexes with BiGGER.
Krippahl, Ludwig; Moura, José J; Palma, P Nuno
2003-07-01
This article describes the method and results of our participation in the Critical Assessment of PRediction of Interactions (CAPRI) experiment, using the protein docking program BiGGER (Bimolecular complex Generation with Global Evaluation and Ranking) (Palma et al., Proteins 2000;39:372-384). Of five target complexes (CAPRI targets 2, 4, 5, 6, and 7), only one was successfully predicted (target 6), but BiGGER generated reasonable models for targets 4, 5, and 7, which could have been identified if additional biochemical information had been available. Copyright 2003 Wiley-Liss, Inc.
U.S. Geological Survey Groundwater Modeling Software: Making Sense of a Complex Natural Resource
Provost, Alden M.; Reilly, Thomas E.; Harbaugh, Arlen W.; Pollock, David W.
2009-01-01
Computer models of groundwater systems simulate the flow of groundwater, including water levels, and the transport of chemical constituents and thermal energy. Groundwater models afford hydrologists a framework on which to organize their knowledge and understanding of groundwater systems, and they provide insights water-resources managers need to plan effectively for future water demands. Building on decades of experience, the U.S. Geological Survey (USGS) continues to lead in the development and application of computer software that allows groundwater models to address scientific and management questions of increasing complexity.
Observation-Driven Configuration of Complex Software Systems
NASA Astrophysics Data System (ADS)
Sage, Aled
2010-06-01
The ever-increasing complexity of software systems makes them hard to comprehend, predict and tune due to emergent properties and non-deterministic behaviour. Complexity arises from the size of software systems and the wide variety of possible operating environments: the increasing choice of platforms and communication policies leads to ever more complex performance characteristics. In addition, software systems exhibit different behaviour under different workloads. Many software systems are designed to be configurable so that policies can be chosen to meet the needs of various stakeholders. For complex software systems it can be difficult to accurately predict the effects of a change and to know which configuration is most appropriate. This thesis demonstrates that it is useful to run automated experiments that measure a selection of system configurations. Experiments can find configurations that meet the stakeholders' needs, find interesting behavioural characteristics, and help produce predictive models of the system's behaviour. The design and use of ACT (Automated Configuration Tool) for running such experiments is described, in combination with a number of search strategies for deciding on the configurations to measure. Design Of Experiments (DOE) is discussed, with emphasis on Taguchi Methods. These statistical methods have been used extensively in manufacturing, but have not previously been used for configuring software systems. The novel contribution here is an industrial case study, applying the combination of ACT and Taguchi Methods to DC-Directory, a product from Data Connection Ltd (DCL). The case study investigated the applicability of Taguchi Methods for configuring complex software systems. Taguchi Methods were found to be useful for modelling and configuring DC-Directory, making them a valuable addition to the techniques available to system administrators and developers.
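For readers unfamiliar with Taguchi Methods, the core idea is to measure only an orthogonal fraction of all configurations and read main effects off column-wise averages. The sketch below uses the standard L4 array; the factor names and response values are hypothetical, not taken from the DC-Directory case study.

```python
# Sketch of orthogonal-array experimentation: the L4 (2-level, 3-factor)
# Taguchi array selects 4 of the 8 possible configurations, and main
# effects are estimated from column-wise response means. Factor names and
# latency values are invented for illustration.
import numpy as np

L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])                 # standard L4 orthogonal array
factors = ["cache_size", "thread_pool", "log_level"]   # hypothetical
response = np.array([210.0, 180.0, 150.0, 170.0])      # measured latency (ms)

for j, name in enumerate(factors):
    lo = response[L4[:, j] == 0].mean()
    hi = response[L4[:, j] == 1].mean()
    print(f"{name}: level-0 mean={lo:.0f} ms, level-1 mean={hi:.0f} ms")
```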
Klotz, K H; Benz, R
1993-01-01
Stationary and kinetic experiments were performed on lipid bilayer membranes to study the mechanism of iodine- and bromine-mediated halide transport in detail. The stationary conductance data suggested that four different 1:1 complexes between I2 and Br2 and the halides I- and Br- were responsible for the observed conductance increase by iodine and bromine (I3-, I2Br-, Br2I-, and Br3-). Charge pulse experiments allowed the further elucidation of the transport mechanism. Only two of three exponential voltage relaxations predicted by the Läuger model could be resolved under all experimental conditions. This means that either the heterogeneous complexation reactions kR (association) and kD (dissociation) were too fast to be resolved or that the neutral carriers were always in equilibrium within the membrane. Experiments at different carrier and halide concentrations suggested that the translocation of the neutral carrier is much faster than the other processes involved in carrier-mediated ion transport. The model was modified accordingly. From the charge pulse data at different halide concentrations, the translocation rate constant of the complexed carriers, kAS, the dissociation constant, kD, and the total surface concentration of charged carriers, NAS, could be evaluated from one single charge pulse experiment. The association rate of the complex, kR, could be obtained in some cases from the plot of the stationary conductance data as a function of the halide concentration in the aqueous phase. The translocation rate constant, kAS, of the different complexes is a function of the image force and of the Born charging energy. It increases 5000-fold from Br3- to I3- because of an enlarged ion radius. PMID:8312500
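The kinetic analysis above rests on resolving exponential voltage relaxations from charge pulse data. As a generic illustration of that fitting step (with invented time constants and synthetic noise, not the paper's measurements), a bi-exponential decay can be extracted as follows.

```python
# Minimal sketch of recovering two relaxation time constants from a
# charge-pulse voltage decay. The synthetic "data" and time constants are
# invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0, 5e-3, 500)                       # seconds
rng = np.random.default_rng(0)
v = biexp(t, 6.0, 2e-4, 2.0, 1.5e-3) + rng.normal(0, 0.05, t.size)  # mV

p0 = (5.0, 1e-4, 1.0, 1e-3)                         # rough initial guess
popt, _ = curve_fit(biexp, t, v, p0=p0)
print("tau1 = %.3g s, tau2 = %.3g s" % (popt[1], popt[3]))
```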
McGovern, Eimear; Kelleher, Eoin; Snow, Aisling; Walsh, Kevin; Gadallah, Bassem; Kutty, Shelby; Redmond, John M; McMahon, Colin J
2017-09-01
In recent years, three-dimensional printing has demonstrated reliable reproducibility of several organs including hearts with complex congenital cardiac anomalies. This represents the next step in advanced image processing and can be used to plan surgical repair. In this study, we describe three children with complex univentricular hearts and abnormal systemic or pulmonary venous drainage, in whom three-dimensional printed models based on CT data assisted with preoperative planning. For two children, after group discussion and examination of the models, a decision was made not to proceed with surgery. We extend the current clinical experience with three-dimensional printed modelling and discuss the benefits of such models in the setting of managing complex surgical problems in children with univentricular circulation and abnormal systemic or pulmonary venous drainage.
Complex Plasmas under free fall conditions aboard the International Space Station
NASA Astrophysics Data System (ADS)
Konopka, Uwe; Thomas, Edward, Jr.; Funk, Dylan; Doyle, Brandon; Williams, Jeremiah; Knapek, Christina; Thomas, Hubertus
2017-10-01
Complex Plasmas are dynamically dominated by massive, highly negatively charged, micron-sized particles. They are usually strongly coupled and as a result can show fluid-like behavior or undergo phase transitions to form crystalline structures. The dynamical time scale of these systems is easily accessible in experiments because of the relatively high mass/inertia of the particles. However, the high mass also leads to sedimentation effects and as a result prevents the conduction of large scale, fully three dimensional experiments that are necessary to utilize complex plasmas as model systems in the transition to continuous media. To reduce sedimentation influences it becomes necessary to perform experiments in a free-fall (``microgravity'') environment, such as the ISS based experiment facility ``Plasma-Kristall-4'' (``PK-4''). In our paper we will present our recently started research activities to investigate the basic properties of complex plasmas by utilizing the PK-4 experiment facility aboard the ISS. We further give an overview of developments towards the next generation experiment facility ``Ekoplasma'' (formerly named ``PlasmaLab'') and discuss potential additional small-scale space-based experiment scenarios. This work was supported by the JPL/NASA (JPL-RSA 1571699), the US Dept. of Energy (DE-SC0016330) and the NSF (PHY-1613087).
An analysis of electrical conductivity model in saturated porous media
NASA Astrophysics Data System (ADS)
Cai, J.; Wei, W.; Qin, X.; Hu, X.
2017-12-01
Electrical conductivity of saturated porous media has numerous applications in many fields. In recent years, the number of theoretical methods to model the electrical conductivity of complex porous media has dramatically increased. Nevertheless, the process of modeling the spatial conductivity distribution function continues to present challenges when these models are used in reservoirs, particularly in porous media with strongly heterogeneous pore-space distributions. Many experiments show a more complex distribution of electrical conductivity data than the predictions derived from empirical models. Studies have observed anomalously high electrical conductivity of some low-porosity (tight) formations compared to more-porous reservoir rocks, which indicates that current flow in porous media is complex and difficult to predict. Moreover, the change of electrical conductivity depends not only on the pore volume fraction but also on several geometric properties of the more extensive pore network, including pore interconnection and tortuosity. To improve our understanding of electrical conductivity models in porous media, we study the applicability of several well-known methods/theories to the electrical characteristics of porous rocks as a function of pore volume, tortuosity and interconnection, to estimate electrical conductivity based on the micro-geometrical properties of rocks. We analyze the state of the art of scientific knowledge and practice for modeling porous structural systems, with the purpose of identifying current limitations and defining a blueprint for future modeling advances. We compare conceptual descriptions of electrical current flow processes in pore space considering several distinct modeling approaches. Approaches to obtaining more reasonable electrical conductivity models are discussed. Experiments suggest more complex relationships between electrical conductivity and porosity than empirical models predict, particularly in low-porosity formations. However, the available theoretical models combined with simulations do provide insight into how microscale physics affects macroscale electrical conductivity in porous media.
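One of the well-known empirical relations alluded to above is Archie's law, in which a cementation exponent absorbs the tortuosity and connectivity effects of the pore network. A minimal sketch, with illustrative values:

```python
# Archie's law for a fully saturated rock: sigma_rock = sigma_w * phi**m,
# where the cementation exponent m lumps together tortuosity and pore
# connectivity. The brine conductivity and exponent below are assumptions.
def archie_conductivity(sigma_w, phi, m=2.0):
    """Bulk conductivity (S/m) of a fully saturated rock."""
    return sigma_w * phi**m

for phi in (0.05, 0.10, 0.25):
    print(f"phi={phi:.2f}: sigma={archie_conductivity(5.0, phi):.4f} S/m")
```

In this framing, the anomalously high conductivity of some tight formations corresponds to an effectively lower cementation exponent than the porosity alone would suggest.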
Designing To Learn about Complex Systems.
ERIC Educational Resources Information Center
Hmelo, Cindy E.; Holton, Douglas L.; Kolodner, Janet L.
2000-01-01
Indicates the importance of complex structural, behavioral, and functional relations to understanding complex systems. Reports on a design experiment in which 6th grade children learned about the human respiratory system by designing artificial lungs and building partial working models. Makes suggestions for successful learning from design activities. (Contains 44…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Garrison N.; Atamturktur, Sez; Brown, D. Andrew
Rapid advancements in parallel computing over the last two decades have enabled simulations of complex, coupled systems through partitioning. In partitioned analysis, independently developed constituent models communicate, representing dependencies between multiple physical phenomena that occur in the full system. Figure 1 schematically demonstrates a coupled system with two constituent models, each resolving different physical behavior. In this figure, the constituent model, denoted as the “consumer,” relies upon some input parameter that is being provided by the constituent model acting as a “feeder”. The role of the feeder model is to map operating conditions (i.e. those that are stimulating the process) to consumer inputs, thus providing functional inputs to the consumer model. Problems arise if the feeder model cannot be built, a challenge that is prevalent for highly complex systems in extreme operational conditions that push the limits of our understanding of underlying physical behavior. Often, these are also the situations where separate-effect experiments isolating the physical phenomena are not available; meaning that experimentally determining the unknown constituent behavior is not possible (Bauer and Holland, 1995; Unal et al., 2013), and that integral-effect experiments that reflect the behavior of the complete system tend to be the only available observations. In this paper, the authors advocate for the usefulness of integral-effect experiments in furthering a model developer’s knowledge of the physics principles governing the system behavior of interest.
Stevens, Garrison N.; Atamturktur, Sez; Brown, D. Andrew; ...
2018-04-16
Rapid advancements in parallel computing over the last two decades have enabled simulations of complex, coupled systems through partitioning. In partitioned analysis, independently developed constituent models communicate, representing dependencies between multiple physical phenomena that occur in the full system. Figure 1 schematically demonstrates a coupled system with two constituent models, each resolving different physical behavior. In this figure, the constituent model, denoted as the “consumer,” relies upon some input parameter that is being provided by the constituent model acting as a “feeder”. The role of the feeder model is to map operating conditions (i.e. those that are stimulating the process) to consumer inputs, thus providing functional inputs to the consumer model. Problems arise if the feeder model cannot be built, a challenge that is prevalent for highly complex systems in extreme operational conditions that push the limits of our understanding of underlying physical behavior. Often, these are also the situations where separate-effect experiments isolating the physical phenomena are not available; meaning that experimentally determining the unknown constituent behavior is not possible (Bauer and Holland, 1995; Unal et al., 2013), and that integral-effect experiments that reflect the behavior of the complete system tend to be the only available observations. In this paper, the authors advocate for the usefulness of integral-effect experiments in furthering a model developer’s knowledge of the physics principles governing the system behavior of interest.
Maximally Expressive Task Modeling
NASA Technical Reports Server (NTRS)
Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise, both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.
Bifurcation analysis and phase diagram of a spin-string model with buckled states.
Ruiz-Garcia, M; Bonilla, L L; Prados, A
2017-12-01
We analyze a one-dimensional spin-string model, in which string oscillators are linearly coupled to their two nearest neighbors and to Ising spins representing internal degrees of freedom. String-spin coupling induces a long-range ferromagnetic interaction among spins that competes with a spin-spin antiferromagnetic coupling. As a consequence, the complex phase diagram of the system exhibits different flat rippled and buckled states, with first or second order transition lines between states. This complexity translates to the two-dimensional version of the model, whose numerical solution has been recently used to explain qualitatively the rippled to buckled transition observed in scanning tunneling microscopy experiments with suspended graphene sheets. Here we describe in detail the phase diagram of the simpler one-dimensional model and phase stability using bifurcation theory. This gives additional insight into the physical mechanisms underlying the different phases and the behavior observed in experiments.
Bifurcation analysis and phase diagram of a spin-string model with buckled states
NASA Astrophysics Data System (ADS)
Ruiz-Garcia, M.; Bonilla, L. L.; Prados, A.
2017-12-01
We analyze a one-dimensional spin-string model, in which string oscillators are linearly coupled to their two nearest neighbors and to Ising spins representing internal degrees of freedom. String-spin coupling induces a long-range ferromagnetic interaction among spins that competes with a spin-spin antiferromagnetic coupling. As a consequence, the complex phase diagram of the system exhibits different flat rippled and buckled states, with first or second order transition lines between states. This complexity translates to the two-dimensional version of the model, whose numerical solution has been recently used to explain qualitatively the rippled to buckled transition observed in scanning tunneling microscopy experiments with suspended graphene sheets. Here we describe in detail the phase diagram of the simpler one-dimensional model and phase stability using bifurcation theory. This gives additional insight into the physical mechanisms underlying the different phases and the behavior observed in experiments.
Laghari, Samreen; Niazi, Muaz A
2016-01-01
Computer Networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be effective for modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach.
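The flavor of such an exploratory agent-based model can be conveyed with a toy simulation: machine agents switch between busy, idle, and powered-off states, and a usage policy powers down idle hosts. The agent rules, power figures, and policy below are hypothetical, not those of the study.

```python
# Toy exploratory agent-based sketch in the spirit of EABM: machine agents
# change state each hour; a policy agent powers down idle machines; the
# fleet's energy use (a proxy for carbon footprint) is accumulated.
# All rules and power figures are invented assumptions.
import random

random.seed(42)
IDLE_W, BUSY_W, OFF_W = 60.0, 180.0, 2.0     # assumed power draw (watts)

class Machine:
    def __init__(self):
        self.state = "idle"
    def step(self, policy_on):
        r = random.random()
        if r < 0.3:
            self.state = "busy"
        elif policy_on and r < 0.8:
            self.state = "off"               # policy powers down idle hosts
        else:
            self.state = "idle"
        return {"idle": IDLE_W, "busy": BUSY_W, "off": OFF_W}[self.state]

def simulate(policy_on, hours=8, n=500):
    fleet = [Machine() for _ in range(n)]
    wh = sum(m.step(policy_on) for _ in range(hours) for m in fleet)
    return wh / 1000.0                       # kWh over the working day

print("no policy  :", round(simulate(False), 1), "kWh")
print("with policy:", round(simulate(True), 1), "kWh")
```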
A dynamic auditory-cognitive system supports speech-in-noise perception in older adults.
Anderson, Samira; White-Schwoch, Travis; Parbery-Clark, Alexandra; Kraus, Nina
2013-06-01
Understanding speech in noise is one of the most complex activities encountered in everyday life, relying on peripheral hearing, central auditory processing, and cognition. These abilities decline with age, and so older adults are often frustrated by a reduced ability to communicate effectively in noisy environments. Many studies have examined these factors independently; in the last decade, however, the idea of an auditory-cognitive system has emerged, recognizing the need to consider the processing of complex sounds in the context of dynamic neural circuits. Here, we used structural equation modeling to evaluate the interacting contributions of peripheral hearing, central processing, cognitive ability, and life experiences to understanding speech in noise. We recruited 120 older adults (ages 55-79) and evaluated their peripheral hearing status, cognitive skills, and central processing. We also collected demographic measures of life experiences, such as physical activity, intellectual engagement, and musical training. In our model, central processing and cognitive function predicted a significant proportion of variance in the ability to understand speech in noise. To a lesser extent, life experience predicted hearing-in-noise ability through modulation of brainstem function. Peripheral hearing levels did not significantly contribute to the model. Previous musical experience modulated the relative contributions of cognitive ability and lifestyle factors to hearing in noise. Our models demonstrate the complex interactions required to hear in noise and the importance of targeting cognitive function, lifestyle, and central auditory processing in the management of individuals who are having difficulty hearing in noise. Copyright © 2013 Elsevier B.V. All rights reserved.
Reassessing Geophysical Models of the Bushveld Complex in 3D
NASA Astrophysics Data System (ADS)
Cole, J.; Webb, S. J.; Finn, C.
2012-12-01
Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959); 2) separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987); 3) a single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al., 1998). Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri et al., 2001; Webb et al., 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still on-going debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions. This is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account effects of variations in geometry and geophysical properties of lithologies in a full three dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact on the gravity fields calculated for the existing conceptual models, when modelling in 3D. The three published geophysical models were remodelled using full 3D potential-field modelling software, including the crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly, a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less dense, thicker crust underneath the Bushveld Complex necessitates the presence of dense material in the central area between the eastern and western lobes. The simplest way to achieve this is to model the mafic component of the Bushveld Complex as a single intrusion. This is similar to what the first students of the Bushveld Complex suggested. Conceptual models are by definition simplified versions of the real situation, and the geometry of the Bushveld Complex is expected to be much more intricate. References: Cawthorn, R.G., Cooper, G.R.J., Webb, S.J. (1998). Connectivity between the western and eastern limbs of the Bushveld Complex. S Afr J Geol, 101, 291-298. Cousins, C.A. (1959). The structure of the mafic portion of the Bushveld Igneous Complex. Trans Geol Soc S Afr, 62, 179-189. Du Plessis, A., Kleywegt, R.J. (1987). A dipping sheet model for the mafic lobes of the Bushveld Complex. S Afr J Geol, 90, 1-6. Nguuri, T.K., Gore, J., James, D.E., Webb, S.J., Wright, C., Zengeni, T.G., Gwavava, O., Snoke, J.A. and Kaapvaal Seismic Group. (2001). Crustal structure beneath southern Africa and its implications for the formation and evolution of the Kaapvaal and Zimbabwe cratons. Geoph Res Lett, 28, 2501-2504. Webb, S.J., Cawthorn, R.G., Nguuri, T., James, D. (2004). Gravity modelling of Bushveld Complex connectivity supported by Southern African Seismic Experiment results. S Afr J Geol, 107, 207-218.
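The kind of first-order plausibility check behind such conceptual models can be illustrated with the simplest possible forward calculation, the gravity anomaly of a buried sphere. This is a deliberately crude stand-in for the full 3D potential-field software; the radius, depth, and density contrast below are illustrative, not Bushveld values.

```python
# Gravity anomaly of a buried sphere of density contrast drho at depth z:
# g(x) = G * M * z / (x^2 + z^2)^(3/2), with M the sphere's excess mass.
# A toy forward model, not the 3D software used in the study.
import numpy as np

G = 6.674e-11                        # m^3 kg^-1 s^-2

def sphere_anomaly(x, radius, depth, drho):
    mass = 4.0 / 3.0 * np.pi * radius**3 * drho   # excess mass (kg)
    return G * mass * depth / (x**2 + depth**2) ** 1.5  # vertical component

x = np.linspace(-50e3, 50e3, 5)      # profile positions (m)
g = sphere_anomaly(x, radius=5e3, depth=10e3, drho=300.0)
print(np.round(g * 1e5, 3))          # converted from m/s^2 to mGal
```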
Command-line cellular electrophysiology for conventional and real-time closed-loop experiments.
Linaro, Daniele; Couto, João; Giugliano, Michele
2014-06-15
Current software tools for electrophysiological experiments are limited in flexibility and rarely offer adequate support for advanced techniques such as dynamic clamp and hybrid experiments, which are therefore limited to laboratories with significant expertise in neuroinformatics. We have developed lcg, a software suite based on a command-line interface (CLI) that allows performing both standard and advanced electrophysiological experiments. Stimulation protocols for classical voltage and current clamp experiments are defined by a concise and flexible meta description that allows representing complex waveforms as a piece-wise parametric decomposition of elementary sub-waveforms, abstracting the stimulation hardware. To perform complex experiments, lcg provides a set of elementary building blocks that can be interconnected to yield a large variety of experimental paradigms. We present various cellular electrophysiological experiments in which lcg has been employed, ranging from the automated application of current clamp protocols for characterizing basic electrophysiological properties of neurons, to dynamic clamp, response clamp, and hybrid experiments. We finally show how the scripting capabilities behind a CLI are suited for integrating experimental trials into complex workflows, where the actual experiment, online data analysis and computational modeling seamlessly integrate. We compare lcg with two open source toolboxes, RTXI and RELACS. We believe that lcg will greatly contribute to the standardization and reproducibility of both simple and complex experiments. Additionally, in the long run the increased efficiency due to a CLI will prove a great benefit for the experimental community. Copyright © 2014 Elsevier B.V. All rights reserved.
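The piece-wise parametric stimulus description mentioned above can be illustrated generically: a protocol is a list of elementary sub-waveforms concatenated into one command waveform. The segment vocabulary below is our own illustration, not lcg's actual stimulus-file syntax.

```python
# Sketch of a piece-wise parametric stimulus: each segment is a duration
# plus a generating rule, and segments are concatenated into one waveform.
# Segment kinds and the sampling rate are assumptions for illustration.
import numpy as np

FS = 20000.0  # sampling rate (Hz), assumed

def segment(kind, dur, **p):
    t = np.arange(int(dur * FS)) / FS
    if kind == "dc":
        return np.full(t.size, p["amp"])
    if kind == "ramp":
        return p["a0"] + (p["a1"] - p["a0"]) * t / dur
    if kind == "sine":
        return p["amp"] * np.sin(2 * np.pi * p["freq"] * t)
    raise ValueError(kind)

protocol = [("dc", 0.5, {"amp": 0.0}),                  # baseline
            ("ramp", 1.0, {"a0": 0.0, "a1": 150.0}),    # current ramp (pA)
            ("sine", 2.0, {"amp": 50.0, "freq": 5.0}),  # sinusoidal drive
            ("dc", 0.5, {"amp": 0.0})]                  # recovery

wave = np.concatenate([segment(k, d, **p) for k, d, p in protocol])
print(wave.size / FS, "s of stimulus,", wave.size, "samples")
```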
Notes about COOL: Analysis and Highlights of Complex View in Education
ERIC Educational Resources Information Center
de Oliveira, C. A.
2012-01-01
Purpose: The purpose of this paper is to present principles from the complex approach in education and describe some practical pedagogic experiences enhancing how "real world" perspectives have influenced and contributed to curriculum development. Design/methodology/approach: Necessity of integration in terms of knowledge modeling is an…
Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien
2012-01-01
Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder in the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
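Three of the mixture rules compared above have compact closed or quadratic forms. The sketch below implements the Lichtenecker, Maxwell Garnett, and symmetric Bruggeman rules for a metal volume fraction f in a dielectric matrix; the example permittivities are invented, not the measured values from the study.

```python
# Hedged sketch of three classical mixture rules for effective complex
# permittivity. Example permittivities and the volume fraction are
# illustrative assumptions.
import numpy as np

def lichtenecker(eps_m, eps_i, f):
    # Logarithmic mixing rule.
    return np.exp((1 - f) * np.log(eps_m) + f * np.log(eps_i))

def maxwell_garnett(eps_m, eps_i, f):
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

def bruggeman(eps_m, eps_i, f):
    # Symmetric 3D Bruggeman condition rearranged into a quadratic in eps_eff;
    # the physical root has a non-negative loss term.
    b = (3 * f - 1) * eps_i + (2 - 3 * f) * eps_m
    disc = np.sqrt(b * b + 8 * eps_m * eps_i + 0j)
    return next(r for r in ((b + disc) / 4, (b - disc) / 4)
                if r.imag >= -1e-12 and r.real > 0)

eps_matrix, eps_metal, f = 2.6 + 0.02j, 1.0 + 8.0e4j, 0.15
for rule in (lichtenecker, maxwell_garnett, bruggeman):
    print(rule.__name__, np.round(rule(eps_matrix, eps_metal, f), 3))
```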
ERIC Educational Resources Information Center
Bell, Adam Patrick
2017-01-01
What does it mean to experience disability in music? Based on interviews with Patrick Anderson--arguably the greatest wheelchair basketball player of all time--this article presents insights into the complexities of the experience of disability in sports and music. Contrasted with music education's tendency to adhere to a medicalized model of…
ERIC Educational Resources Information Center
de Morais, Camilo de L. M.; Silva, Se´rgio R. B.; Vieira, Davi S.; Lima, Ka´ssio M. G.
2016-01-01
The binding constant and stoichiometry ratio for the formation of iron(II)-(1,10-phenanthroline) or iron(II)-o-phenanthroline complexes has been determined by a combination of a low-cost analytical method using a smartphone and a molecular modeling method as a laboratory experiment designed for analytical and physical chemistry courses. Intensity…
NASA Astrophysics Data System (ADS)
Tinio, Pablo P. L.
2017-07-01
The Vienna Integrated Model of Art Perception (VIMAP; [5]) is the most comprehensive model of the art experience today. The model incorporates bottom-up and top-down cognitive processes and accounts for different outcomes of the art experience, such as aesthetic evaluations, emotions, and physiological and neurological responses to art. In their presentation of the model, Pelowski et al. also present hypotheses that are amenable to empirical testing. These features make the VIMAP an ambitious model that attempts to explain how meaningful, complex, and profound aspects of the art experience come about, which is a significant extension of previous models of the art experience (e.g., [1-3,10]), and which gives the VIMAP good explanatory power.
Li, Tsung-Lung; Lu, Wen-Cai
2015-10-05
In this work, Koopmans' theorem for Kohn-Sham density functional theory (KS-DFT) is applied to the photoemission spectra (PES) modeling over the entire valence-band. To examine the validity of this application, a PES modeling scheme is developed to facilitate a full valence-band comparison of theoretical PES spectra with experiments. The PES model incorporates the variations of electron ionization cross-sections over atomic orbitals and a linear dispersion of spectral broadening widths. KS-DFT simulations of pristine rubrene (5,6,11,12-tetraphenyltetracene) and potassium-rubrene complex are performed, and the simulation results are used as the input to the PES models. Two conclusions are reached. First, decompositions of the theoretical total spectra show that the dissociated electron of the potassium mainly remains on the backbone and has little effect on the electronic structures of phenyl side groups. This and other electronic-structure results deduced from the spectral decompositions have been qualitatively obtained with the anionic approximation to potassium-rubrene complexes. The qualitative validity of the anionic approximation is thus verified. Second, comparison of the theoretical PES with the experiments shows that the full-scale simulations combined with the PES modeling methods greatly enhance the agreement on spectral shapes over the anionic approximation. This agreement of the theoretical PES spectra with the experiments over the full valence-band can be regarded, to some extent, as a collective validation of the application of Koopmans' theorem for KS-DFT to valence-band PES, at least, for this hydrocarbon and its alkali-adsorbed complex. Copyright © 2015 Elsevier B.V. All rights reserved.
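The spectral modelling steps named above (cross-section weighting and a linear dispersion of broadening widths) reduce to a weighted Gaussian convolution of the occupied Kohn-Sham levels. The sketch below shows that step with placeholder eigenvalues, weights, and dispersion coefficients, not the simulated rubrene values.

```python
# Sketch of the PES modelling step: weight each occupied KS level by an
# (assumed) orbital cross-section and broaden it with a Gaussian whose
# width grows linearly with binding energy. All numbers are placeholders.
import numpy as np

levels = np.array([5.2, 6.0, 6.1, 7.4, 8.9, 10.3])   # binding energies (eV)
weights = np.array([1.0, 0.8, 0.8, 1.4, 0.6, 1.1])   # cross-section weights

def spectrum(E, w0=0.3, k=0.05):
    E = np.asarray(E)[:, None]
    sigma = w0 + k * levels[None, :]                  # linear width dispersion
    g = weights * np.exp(-0.5 * ((E - levels) / sigma) ** 2)
    return (g / (sigma * np.sqrt(2 * np.pi))).sum(axis=1)

E = np.linspace(4, 12, 9)
print(np.round(spectrum(E), 3))
```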
Turbulence model development and application at Lockheed Fort Worth Company
NASA Technical Reports Server (NTRS)
Smith, Brian R.
1995-01-01
This viewgraph presentation demonstrates that computationally efficient k-l and k-kl turbulence models have been developed and implemented at Lockheed Fort Worth Company. Many years of experience have been gained applying two equation turbulence models to complex three-dimensional flows for design and analysis.
Yin, J.; Haggerty, R.; Stoliker, D.L.; Kent, D.B.; Istok, J.D.; Greskowiak, J.; Zachara, J.M.
2011-01-01
In the 300 Area of a U(VI)-contaminated aquifer at Hanford, Washington, USA, inorganic carbon and major cations, which have large impacts on U(VI) transport, change on an hourly and seasonal basis near the Columbia River. Batch and column experiments were conducted to investigate the factors controlling U(VI) adsorption/desorption by changing chemical conditions over time. Low alkalinity and low Ca concentrations (Columbia River water) enhanced adsorption and reduced aqueous concentrations. Conversely, high alkalinity and high Ca concentrations (Hanford groundwater) reduced adsorption and increased aqueous concentrations of U(VI). An equilibrium surface complexation model calibrated using laboratory batch experiments accounted for the decrease in U(VI) adsorption observed with increasing (bi)carbonate concentrations and other aqueous chemical conditions. In the column experiment, alternating pulses of river and groundwater caused swings in aqueous U(VI) concentration. A multispecies multirate surface complexation reactive transport model simulated most of the major U(VI) changes in two column experiments. The modeling results also indicated that U(VI) transport in the studied sediment could be simulated by using a single kinetic rate without loss of accuracy in the simulations. Moreover, the capability of the model to predict U(VI) transport in Hanford groundwater under transient chemical conditions depends significantly on the knowledge of real-time change of local groundwater chemistry. Copyright 2011 by the American Geophysical Union.
Yin, Jun; Haggerty, Roy; Stoliker, Deborah L.; Kent, Douglas B.; Istok, Jonathan D.; Greskowiak, Janek; Zachara, John M.
2011-01-01
In the 300 Area of a U(VI)-contaminated aquifer at Hanford, Washington, USA, inorganic carbon and major cations, which have large impacts on U(VI) transport, change on an hourly and seasonal basis near the Columbia River. Batch and column experiments were conducted to investigate the factors controlling U(VI) adsorption/desorption by changing chemical conditions over time. Low alkalinity and low Ca concentrations (Columbia River water) enhanced adsorption and reduced aqueous concentrations. Conversely, high alkalinity and high Ca concentrations (Hanford groundwater) reduced adsorption and increased aqueous concentrations of U(VI). An equilibrium surface complexation model calibrated using laboratory batch experiments accounted for the decrease in U(VI) adsorption observed with increasing (bi)carbonate concentrations and other aqueous chemical conditions. In the column experiment, alternating pulses of river and groundwater caused swings in aqueous U(VI) concentration. A multispecies multirate surface complexation reactive transport model simulated most of the major U(VI) changes in two column experiments. The modeling results also indicated that U(VI) transport in the studied sediment could be simulated by using a single kinetic rate without loss of accuracy in the simulations. Moreover, the capability of the model to predict U(VI) transport in Hanford groundwater under transient chemical conditions depends significantly on the knowledge of real-time change of local groundwater chemistry.
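The single-kinetic-rate result reported above can be illustrated with a toy first-order mass-transfer model in which the adsorbed concentration relaxes toward a chemistry-dependent equilibrium that switches between river-water and groundwater conditions. All values below are illustrative; the actual model uses a multispecies surface complexation description rather than a fixed Kd.

```python
# Toy sketch of a single-rate kinetic sorption model: adsorbed U(VI), S,
# relaxes toward Kd(t) * C at one first-order rate, with Kd alternating
# between "river" (strong adsorption) and "ground" (weak adsorption)
# 24-hour pulses. Rates and Kd values are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

k = 0.05                              # 1/h, single mass-transfer rate (assumed)
KD = {"river": 25.0, "ground": 4.0}   # L/kg, chemistry-dependent (assumed)

def kd_at(t):
    return KD["river"] if int(t // 24) % 2 == 0 else KD["ground"]

def rhs(t, y, c_aq=0.5):              # fixed aqueous U(VI), umol/L (assumed)
    (s,) = y
    return [k * (kd_at(t) * c_aq - s)]

sol = solve_ivp(rhs, (0.0, 96.0), [2.0], max_step=0.5,
                t_eval=np.linspace(0, 96, 9))
print(np.round(sol.y[0], 2))          # adsorbed U(VI) swings with each pulse
```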
Optimising electron microscopy experiment through electron optics simulation.
Kubo, Y; Gatel, C; Snoeck, E; Houdellier, F
2017-04-01
We developed a new type of electron trajectory simulation inside a complete model of a modern transmission electron microscope (TEM). Our model incorporates the precise and real design of each element constituting a TEM, i.e. the field emission (FE) cathode, the extraction optic and acceleration stages of a 300 kV cold field emission gun, the illumination lenses, the objective lens, and the intermediate and projection lenses. Full trajectories can be computed using magnetically saturated or non-saturated round lenses, magnetic deflectors and even non-cylindrical symmetry elements like the electrostatic biprism. This multi-scale model combines nanometer-size components (the FE tip) with meter-length parts (the illumination and projection systems). We demonstrate that non-trivial TEM experiments requiring specific and complex optical configurations can be simulated and optimized prior to any experiment using such a model. We show that all the currents set in all optical elements of the simulated column can be implemented in the real column (I2TEM in CEMES) and used as the starting alignment for the requested experiment. We argue that the combination of such complete electron trajectory simulations in the whole TEM column with automatic optimization of the microscope parameters for optimal experimental data (images, diffraction, spectra) drastically simplifies the implementation of complex experiments in TEM and will facilitate the development of advanced use of the electron microscope in the near future. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
McMillen, Laura M.; Vavylonis, Dimitrios
2016-12-01
Cell protrusion through polymerization of actin filaments at the leading edge of motile cells may be influenced by spatial gradients of diffuse actin and regulators. Here we study the distribution of two of the most important regulators, capping protein and Arp2/3 complex, which regulate actin polymerization in the lamellipodium through capping and nucleation of free barbed ends. We modeled their kinetics using data from prior single molecule microscopy experiments on XTC cells. These experiments have provided evidence for a broad distribution of diffusion coefficients of both capping protein and Arp2/3 complex. The slowly diffusing proteins appear as extended ‘clouds’ while proteins bound to the actin filament network appear as speckles that undergo retrograde flow. Speckle appearance and disappearance events correspond to assembly and dissociation from the actin filament network and speckle lifetimes correspond to the dissociation rate. The slowly diffusing capping protein could represent severed capped actin filament fragments or membrane-bound capping protein. Prior evidence suggests that slowly diffusing Arp2/3 complex associates with the membrane. We use the measured rates and estimates of diffusion coefficients of capping protein and Arp2/3 complex in a Monte Carlo simulation that includes particles in association with a filament network and particles diffusing in the cytoplasm. We consider two separate pools of diffuse proteins, representing fast and slowly diffusing species. We find a steady state with concentration gradients involving a balance of diffusive flow of fast and slow species with retrograde flow. We show that simulations of FRAP are consistent with prior experiments performed on different cell types. We provide estimates for the ratio of bound to diffuse complexes and calculate conditions where Arp2/3 complex recycling by diffusion may become limiting. We discuss the implications of slowly diffusing populations and suggest experiments to distinguish among mechanisms that influence long-range transport.
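A stripped-down version of such a Monte Carlo scheme is sketched below: particles are either network-bound (moving with retrograde flow) or diffusing in fast or slow cytoplasmic pools, with first-order exchange between states. The rates, diffusivities, and flow speed are rough placeholders, not the fitted single-molecule values.

```python
# Toy 1D Monte Carlo of diffusing and network-bound regulators in a
# lamellipodium of width L. Bound particles move with retrograde flow;
# diffuse particles take Gaussian steps from a fast or slow pool.
# All rates and coefficients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
L, dt, steps = 10.0, 0.01, 5000           # um, s, number of steps
D = {"fast": 5.0, "slow": 0.1}            # um^2/s (assumed)
v_flow, kon, koff = -0.05, 0.5, 0.1       # um/s toward cell body; 1/s

n = 2000
x = rng.uniform(0, L, n)
state = rng.choice(["fast", "slow", "bound"], n)

for _ in range(steps):
    bound = state == "bound"
    for pool in ("fast", "slow"):
        sel = state == pool
        x[sel] += np.sqrt(2 * D[pool] * dt) * rng.normal(size=sel.sum())
    x[bound] += v_flow * dt
    # first-order exchange between the bound and slow-diffusing pools
    unbind = bound & (rng.random(n) < koff * dt)
    bind = (state == "slow") & (rng.random(n) < kon * dt)
    state[unbind], state[bind] = "slow", "bound"
    np.clip(x, 0.0, L, out=x)             # crude reflecting boundaries

edge = x > 0.8 * L                        # region near the leading edge
print("bound fraction near edge:", round((state[edge] == "bound").mean(), 3))
```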
Psychological distance reduces literal imitation: Evidence from an imitation-learning paradigm.
Hansen, Jochim; Alves, Hans; Trope, Yaacov
2016-03-01
The present experiments tested the hypothesis that observers engage in more literal imitation of a model when the model is psychologically near to (vs. distant from) the observer. Participants learned to fold a dog out of towels by watching a model performing this task. Temporal (Experiment 1) and spatial (Experiment 2) distance from the model were manipulated. As predicted, participants copied more of the model's specific movements when the model was near (vs. distant). Experiment 3 replicated this finding with a paper-folding task, suggesting that distance from a model also affects imitation of less complex tasks. Perceived task difficulty, motivation, and the quality of the end product were not affected by distance. We interpret the findings as reflecting different levels of construal of the model's performance: When the model is psychologically distant, social learners focus more on the model's goal and devise their own means for achieving the goal, and as a result show less literal imitation of the model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Capillary Discharge Thruster Experiments and Modeling (Briefing Charts)
2016-06-01
Briefing charts from the In-Space Propulsion Branch, Air Force Research Laboratory, Edwards Air Force Base, CA (R. S. Martin, ERC Inc.), summarizing electric propulsion models and experiments: spacecraft-propulsion-relevant plasmas from Hall thrusters to plumes and fluxes on components, complex reaction physics, propulsion plumes, and the FRC chamber environment. (Distribution A: approved for public release; distribution unlimited.)
PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A
2016-03-10
Over the last decade sensitivity analysis techniques have been shown to be very useful to analyse complex and high dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity analysis of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and signalling systems. It allows for simulation and analysis of models under a variety of environmental conditions and for experimental optimisation of complex combined experiments. With its unique set of tools it makes a valuable addition to the current library of sensitivity analysis toolboxes. We believe that this software will be of great use to the wider biological, systems biology and modelling communities.
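The simplest instance of the kind of question PeTTSy automates is the sensitivity of an oscillator's period to one parameter. The hand-rolled sketch below estimates that derivative by finite differences on a van der Pol oscillator; it only illustrates the idea, since PeTTSy's perturbation-theory machinery is far more general.

```python
# Finite-difference sensitivity of an oscillator's period to a parameter.
# The period is estimated from upward zero crossings after a transient.
# This is a generic illustration, not PeTTSy's algorithm.
import numpy as np
from scipy.integrate import solve_ivp

def period(mu):
    f = lambda t, y: [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]
    t = np.linspace(100, 200, 20001)        # skip the transient, then sample
    y = solve_ivp(f, (0, 200), [2.0, 0.0], t_eval=t, rtol=1e-8).y[0]
    up = np.where((y[:-1] < 0) & (y[1:] >= 0))[0]   # upward zero crossings
    return np.mean(np.diff(t[up]))

mu, h = 1.0, 1e-3
dT_dmu = (period(mu + h) - period(mu - h)) / (2 * h)
print(f"T({mu}) = {period(mu):.4f}, dT/dmu ~ {dT_dmu:.3f}")
```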
Unger, Bertram J; Kraut, Jay; Rhodes, Charlotte; Hochman, Jordan
2014-01-01
Physical models of complex bony structures can be used for surgical skills training. Current models focus on surface rendering but suffer from a lack of internal accuracy due to limitations in the manufacturing process. We describe a technique for generating internally accurate rapid-prototyped anatomical models with solid and hollow structures from clinical and microCT data using a 3D printer. In a face validation experiment, otolaryngology residents drilled a cadaveric bone and its corresponding printed model. The printed bone models were deemed highly realistic representations across all measured parameters and the educational value of the models was strongly appreciated.
Wang, Zimeng; Lee, Sung-Woo; Catalano, Jeffrey G; Lezama-Pacheco, Juan S; Bargar, John R; Tebo, Bradley M; Giammar, Daniel E
2013-01-15
The mobility of hexavalent uranium in soil and groundwater is strongly governed by adsorption to mineral surfaces. As strong naturally occurring adsorbents, manganese oxides may significantly influence the fate and transport of uranium. Models for U(VI) adsorption over a broad range of chemical conditions can improve predictive capabilities for uranium transport in the subsurface. This study integrated batch experiments of U(VI) adsorption to synthetic and biogenic MnO2, surface complexation modeling, ζ-potential analysis, and molecular-scale characterization of adsorbed U(VI) with extended X-ray absorption fine structure (EXAFS) spectroscopy. The surface complexation model included inner-sphere monodentate and bidentate surface complexes and a ternary uranyl-carbonato surface complex, which was consistent with the EXAFS analysis. The model could successfully simulate adsorption results over a broad range of pH and dissolved inorganic carbon concentrations. U(VI) adsorption to synthetic δ-MnO2 appears to be stronger than to biogenic MnO2, and the differences in adsorption affinity and capacity are not associated with any substantial difference in U(VI) coordination.
Evidence of complex contagion of information in social media: An experiment using Twitter bots.
Mønsted, Bjarke; Sapieżyński, Piotr; Ferrara, Emilio; Lehmann, Sune
2017-01-01
It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.
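The two competing hypotheses can be written as adoption-probability curves in a few lines. Under simple contagion each of k exposures is an independent Bernoulli trial; under complex contagion the per-user probability depends on k, idealized here as a hard threshold. The parameter values are illustrative, not the fitted Bayesian estimates from the experiment.

```python
# Adoption probability after k exposures under the two hypotheses tested
# above. Probabilities and the threshold are illustrative assumptions.
import numpy as np

def p_simple(k, p=0.05):
    # Each exposure is an independent chance to adopt.
    return 1.0 - (1.0 - p) ** np.asarray(k)

def p_complex(k, p_lo=0.01, p_hi=0.2, threshold=3):
    # Adoption probability jumps once exposures cross a threshold.
    k = np.asarray(k)
    return np.where(k < threshold, p_lo, p_hi)

k = np.arange(1, 8)
print("simple :", np.round(p_simple(k), 3))
print("complex:", np.round(p_complex(k), 3))
```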
The Implications of Literacy Teaching Models
ERIC Educational Resources Information Center
Gunawardena, Maya
2017-01-01
First year students often experience a culture shock as certain literacy practices at the university level are different from their experiences in high schools. Some major challenges that students encounter include students' ability to maintain academic integrity practices in their studies, to comprehend complex academic texts to outline key…
Online Learner's "Flow" Experience: An Empirical Study
ERIC Educational Resources Information Center
Shin, Namin
2006-01-01
This study is concerned with online learners' "flow" experiences. On the basis of Csikszentmihalyi's theory of flow, flow was conceptualised as a complex, multidimensional, reflective construct comprising "enjoyment", "telepresence", "focused attention", "engagement" and "time distortion" on the part of learners. A flow model was put forward with…
Recording information on protein complexes in an information management system
Savitsky, Marc; Diprose, Jonathan M.; Morris, Chris; Griffiths, Susanne L.; Daniel, Edward; Lin, Bill; Daenke, Susan; Bishop, Benjamin; Siebold, Christian; Wilson, Keith S.; Blake, Richard; Stuart, David I.; Esnouf, Robert M.
2011-01-01
The Protein Information Management System (PiMS) is a laboratory information management system (LIMS) designed for use with the production of proteins in a research environment. The software is distributed under the CCP4 licence, and so is available free of charge to academic laboratories. Like most LIMS, the underlying PiMS data model originally had no support for protein–protein complexes. To support the SPINE2-Complexes project the developers have extended PiMS to meet these requirements. The modifications to PiMS, described here, include data model changes, additional protocols, some user interface changes and functionality to detect when an experiment may have formed a complex. Example data are shown for the production of a crystal of a protein complex. Integration with SPINE2-Complexes Target Tracker application is also described. PMID:21605682
Recording information on protein complexes in an information management system.
Savitsky, Marc; Diprose, Jonathan M; Morris, Chris; Griffiths, Susanne L; Daniel, Edward; Lin, Bill; Daenke, Susan; Bishop, Benjamin; Siebold, Christian; Wilson, Keith S; Blake, Richard; Stuart, David I; Esnouf, Robert M
2011-08-01
The Protein Information Management System (PiMS) is a laboratory information management system (LIMS) designed for use with the production of proteins in a research environment. The software is distributed under the CCP4 licence, and so is available free of charge to academic laboratories. Like most LIMS, the underlying PiMS data model originally had no support for protein-protein complexes. To support the SPINE2-Complexes project the developers have extended PiMS to meet these requirements. The modifications to PiMS, described here, include data model changes, additional protocols, some user interface changes and functionality to detect when an experiment may have formed a complex. Example data are shown for the production of a crystal of a protein complex. Integration with SPINE2-Complexes Target Tracker application is also described. Copyright © 2011 Elsevier Inc. All rights reserved.
Panoramic imaging and virtual reality — filling the gaps between the lines
NASA Astrophysics Data System (ADS)
Chapman, David; Deacon, Andrew
Close range photogrammetry projects rely upon a clear and unambiguous specification of end-user requirements to inform decisions relating to the format, coverage, accuracy and complexity of the final deliverable. Invariably such deliverables will be a partial and incomplete abstraction of the real world where the benefits of higher accuracy and increased complexity must be traded against the cost of the project. As photogrammetric technologies move into the digital era, computerisation offers opportunities for the photogrammetrist to revisit established mapping traditions in order to explore new markets. One such market is that for three-dimensional Virtual Reality (VR) models for clients who have previously had little exposure to the capabilities, and limitations, of photogrammetry and may have radically different views on the cost/benefit trade-offs in producing geometric models. This paper will present some examples of the authors' recent experience of such markets, drawn from a number of research and commercial projects directed towards the modelling of complex man-made objects. This experience seems to indicate that suitably configured digital image archives may form an important deliverable for a wide range of photogrammetric projects and supplement, or even replace, more traditional CAD models.
NASA Astrophysics Data System (ADS)
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain
2015-08-01
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. We focus on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
ERIC Educational Resources Information Center
Mendes, De´bora C.; Ramamurthy, Vaidhyanathan; Da Silva, Jose´ P.
2015-01-01
In this laboratory experiment, students follow a step-by-step procedure to prepare and study guest-host complexes in the gas phase using electrospray ionization-mass spectrometry (ESI-MS). Model systems are the complexes of hosts cucurbit[7]uril (CB7) and cucurbit[8]uril (CB8) with the guest 4-styrylpyridine (SP). Aqueous solutions of CB7 or CB8…
Tsamandouras, Nikolaos; Rostami-Hodjegan, Amin; Aarons, Leon
2015-01-01
Pharmacokinetic models range from being entirely exploratory and empirical, to semi-mechanistic and ultimately complex physiologically based pharmacokinetic (PBPK) models. This choice is conditional on the modelling purpose as well as the amount and quality of the available data. The main advantage of PBPK models is that they can be used to extrapolate outside the studied population and experimental conditions. The trade-off for this advantage is a complex system of differential equations with a considerable number of model parameters. When these parameters cannot be informed from in vitro or in silico experiments they are usually optimized with respect to observed clinical data. Parameter estimation in complex models is a challenging task associated with many methodological issues which are discussed here with specific recommendations. Concepts such as structural and practical identifiability are described with regards to PBPK modelling and the value of experimental design and sensitivity analyses is sketched out. Parameter estimation approaches are discussed, while we also highlight the importance of not neglecting the covariance structure between model parameters and the uncertainty and population variability that is associated with them. Finally the possibility of using model order reduction techniques and minimal semi-mechanistic models that retain the physiological-mechanistic nature only in the parts of the model which are relevant to the desired modelling purpose is emphasized. Careful attention to all the above issues allows us to integrate successfully information from in vitro or in silico experiments together with information deriving from observed clinical data and develop mechanistically sound models with clinical relevance. PMID:24033787
The complexity of role balance: support for the Model of Juggling Occupations.
Evans, Kiah L; Millsteed, Jeannine; Richmond, Janet E; Falkmer, Marita; Falkmer, Torbjorn; Girdler, Sonya J
2014-09-01
This pilot study aimed to establish the appropriateness of the Model of Juggling Occupations in exploring the complex experience of role balance amongst working women with family responsibilities living in Perth, Australia. In meeting this aim, an evaluation was conducted of a case study design, where data were collected through a questionnaire, time diary, and interview. Overall role balance varied over time and across participants. Positive indicators of role balance occurred frequently in the questionnaires and time diaries, despite the interviews revealing a predominance of negative evaluations of role balance. Between-role balance was achieved through compatible role overlap, buffering, and renewal. An exploration of within-role balance factors demonstrated that occupational participation, values, interests, personal causation, and habits were related to role balance. This pilot study concluded that the Model of Juggling Occupations is an appropriate conceptual framework to explore the complex and dynamic experience of role balance amongst working women with family responsibilities. It was also confirmed that the case study design, including the questionnaire, time diary, and interview methods, is suitable for researching role balance from this perspective.
Spinks, Jean; Mortimer, Duncan
2016-02-03
The provision of additional information is often assumed to improve consumption decisions, allowing consumers to more accurately weigh the costs and benefits of alternatives. However, increasing the complexity of decision problems may prompt changes in information processing. This is particularly relevant for experimental methods such as discrete choice experiments (DCEs) where the researcher can manipulate the complexity of the decision problem. The primary aims of this study are (i) to test whether consumers actually process additional information in an already complex decision problem, and (ii) to consider the implications of any such 'complexity-driven' changes in information processing for design and analysis of DCEs. A discrete choice experiment (DCE) is used to simulate a complex decision problem; here, the choice between complementary and alternative medicine (CAM) and conventional medicine for different health conditions. Eye-tracking technology is used to capture the number of times and the duration that a participant looks at any part of a computer screen during completion of DCE choice sets. From this we can analyse what has become known in the DCE literature as 'attribute non-attendance' (ANA). Using data from 32 participants, we model the likelihood of ANA as a function of choice set complexity and respondent characteristics using fixed and random effects models to account for repeated choice set completion. We also model whether participants are consistent with regard to which characteristics (attributes) they consider across choice sets. We find that complexity is the strongest predictor of ANA when other possible influences, such as time pressure, ordering effects, survey-specific effects and socio-demographic variables (including proxies for prior experience with the decision problem) are considered. We also find that most participants do not apply a consistent information processing strategy across choice sets. Eye-tracking technology shows promise as a way of obtaining additional information from consumer research, improving DCE design, and informing the design of policy measures. With regard to DCE design, results from the present study suggest that eye-tracking data can identify the point at which adding complexity (and realism) to DCE choice scenarios becomes self-defeating due to unacceptable increases in ANA. Eye-tracking data therefore has clear application in the construction of guidelines for DCE design and during piloting of DCE choice scenarios. With regard to the design of policy measures such as labelling requirements for CAM and conventional medicines, the provision of additional information has the potential to make difficult decisions even harder and may not have the desired effect on decision-making.
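A minimal sketch of the kind of regression described above, assuming a long-format dataset with one row per participant and choice set; the variable names and simulated data are illustrative, and the study's random-effects structure for repeated choice set completion is omitted here for brevity:

```python
# Hypothetical sketch: probability of attribute non-attendance (ANA) as a
# function of choice-set complexity, fitted by plain logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 32 * 18                                  # 32 participants x 18 choice sets
complexity = rng.integers(2, 7, n)           # e.g., attributes per choice set
time_pressure = rng.normal(size=n)
p = 1 / (1 + np.exp(-(-2.0 + 0.5 * complexity + 0.2 * time_pressure)))
ana = rng.binomial(1, p)                     # 1 = attribute ignored

X = sm.add_constant(np.column_stack([complexity, time_pressure]))
fit = sm.Logit(ana, X).fit(disp=False)
print(fit.params)  # a positive complexity coefficient -> complexity-driven ANA
```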
Yang, Changbing; Dai, Zhenxue; Romanak, Katherine D; Hovorka, Susan D; Treviño, Ramón H
2014-01-01
This study developed a multicomponent geochemical model to interpret responses of water chemistry to introduction of CO2 into six water-rock batches with sedimentary samples collected from representative potable aquifers in the Gulf Coast area. The model simulated CO2 dissolution in groundwater, aqueous complexation, mineral reactions (dissolution/precipitation), and surface complexation on clay mineral surfaces. An inverse method was used to estimate mineral surface area, the key parameter for describing kinetic mineral reactions. Modeling results suggested that reductions in groundwater pH were more significant in the carbonate-poor aquifers than in the carbonate-rich aquifers, resulting in potential groundwater acidification. Modeled concentrations of major ions showed overall increasing trends, depending on mineralogy of the sediments, especially carbonate content. The geochemical model confirmed that mobilization of trace metals was likely caused by mineral dissolution and surface complexation on clay mineral surfaces. Although dissolved inorganic carbon and pH may be used as indicative parameters in potable aquifers, selection of geochemical parameters for CO2 leakage detection is site-specific and a stepwise procedure may be followed. A combined study of the geochemical models with the laboratory batch experiments improves our understanding of the mechanisms that dominate responses of water chemistry to CO2 leakage and also provides a frame of reference for designing monitoring strategies in potable aquifers.
Smad Signaling Dynamics: Insights from a Parsimonious Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiley, H. S.; Shankaran, Harish
2008-09-09
The molecular mechanisms that transmit information from cell surface receptors to the nucleus are exceedingly complex; thus, much effort has been expended in developing computational models to understand these processes. A recent study on modeling the nuclear-cytoplasmic shuttling of Smad2-Smad4 complexes in response to transforming growth factor β (TGF-β) receptor activation has provided substantial insight into how this signaling network translates the degree of TGF-β receptor activation (input) into the amount of nuclear Smad2-Smad4 complexes (output). The study addressed this question by combining a simple, mechanistic model with targeted experiments, an approach that proved particularly powerful for exploring the fundamental properties of a complex signaling network. The mathematical model revealed that Smad nuclear-cytoplasmic dynamics enables a proportional, but time-delayed coupling between the input and the output. As a result, the output can faithfully track gradual changes in the input, while the rapid input fluctuations that constitute signaling noise are dampened out.
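The proportional, time-delayed coupling described above can be illustrated with a single first-order equation; this is a deliberately reduced sketch, not the published shuttling model, and the rate constants are assumed for illustration:

```python
# d(output)/dt = k_in * input(t) - k_out * output: the output tracks slow
# input changes proportionally (gain k_in/k_out) with time constant 1/k_out,
# while rapid fluctuations (signaling noise) are damped.
import numpy as np

k_in, k_out = 1.0, 0.2                       # illustrative rates (1/min)
dt = 0.01
t = np.arange(0, 600, dt)
slow = 0.5 * (1 + np.tanh((t - 300) / 60))   # gradual receptor activation
noise = 0.3 * np.sin(2 * np.pi * t / 2.0)    # fast input fluctuations
u = slow + noise

y = np.zeros_like(t)
for i in range(1, len(t)):                   # forward-Euler integration
    y[i] = y[i - 1] + dt * (k_in * u[i - 1] - k_out * y[i - 1])
# y follows the slow component with a lag of about 1/k_out = 5 min; the
# 2-min oscillation is strongly attenuated, as described above.
```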
ERIC Educational Resources Information Center
Deed, Craig; Alterator, Scott
2017-01-01
Evaluating informal learning spaces in higher education institutions needs to respond to the complex conceptual orientation underpinning their intention and design. This article outlines a model of participatory analysis that accounts for the conceptual complexity, lived experience and broad intentions of informal learning space. Further, the…
Observing System Simulation Experiments for Fun and Profit
NASA Technical Reports Server (NTRS)
Prive, Nikki C.
2015-01-01
Observing System Simulation Experiments can be powerful tools for evaluating and exploring both the behavior of data assimilation systems and the potential impacts of future observing systems. With great power comes great responsibility: given a pure modeling framework, how can we be sure our results are meaningful? The challenges and pitfalls of OSSE calibration and validation will be addressed, as well as issues of incestuousness, selection of appropriate metrics, and experiment design. The use of idealized observational networks to investigate theoretical ideas in a fully complex modeling framework will also be discussed.
Linskell, Jeremy; Bouamrane, Matt-Mouley
2012-09-01
An assisted living space (ALS) is a technology-enabled environment designed to allow people with complex health or social care needs to remain, and live independently, in their own home for longer. However, many challenges remain in order to deliver usable systems acceptable to a diverse range of stakeholders, including end-users, and their families and carers, as well as health and social care services. ALSs need to support activities of daily-living while allowing end-users to maintain important social connections. They must be dynamic, flexible and adaptable living environments. In this article, we provide an overview of the technological landscape of assisted-living technology (ALT) and recent policies to promote an increased adoption of ALT in Scotland. We discuss our experiences in implementing technology-supported ALSs and emphasise key lessons. Finally, we propose an iterative and pragmatic user-centred implementation model for delivering ALSs in complex-needs scenarios. This empirical model is derived from our past ALS implementations. The proposed model allows project stakeholders to identify requirements, allocate tasks and responsibilities, and identify appropriate technological solutions for the delivery of functional ALS systems. The model is generic and makes no assumptions on needs or technology solutions, nor on the technical knowledge, skills and experience of the stakeholders involved in the ALS design process.
Interaction between S100P and the anti-allergy drug cromolyn
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penumutchu, Srinivasa R.; Chou, Ruey-Hwang; Department of Biotechnology, Asia University, Taichung 413, Taiwan
2014-11-21
Highlights: • The interaction between S100P and cromolyn was investigated by fluorescence spectroscopy. • The interfacial residues on the S100P–cromolyn contact surface were mapped by ¹H–¹⁵N HSQC experiments. • A S100P–cromolyn complex model was generated from NMR restraints using the HADDOCK program. • The stability of the S100P–cromolyn complex was studied using molecular dynamics simulations. Abstract: The S100P protein has been known to mediate cell proliferation by binding the receptor for advanced glycation end products (RAGE) to activate signaling pathways, such as the extracellular regulated kinase (ERK) and nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) pathways. S100P/RAGE signaling is involved in a variety of diseases, such as cancer, metastasis, and diabetes. Cromolyn is an anti-allergy drug that binds S100P to block the interaction between S100P and RAGE. In the present study, we characterized the binding between cromolyn and calcium-bound S100P using various biophysical techniques. The binding affinity of S100P for cromolyn was measured to be in the millimolar range by fluorescence spectroscopy. NMR-HSQC titration experiments and HADDOCK modeling were employed to determine the spatial structure of the proposed heterotetramer model of the S100P–cromolyn complex. Additional MD simulation results revealed important properties of the stability and conformational flexibility of the S100P–cromolyn complex. The proposed model provides an understanding of the molecular-level interactions of the S100P–cromolyn complex.
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods that were developed in the context of the deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model, but several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], that different constituencies have different objectives for the validation process and therefore their acceptance criteria differ also.
2016-01-01
Background: Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose: It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method: We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results: The conducted experiments demonstrated two important results. First, a CABC-based modeling approach such as Agent-based Modeling can be effective for modeling complex problems in the domain of IoT. Second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach. PMID:26812235
How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?
NASA Astrophysics Data System (ADS)
Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.
2013-12-01
From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using the example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorption on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include the fractions of the total surface site density of the two sites and the surface complex formation constants of the three reactions. A total of seven experiments were conducted with different geochemical conditions to estimate these parameters. The experiments with low initial concentration of U(VI) result in a large amount of parametric uncertainty. A modeling analysis shows that this is because the experiments cannot distinguish the relative adsorption affinities of the strong and weak sites for uranium. Therefore, the experiments with high initial concentration of U(VI) are needed, because in these experiments the strong site is nearly saturated and the weak site can be determined. The experiments with high initial concentration of U(VI) are a blessing to uncertainty quantification, and the experiments with low initial concentration help modelers turn a curse into a blessing. The data impacts on uncertainty quantification and reduction are quantified using probability density functions of model parameters obtained from Markov chain Monte Carlo simulation using the DREAM algorithm. This study provides insights into model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.
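A toy sketch of the Bayesian machinery, assuming a stand-in two-parameter forward model; the study used the DREAM algorithm, whereas a plain random-walk Metropolis sampler is shown here, and all names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(theta, c0):
    """Stand-in for the SCM: adsorbed fraction vs. initial concentration."""
    logK_strong, logK_weak = theta
    return c0 / (1 + 10**logK_strong * 0.01) + c0 / (1 + 10**logK_weak * 0.1)

c0_obs = np.array([0.1, 0.1, 1.0, 1.0])       # low vs. high initial U(VI)
data = forward(np.array([2.0, 1.0]), c0_obs) + rng.normal(0, 0.01, 4)

def log_post(theta):
    if np.any(np.abs(theta) > 6):             # flat prior on [-6, 6]^2
        return -np.inf
    resid = data - forward(theta, c0_obs)
    return -0.5 * np.sum((resid / 0.01) ** 2)

theta = np.zeros(2)
lp = log_post(theta)
samples = []
for _ in range(20000):                        # random-walk Metropolis
    prop = theta + rng.normal(0, 0.1, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
# The marginal posterior widths show which experiments constrain which site.
```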
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinerman, Jennifer M.; Dignam, J. David; Mueser, Timothy C.
2012-04-05
The bacteriophage T4 gp59 helicase assembly protein (gp59) is required for loading of gp41 replicative helicase onto DNA protected by gp32 single-stranded DNA-binding protein. The gp59 protein recognizes branched DNA structures found at replication and recombination sites. Binding of gp32 protein (full-length and deletion constructs) to gp59 protein measured by isothermal titration calorimetry demonstrates that the gp32 protein C-terminal A-domain is essential for protein-protein interaction in the absence of DNA. Sedimentation velocity experiments with gp59 protein and gp32ΔB protein (an N-terminal B-domain deletion) show that these proteins are monomers but form a 1:1 complex with a dissociation constant comparable with that determined by isothermal titration calorimetry. Small angle x-ray scattering (SAXS) studies indicate that the gp59 protein is a prolate monomer, consistent with the crystal structure and hydrodynamic properties determined from sedimentation velocity experiments. SAXS experiments also demonstrate that gp32ΔB protein is a prolate monomer with an elongated A-domain protruding from the core. Moreover, fitting structures of gp59 protein and the gp32 core into the SAXS-derived molecular envelope supports a model for the gp59 protein-gp32ΔB protein complex. Our earlier work demonstrated that gp59 protein attracts full-length gp32 protein to pseudo-Y junctions. A model of the gp59 protein-DNA complex, modified to accommodate new SAXS data for the binary complex together with mutational analysis of gp59 protein, is presented in the accompanying article (Dolezal, D., Jones, C. E., Lai, X., Brister, J. R., Mueser, T. C., Nossal, N. G., and Hinton, D. M. (2012) J. Biol. Chem. 287, 18596–18607).
Auditory sensitivity of seals and sea lions in complex listening scenarios.
Cunningham, Kane A; Southall, Brandon L; Reichmuth, Colleen
2014-12-01
Standard audiometric data, such as audiograms and critical ratios, are often used to inform marine mammal noise-exposure criteria. However, these measurements are obtained using simple, artificial stimuli (i.e., pure tones and flat-spectrum noise), while natural sounds typically have more complex structure. In this study, detection thresholds for complex signals were measured in (I) quiet and (II) masked conditions for one California sea lion (Zalophus californianus) and one harbor seal (Phoca vitulina). In Experiment I, detection thresholds in quiet conditions were obtained for complex signals designed to isolate three common features of natural sounds: frequency modulation, amplitude modulation, and harmonic structure. In Experiment II, detection thresholds were obtained for the same complex signals embedded in two types of masking noise: synthetic flat-spectrum noise and recorded shipping noise. To evaluate how accurately standard hearing data predict detection of complex sounds, the results of Experiments I and II were compared to predictions based on subject audiograms and critical ratios combined with a basic hearing model. Both subjects exhibited greater-than-predicted sensitivity to harmonic signals in quiet and masked conditions, as well as to frequency-modulated signals in masked conditions. These differences indicate that the complex features of naturally occurring sounds enhance detectability relative to simple stimuli.
1998-09-30
model (simplest) to a fluid saturated poroelastic model (most complex). Based on the results of the theoretical treatment, a laboratory experiment has... are widely separated for each model. Finally, if a sediment is modeled by Biot theory, which describes wave propagation in a saturated poroelastic... application of Biot theory to sediment acoustics. The predicted resonance behavior under each model is distinct, so an optical extinction measurement may
Modeling of protein binary complexes using structural mass spectrometry data
Kamal, J.K. Amisha; Chance, Mark R.
2008-01-01
In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints—positive and/or negative—in the docking step and are also used to decide the type of energy filter—electrostatics or desolvation—in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure. PMID:18042684
Evaluation of the cognitive effects of travel technique in complex real and virtual environments.
Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F
2010-01-01
We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.
Simple Climate Model Evaluation Using Impulse Response Tests
NASA Astrophysics Data System (ADS)
Schwarber, A.; Hartin, C.; Smith, S. J.
2017-12-01
Simple climate models (SCMs) are central tools used to incorporate climate responses into human-Earth system modeling. SCMs are computationally inexpensive, making them an ideal tool for a variety of analyses, including consideration of uncertainty. Despite their wide use, many SCMs lack rigorous testing of their fundamental responses to perturbations. Here, following recommendations of a recent National Academy of Sciences report, we compare several SCMs (Hector-deoclim, MAGICC 5.3, MAGICC 6.0, and the IPCC AR5 impulse response function) to diagnose model behavior and understand the fundamental system responses within each model. We conduct stylized perturbations (emissions and forcing/concentration) of three different chemical species: CO2, CH4, and BC. We find that all four models respond similarly in terms of overall shape; however, there are important differences in the timing and magnitude of the responses. For example, the response to a BC pulse differs over the first 20 years after the pulse among the models, a finding that is due to differences in model structure. Such perturbation experiments are difficult to conduct in complex models due to internal model noise, making a direct comparison with simple models challenging. We can, however, compare the simplified model response from a 4xCO2 step experiment to the same stylized experiment carried out by CMIP5 models, thereby testing the ability of SCMs to emulate complex model results. This work allows an assessment of how well current understanding of Earth system responses is incorporated into multi-model frameworks by way of simple climate models.
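The pulse experiments can be mimicked in a few lines once a model's impulse response function (IRF) is known; the sketch below assumes an AR5-style two-exponential temperature IRF with illustrative amplitudes and time scales:

```python
import numpy as np

dt = 1.0                                      # years
t = np.arange(0, 200, dt)
# Two-time-scale IRF, K per (W m-2 yr); coefficients are illustrative only.
irf = 0.6 / 8.4 * np.exp(-t / 8.4) + 0.4 / 409.5 * np.exp(-t / 409.5)

forcing = np.zeros_like(t)
forcing[0] = 1.0                              # 1 W m-2 forcing pulse at t = 0

temperature = np.convolve(forcing, irf)[:t.size] * dt
# Overlaying such response curves for several SCMs exposes differences in
# the timing and magnitude of the response even when the shapes agree.
```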
Single- and two-phase flow in microfluidic porous media analogs based on Voronoi tessellation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Mengjie; Xiao, Feng; Johnson-Paben, Rebecca
2012-01-01
The objective of this study was to create a microfluidic model of complex porous media for studying single and multiphase flows. Most experimental porous media models consist of periodic geometries that lend themselves to comparison with well-developed theoretical predictions. However, most real porous media such as geological formations and biological tissues contain a degree of randomness and complexity that is not adequately represented in periodic geometries. To design an experimental tool to study these complex geometries, we created microfluidic models of random homogeneous and heterogeneous networks based on Voronoi tessellations. These networks consisted of approximately 600 grains separated by a highly connected network of channels with an overall porosity of 0.11–0.20. We found that introducing heterogeneities in the form of large cavities within the network changed the permeability in a way that cannot be predicted by the classical porosity-permeability relationship known as the Kozeny equation. The values of permeability found in experiments were in excellent agreement with those calculated from three-dimensional lattice Boltzmann simulations. In two-phase flow experiments of oil displacement with water we found that the surface energy of channel walls determined the pattern of water invasion, while the network topology determined the residual oil saturation. These results suggest that complex network topologies lead to fluid flow behavior that is difficult to predict based solely on porosity. The microfluidic models developed in this study using a novel geometry generation algorithm based on Voronoi tessellation are a new experimental tool for studying fluid and solute transport problems within complex porous media.
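For reference, the classical porosity-permeability baseline the authors test against can be written in Kozeny-Carman form; the grain size and constant below are assumed for illustration:

```python
def kozeny_carman(porosity, grain_diameter_m, c=180.0):
    """Permeability (m^2) from porosity and a characteristic grain diameter."""
    return grain_diameter_m**2 * porosity**3 / (c * (1.0 - porosity)**2)

for phi in (0.11, 0.15, 0.20):                # porosity range of the networks
    print(phi, kozeny_carman(phi, 100e-6))    # 100-micron grains (assumed)
```

Deviations of measured permeability from this curve are exactly the signature of the cavity-induced heterogeneity reported above.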
Communication: Introducing prescribed biases in out-of-equilibrium Markov models
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.
2018-03-01
Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, many times their predictions do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints that are derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth factor-based signaling. We also show how to find the closest detailed balanced Markov model to a given Markov model. Further applications and generalizations are discussed.
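A compact sketch of the update, under the assumption that the constraint is a stationary-state average of an observable r; the prior chain and target value are made up, and trajectory-based constraints follow the same exponential-tilting recipe:

```python
# Minimum-KL (maximum relative path entropy) update of a Markov chain: tilt
# the prior transitions by exp(gamma * r_j) and restore row-stochasticity
# with the Perron eigenpair; tune gamma so the constraint is met.
import numpy as np
from scipy.optimize import brentq

P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])               # prior transition matrix
r = np.array([0.0, 1.0, 2.0])                 # observable value per state

def tilted_chain(gamma):
    A = P * np.exp(gamma * r)[None, :]        # A_ij = P_ij * exp(gamma * r_j)
    evals, evecs = np.linalg.eig(A)
    k = np.argmax(evals.real)                 # Perron eigenpair (lam, u)
    lam, u = evals[k].real, np.abs(evecs[:, k].real)
    return A * u[None, :] / (lam * u[:, None])   # rows again sum to one

def stationary_mean(Q):
    evals, evecs = np.linalg.eig(Q.T)
    k = np.argmin(np.abs(evals - 1.0))
    pi = np.abs(evecs[:, k].real)
    return (pi / pi.sum()) @ r

gamma = brentq(lambda g: stationary_mean(tilted_chain(g)) - 1.2, -5.0, 5.0)
Q = tilted_chain(gamma)   # closest chain (in path KL) with <r> = 1.2
```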
Mock Data Challenge for the MPD/NICA Experiment on the HybriLIT Cluster
NASA Astrophysics Data System (ADS)
Gertsenberger, Konstantin; Rogachevsky, Oleg
2018-02-01
Simulation of data processing before first experimental data are received is an important issue in high-energy physics experiments. This article presents the current Event Data Model and the Mock Data Challenge for the MPD experiment at the NICA accelerator complex, which uses ongoing simulation studies to stress-test the distributed computing infrastructure and the experiment software in the full production environment, from simulated data through to physics analysis.
Accuracy and Calibration of High Explosive Thermodynamic Equations of State
2010-08-01
physics descriptions, but can also mean increased calibration complexity. A generalized extent of aluminum reaction, the Jones-Wilkins-Lee (JWL) based... [Fragmentary figure and table list: JWL and JWLB cylinder test predictions compared to experiments for PAX-30, PAX-29, and LX-14; experiment and modeling comparisons for HMX/Al 85/15.]
Extracting Models in Single Molecule Experiments
NASA Astrophysics Data System (ADS)
Presse, Steve
2013-03-01
Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally, all single molecule data should be self-explanatory. However, data originating from single molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of the single molecule dynamics which emerges from this analysis is often more textured and complex than could otherwise come from fitting the data to a pre-conceived model.
The Design of Large-Scale Complex Engineered Systems: Present Challenges and Future Promise
NASA Technical Reports Server (NTRS)
Bloebaum, Christina L.; McGowan, Anna-Maria Rivas
2012-01-01
Model-Based Systems Engineering techniques are used in the SE community to address the need for managing the development of complex systems. A key feature of the MBSE approach is the use of a model to capture the requirements, architecture, behavior, operating environment and other key aspects of the system. The focus on the model differentiates MBSE from traditional SE techniques that may have a document-centric approach. In an effort to assess the benefit of utilizing MBSE on its flight projects, NASA Langley has implemented a pilot program to apply MBSE techniques during the early phase of the Materials International Space Station Experiment-X (MISSE-X). MISSE-X is a Technology Demonstration Mission being developed by the NASA Office of the Chief Technologist. Designed to be installed on the exterior of the International Space Station (ISS), MISSE-X will host experiments that advance the technology readiness of materials and devices needed for future space exploration. As a follow-on to the highly successful series of previous MISSE experiments on ISS, MISSE-X benefits from a significant interest by the
A Hardware Model Validation Tool for Use in Complex Space Systems
NASA Technical Reports Server (NTRS)
Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.
2010-01-01
One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.
A new decision sciences for complex systems.
Lempert, Robert J
2002-05-14
Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an approach to decision-making under conditions of deep uncertainty that is ideally suited to applying complex systems to policy analysis. The article demonstrates the approach on the policy problem of global climate change, with a particular focus on the role of technology policies in a robust, adaptive strategy for greenhouse gas abatement.
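The ensemble idea is easy to sketch: score candidate policies across many plausible futures and compare them by regret rather than by expected performance under a single best-estimate scenario. Everything below (the toy cost model, the parameter ranges) is assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_scenarios = 1000
sensitivity = rng.uniform(1.5, 6.0, n_scenarios)   # uncertain climate response
tech_cost = rng.uniform(0.5, 2.0, n_scenarios)     # uncertain abatement cost

def total_cost(stringency, s, c):
    """Toy damages-plus-abatement cost; purely illustrative."""
    return (s * (1 - stringency)) ** 2 + c * stringency ** 2

policies = np.linspace(0.0, 1.0, 21)
costs = np.array([total_cost(p, sensitivity, tech_cost) for p in policies])
regret = costs - costs.min(axis=0)                 # regret per scenario
robust_policy = policies[np.argmin(regret.max(axis=1))]  # minimax regret
```

A policy chosen this way is "robust" in the sense used above: it performs acceptably across the whole ensemble rather than optimally in one scenario.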
Simulation-based modeling of building complexes construction management
NASA Astrophysics Data System (ADS)
Shepelev, Aleksandr; Severova, Galina; Potashova, Irina
2018-03-01
The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.
NASA Astrophysics Data System (ADS)
Marsac, R.; Davranche, M.; Gruau, G.; Dia, A.
2009-04-01
In natural organic-rich waters, rare earth element (REE) speciation is mainly controlled by organic colloids such as humic acid (HA). Different series of REE-HA complexation experiments performed at several metal loadings (REE/C) displayed two pattern shapes: (i) at high metal loading, a middle-REE (MREE) downward concavity, and (ii) at low metal loading, a regular increase from La to Lu (e.g. Sonke and Salters, 2006; Pourret et al., 2007). Both REE patterns might be related to REE binding with different surface sites on HA. To understand REE-HA binding, REE-HA complexation experiments at various metal loadings were carried out using ultrafiltration combined with ICP-MS measurements, for the 14 REE simultaneously. The patterns of the apparent coefficients of REE partitioning between HA and the inorganic solution (log Kd) evolved regularly with the metal loading. The REE patterns presented a MREE downward concavity at high loading and a regular increase from La to Lu at low loading. The dataset was modelled with Model VI by adjusting two specific parameters: log KMA, the apparent complexation constant of the HA low affinity sites, and DLK2, the parameter increasing the binding strength of the high affinity sites. Experiments and modelling provided evidence that HA high affinity sites control REE binding to HA at low metal loading. The REE-HA complexes could exist as multidentate complexes with carboxylic or phenolic sites, or potentially with sites involving N, P or S donor atoms. Moreover, these high affinity sites could differ for light and heavy REE, because heavy REE have a higher affinity for these sites, which are present at low density, and could saturate them. These new Model VI parameter sets allowed the prediction of the REE-HA pattern shape evolution over a large range of pH and metal loading. According to the metal loading, the evolution of the calculated REE patterns was similar to the various REE patterns observed in natural acidic organic-rich waters (pH<7 and DOC>10 mg L-1). As a consequence, the metal loading could be the key parameter controlling the REE pattern in organic-rich waters.
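The apparent partition coefficient at the heart of these patterns is simple arithmetic once ultrafiltration separates free from HA-bound REE; the concentrations below are invented for illustration:

```python
import numpy as np

ree_total = 50e-9          # mol/L, total REE in the suspension (assumed)
ree_free = 5e-9            # mol/L, inorganic REE passing the ultrafilter
ha = 5e-3                  # g/L, humic acid concentration (assumed)

ree_bound = ree_total - ree_free
log_kd = np.log10(ree_bound / (ree_free * ha))   # apparent log Kd (L/g)
# Repeating this for the 14 REE at each metal loading yields the log Kd
# patterns (MREE concavity vs. La-to-Lu increase) that Model VI must fit.
```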
NASA Astrophysics Data System (ADS)
Chen, X.; Zachara, J. M.; Vermeul, V. R.; Freshley, M.; Hammond, G. E.
2015-12-01
The behavior of a persistent uranium plume in an extended groundwater-river water (GW-SW) interaction zone at the DOE Hanford site is dominantly controlled by river stage fluctuations in the adjacent Columbia River. The plume behavior is further complicated by substantial heterogeneity in physical and geochemical properties of the host aquifer sediments. Multi-scale field and laboratory experiments and reactive transport modeling were integrated to understand the complex plume behavior influenced by highly variable hydrologic and geochemical conditions in time and space. In this presentation we (1) describe multiple data sets from field-scale uranium adsorption and desorption experiments performed at our experimental well-field, (2) develop a reactive transport model that incorporates hydrologic and geochemical heterogeneities characterized from multi-scale and multi-type datasets and a surface complexation reaction network based on laboratory studies, and (3) compare the modeling and observation results to provide insights on how to refine the conceptual model and reduce prediction uncertainties. The experimental results revealed significant spatial variability in uranium adsorption/desorption behavior, while modeling demonstrated that ambient hydrologic and geochemical conditions and heterogeneities in sediment physical and chemical properties both contributed to complex plume behavior and its persistence. Our analysis provides important insights into the characterization, understanding, modeling, and remediation of groundwater contaminant plumes influenced by surface water and groundwater interactions.
Modeling Structure and Dynamics of Protein Complexes with SAXS Profiles
Schneidman-Duhovny, Dina; Hammel, Michal
2018-01-01
Small-angle X-ray scattering (SAXS) is an increasingly common and useful technique for structural characterization of molecules in solution. A SAXS experiment determines the scattering intensity of a molecule as a function of spatial frequency, termed SAXS profile. SAXS profiles can be utilized in a variety of molecular modeling applications, such as comparing solution and crystal structures, structural characterization of flexible proteins, assembly of multi-protein complexes, and modeling of missing regions in the high-resolution structure. Here, we describe protocols for modeling atomic structures based on SAXS profiles. The first protocol is for comparing solution and crystal structures including modeling of missing regions and determination of the oligomeric state. The second protocol performs multi-state modeling by finding a set of conformations and their weights that fit the SAXS profile starting from a single-input structure. The third protocol is for protein-protein docking based on the SAXS profile of the complex. We describe the underlying software, followed by demonstrating their application on interleukin 33 (IL33) with its primary receptor ST2 and DNA ligase IV-XRCC4 complex. PMID:29605933
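The core fit statistic used when scoring a model against a measured profile can be sketched directly; the closed-form scale factor below minimizes chi-squared, and the synthetic profiles stand in for real data:

```python
import numpy as np

q = np.linspace(0.01, 0.5, 200)                # spatial frequency (1/Angstrom)
i_exp = np.exp(-30 * q**2) + 0.01              # "experimental" intensity
sigma = 0.02 * i_exp                           # experimental errors
i_calc = 1.8 * np.exp(-30 * q**2) + 0.018      # unscaled model profile

c = np.sum(i_exp * i_calc / sigma**2) / np.sum(i_calc**2 / sigma**2)
chi = np.sqrt(np.mean(((i_exp - c * i_calc) / sigma) ** 2))
# Multi-state modeling generalizes this: fit weights w_k >= 0 so that
# sum_k w_k * I_k(q) reproduces the experimental profile.
```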
NASA Astrophysics Data System (ADS)
Scudeler, Carlotta; Pangle, Luke; Pasetto, Damiano; Niu, Guo-Yue; Volkmann, Till; Paniconi, Claudio; Putti, Mario; Troch, Peter
2016-10-01
This paper explores the challenges of model parameterization and process representation when simulating multiple hydrologic responses from a highly controlled unsaturated flow and transport experiment with a physically based model. The experiment, conducted at the Landscape Evolution Observatory (LEO), involved alternate injections of water and deuterium-enriched water into an initially very dry hillslope. The multivariate observations included point measures of water content and tracer concentration in the soil, total storage within the hillslope, and integrated fluxes of water and tracer through the seepage face. The simulations were performed with a three-dimensional finite element model that solves the Richards and advection-dispersion equations. Integrated flow, integrated transport, distributed flow, and distributed transport responses were successively analyzed, with parameterization choices at each step supported by standard model performance metrics. In the first steps of our analysis, where seepage face flow, water storage, and average concentration at the seepage face were the target responses, an adequate match between measured and simulated variables was obtained using a simple parameterization consistent with that from a prior flow-only experiment at LEO. When passing to the distributed responses, it was necessary to introduce complexity to additional soil hydraulic parameters to obtain an adequate match for the point-scale flow response. This also improved the match against point measures of tracer concentration, although model performance here was considerably poorer. This suggests that still greater complexity is needed in the model parameterization, or that there may be gaps in process representation for simulating solute transport phenomena in very dry soils.
Transdimensional Seismic Tomography
NASA Astrophysics Data System (ADS)
Bodin, T.; Sambridge, M.
2009-12-01
In seismic imaging the degree of model complexity is usually determined by manually tuning damping parameters within a fixed parameterization chosen in advance. Here we present an alternative methodology for seismic travel time tomography where the model complexity is controlled automatically by the data. In particular we use a variable parameterization consisting of Voronoi cells with mobile geometry, shape and number, all treated as unknowns in the inversion. The reversible jump algorithm is used to sample the transdimensional model space within a Bayesian framework which avoids global damping procedures and the need to tune regularisation parameters. The method is an ensemble inference approach, as many potential solutions are generated with variable numbers of cells. Information is extracted from the ensemble as a whole by performing Monte Carlo integration to produce the expected Earth model. The ensemble of models can also be used to produce velocity uncertainty estimates, and experiments with synthetic data suggest they represent actual uncertainty surprisingly well. In a transdimensional approach, the level of data uncertainty directly determines the model complexity needed to satisfy the data. Intriguingly, the Bayesian formulation can be extended to the case where data uncertainty is also uncertain. Experiments show that it is possible to recover the data noise estimate while at the same time controlling model complexity in an automated fashion. The method is tested on synthetic data in a 2-D application and compared with a more standard matrix-based inversion scheme. The method has also been applied to real data obtained from cross-correlation of ambient noise where little is known about the size of the errors associated with the travel times. As an example, a tomographic image of Rayleigh wave group velocity for the Australian continent is constructed for 5 s data together with uncertainty estimates.
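The Voronoi parameterization itself is a few lines of code; the sketch below shows its fixed-dimension Metropolis core on invented 1-D data, with the reversible-jump birth/death moves (which make the number of cells an unknown) deliberately omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
x_obs = np.linspace(0.0, 1.0, 50)
v_true = np.where(x_obs < 0.4, 2.0, 3.0)                # "true" velocity model
d_obs = 1.0 / v_true + rng.normal(0, 0.01, x_obs.size)  # noisy slowness data

def predict(nodes_x, nodes_v, x):
    """Voronoi interpolation: each point takes the nearest node's value."""
    idx = np.argmin(np.abs(x[:, None] - nodes_x[None, :]), axis=1)
    return 1.0 / nodes_v[idx]

def log_like(nodes_x, nodes_v):
    resid = d_obs - predict(nodes_x, nodes_v, x_obs)
    return -0.5 * np.sum((resid / 0.01) ** 2)

nodes_x = rng.uniform(0, 1, 5)                 # 5 mobile cells (fixed here)
nodes_v = rng.uniform(1, 4, 5)
ll = log_like(nodes_x, nodes_v)
for _ in range(5000):                          # perturb geometry and values
    prop_x = np.clip(nodes_x + rng.normal(0, 0.05, 5), 0, 1)
    prop_v = np.clip(nodes_v + rng.normal(0, 0.10, 5), 1, 4)
    prop_ll = log_like(prop_x, prop_v)
    if np.log(rng.uniform()) < prop_ll - ll:   # Metropolis acceptance
        nodes_x, nodes_v, ll = prop_x, prop_v, prop_ll
```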
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; ...
2015-05-27
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
A Large Class Engagement (LCE) Model Based on Service-Dominant Logic (SDL) and Flipped Classrooms
ERIC Educational Resources Information Center
Jarvis, Wade; Halvorson, Wade; Sadeque, Saalem; Johnston, Shannon
2014-01-01
Ensuring that university graduates are ready for their professional futures is a complex undertaking that includes, but is not limited to, the development of their professional knowledge and skills, and the provision of empowering learning experiences established through their own contributions. One way to draw these complex processes together for…
ERIC Educational Resources Information Center
Stevens, Catherine; Gallagher, Melinda
2004-01-01
This experiment investigated relational complexity and relational shift in judgments of auditory patterns. Pitch and duration values were used to construct two-note perceptually similar sequences (unary relations) and four-note relationally similar sequences (binary relations). It was hypothesized that 5-, 8- and 11-year-old children would perform…
Job Loss: An Individual Level Review and Model.
ERIC Educational Resources Information Center
DeFrank, Richard S.; Ivancevich, John M.
1986-01-01
Reviews behavioral, medical, and social science literature to illustrate the complexity and multidisciplinary nature of the job loss experience and provides a conceptual model to examine individual responses to job loss. Emphasizes the importance of including organizational-relevant variables in individual level conceptualizations and proposed…
Experimental Simulations of Lunar Magma Ocean Crystallization: The Plot (But Not the Crust) Thickens
NASA Technical Reports Server (NTRS)
Draper, D. S.; Rapp, J. F.; Elardo, S. M.; Shearer, C. K., Jr.; Neal, C. R.
2016-01-01
Numerical models of differentiation of a global-scale lunar magma ocean (LMO) have raised as many questions as they have answered. Recent orbital missions and sample studies have provided new context for a large range of lithologies, from the comparatively magnesian "purest anorthosite" reported by to Si-rich domes and spinel-rich clasts with widespread areal distributions. In addition, the GRAIL mission provided strong constraints on lunar crustal density and average thickness. Can this increasingly complex geology be accounted for via the formation and evolution of the LMO? We have in recent years been conducting extensive sets of petrologic experiments designed to fully simulate LMO crystallization, which had not been attempted previously. Here we review the key results from these experiments, which show that LMO differentiation is more complex than initial models suggested. Several important features expected from LMO crystallization models have yet to be reproduced experimentally; combined modelling and experimental work by our group is ongoing.
Phosphate effects on copper(II) and lead(II) sorption to ferrihydrite
NASA Astrophysics Data System (ADS)
Tiberg, Charlotta; Sjöstedt, Carin; Persson, Ingmar; Gustafsson, Jon Petter
2013-11-01
Transport of lead(II) and copper(II) ions in soil is affected by the soil phosphorus status. Part of the explanation may be that phosphate increases the adsorption of copper(II) and lead(II) to iron (hydr)oxides in soil, but the details of these interactions are poorly known. Knowledge about such mechanisms is important, for example, in risk assessments of contaminated sites and development of remediation methods. We used a combination of batch experiments, extended X-ray absorption fine structure (EXAFS) spectroscopy and surface complexation modeling with the three-plane CD-MUSIC model to study the effect of phosphate on sorption of copper(II) and lead(II) to ferrihydrite. The aim was to identify the surface complexes formed and to derive constants for the surface complexation reactions. In the batch experiments phosphate greatly enhanced the adsorption of copper(II) and lead(II) to ferrihydrite at pH < 6. The largest effects were seen for lead(II).
Surfing on Protein Waves: Proteophoresis as a Mechanism for Bacterial Genome Partitioning
NASA Astrophysics Data System (ADS)
Walter, J.-C.; Dorignac, J.; Lorman, V.; Rech, J.; Bouet, J.-Y.; Nollmann, M.; Palmeri, J.; Parmeggiani, A.; Geniet, F.
2017-07-01
Efficient bacterial chromosome segregation typically requires the coordinated action of a three-component machinery, fueled by adenosine triphosphate, called the partition complex. We present a phenomenological model accounting for the dynamic activity of this system that is also relevant for the physics of catalytic particles in active environments. The model is obtained by coupling simple linear reaction-diffusion equations with a proteophoresis, or "volumetric" chemophoresis, force field that arises from protein-protein interactions and provides a physically viable mechanism for complex translocation. This minimal description captures most known experimental observations: dynamic oscillations of complex components, complex separation, and subsequent symmetrical positioning. The predictions of our model are in phenomenological agreement with and provide substantial insight into recent experiments. From a nonlinear physics view point, this system explores the active separation of matter at micrometric scales with a dynamical instability between static positioning and traveling wave regimes triggered by the dynamical spontaneous breaking of rotational symmetry.
The Role of Air-sea Coupling in the Response of Climate Extremes to Aerosols
NASA Astrophysics Data System (ADS)
Mahajan, S.
2017-12-01
Air-sea interactions dominate the climate of surrounding regions and thus also modulate the climate response to local and remote aerosol forcings. To clearly isolate the role of air-sea coupling in the climate response to aerosols, we conduct experiments with a full complexity atmosphere model that is coupled to a series of ocean models progressively increasing in complexity. The ocean models range from a data ocean model with prescribed SSTs, to a slab ocean model that only allows thermodynamic interactions, to a full dynamic ocean model. In a preliminary study, we have conducted single forcing experiments with black carbon aerosols in an atmosphere GCM coupled to a data ocean model and a slab ocean model. We find that while black carbon aerosols can intensify mean and extreme summer monsoonal precipitation over the Indian sub-continent, air-sea coupling can dramatically modulate this response. Black carbon aerosols in the vicinity of the Arabian Sea result in an increase of sea surface temperatures there in the slab ocean model, which intensifies the low-level Somali Jet. The associated increase in moisture transport into Western India enhances the mean as well as extreme precipitation. In prescribed SST experiments, where SSTs are not allowed to respond to BC aerosols, the response is muted. We will present results from a hierarchy of GCM simulations that investigate the role of air-sea coupling in the climate response to aerosols in more detail.
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It is difficult to build precise mathematical models for complex engineering systems because of the complexity of their structure and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of a dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network and a Hidden Markov Model. First, after the dynamic observation vector in measurement space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of feature extraction from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
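The first stage of that pipeline, the DTW alignment, is compact enough to show in full; the sketch below is a textbook implementation, not the authors' code, and the SOFM and HMM stages are omitted:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Aligning a measured dynamic process vector to a template removes timing
# drift, so the residual (error vector) carries the fault signature.
print(dtw_distance([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 2, 1, 0]))
```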
A toolbox for discrete modelling of cell signalling dynamics.
Paterson, Yasmin Z; Shorthouse, David; Pleijzier, Markus W; Piterman, Nir; Bendtsen, Claus; Hall, Benjamin A; Fisher, Jasmin
2018-06-18
In an age where the volume of data regarding biological systems exceeds our ability to analyse it, many researchers are looking towards systems biology and computational modelling to help unravel the complexities of gene and protein regulatory networks. In particular, the use of discrete modelling allows generation of signalling networks in the absence of full quantitative descriptions of systems, which are necessary for ordinary differential equation (ODE) models. In order to make such techniques more accessible to mainstream researchers, tools such as the BioModelAnalyzer (BMA) have been developed to provide a user-friendly graphical interface for discrete modelling of biological systems. Here we use the BMA to build a library of discrete target functions of known canonical molecular interactions, translated from ordinary differential equations (ODEs). We then show that these BMA target functions can be used to reconstruct complex networks, which can correctly predict many known genetic perturbations. This new library supports the accessibility ethos behind the creation of BMA, providing a toolbox for the construction of complex cell signalling models without the need for extensive experience in computer programming or mathematical modelling, and allows for construction and simulation of complex biological systems with only small amounts of quantitative data.
Evaluation of the Community Experiences for Career Education Program.
ERIC Educational Resources Information Center
Owens, Thomas R.; Fehrenbacher, Harry L.
The Experience-Based Career Education (EBCE) model being developed and tested in four regions of the United States, under the sponsorship of the National Institute of Education, reflects a nationwide interest in discovering new ways to help adolescents handle the psychological, social, and economic complexities of modern life. This paper reports…
Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.
2012-03-01
Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because they are expensive, such tests are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.
Zschocke, Nina
2012-08-01
In 1972, Michael Baxandall characterized the processes responsible for the cultural relativism of art experience as highly complex and unknown in their physiological detail. While art history still shows considerable interest in the brain sciences forty years later, most cross-disciplinary studies today are referring to the neurosciences in an attempt to seek scientific legitimization of variations of a generalized and largely deterministic model of perception, reducing interaction between a work of art and its observers to a set of biological automatisms. I will challenge such an approach and take up art theory's interest in the historico-cultural and situational dimensions of art experience. Looking at two examples of large-scale installation and sculptural post-war American art, I will explore unstable perceptions of depth and changing experiences of space that indicate complex interactions between perceptual and higher cognitive processes. The argument will draw on recent theories describing neuronal processes underlying multistable phenomena, eye movement, visual attention and decision-making. As I will show, a large number of neuroscientific studies provide theoretical models that help us analyse not anthropological constants but the influence of cultural, individual and situational variables on aesthetic experience.
A Practical Philosophy of Complex Climate Modelling
NASA Technical Reports Server (NTRS)
Schmidt, Gavin A.; Sherwood, Steven
2014-01-01
We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
Haase, S J; Fisk, G
2001-01-01
The present experiments extend the scope of the independent observation model based on signal detection theory (Macmillan & Creelman, 1991) to complex (word) stimulus sets. In the first experiment, the model predicts the relationship between uncertain detection and subsequent correct identification, thereby providing an alternative interpretation to a phenomenon often described as unconscious perception. Our second experiment used an exclusion task (Jacoby, Toth, & Yonelinas, 1993), which, according to theories of unconscious perception, should show qualitative differences in performance based on stimulus detection accuracy and provide a relative measure of conscious versus unconscious influences (Merikle, Joordens, & Stoltz, 1995). Exclusion performance was also explained by the model, suggesting that undetected words did not unconsciously influence identification responses.
Complexation behavior of oppositely charged polyelectrolytes: Effect of charge distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Mingtian; Li, Baohui; Zhou, Jihan
Complexation behavior of oppositely charged polyelectrolytes in a solution is investigated using a combination of computer simulations and experiments, focusing on the influence of polyelectrolyte charge distributions along the chains on the structure of the polyelectrolyte complexes. The simulations are performed using Monte Carlo with the replica-exchange algorithm for three model systems where each system is composed of a mixture of two types of oppositely charged model polyelectrolyte chains, (EGEG)₅/(KGKG)₅, (EEGG)₅/(KKGG)₅, and (EEGG)₅/(KGKG)₅, in a solution including explicit solvent molecules. Among the three model systems, only the charge distributions along the chains are not identical. Thermodynamic quantities are calculated as a function of temperature (or ionic strength), and the microscopic structures of complexes are examined. It is found that the three systems have different transition temperatures, and form complexes with different sizes, structures, and densities at a given temperature. Complex microscopic structures with an alternating arrangement of one monolayer of E/K monomers and one monolayer of G monomers, with one bilayer of E and K monomers and one bilayer of G monomers, and with a mixture of monolayer and bilayer of E/K monomers in a box shape and a trilayer of G monomers inside the box are obtained for the three mixture systems, respectively. The experiments are carried out for three systems where each is composed of a mixture of two types of oppositely charged peptide chains. Each peptide chain is composed of lysine (K) and glycine (G) or glutamate (E) and glycine, in solution; the chain length and amino acid sequences, and hence the charge distribution, are precisely controlled, and all of them are identical to those of the corresponding model chains. The complexation behavior and complex structures are characterized through laser light scattering and atomic force microscopy measurements. The order of the apparent weight-averaged molar mass and the order of density of complexes observed from the three experimental systems are qualitatively in agreement with those predicted from the simulations.
SIGMA--A Graphical Approach to Teaching Simulation.
ERIC Educational Resources Information Center
Schruben, Lee W.
1992-01-01
SIGMA (Simulation Graphical Modeling and Analysis) is a computer graphics environment for building, testing, and experimenting with discrete event simulation models on personal computers. It uses symbolic representations (computer animation) to depict the logic of large, complex discrete event systems for easier understanding and has proven itself…
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
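As an illustration of two of the techniques compared, the sketch below fits a least-squares polynomial with a crude residual-based interval and a Gaussian process to the same toy single input-single output data; the kernel, noise level and data-generating mechanism are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 1, 15))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)  # toy data-generating mechanism
xs = np.linspace(0, 1, 5)                               # prediction sites

# (i) least-squares cubic fit; +/- 2 sigma of residuals as a crude interval
coef = np.polyfit(x, y, 3)
s = np.std(y - np.polyval(coef, x))
print("LS:", np.polyval(coef, xs), "+/-", 2 * s)

# (iii) Gaussian process with a squared-exponential kernel
def k(a, b, ell=0.2, var=1.0):
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

K = k(x, x) + 0.1**2 * np.eye(x.size)          # kernel matrix plus noise
Ks, Kss = k(x, xs), k(xs, xs)
mean = Ks.T @ np.linalg.solve(K, y)            # posterior mean
cov = Kss - Ks.T @ np.linalg.solve(K, Ks)      # posterior covariance
print("GP:", mean, "+/-", 2 * np.sqrt(np.diag(cov)))
```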
NASA Technical Reports Server (NTRS)
Stern, Jennifer C.; Foustoukos, Dionysis I.; Sonke, Jeroen E.; Salters, Vincent J. M.
2014-01-01
The mobility of metals in soils and subsurface aquifers is strongly affected by sorption and complexation with dissolved organic matter, oxyhydroxides, clay minerals, and inorganic ligands. Humic substances (HS) are organic macromolecules with functional groups that have a strong affinity for binding metals, such as actinides. Thorium, often studied as an analog for tetravalent actinides, has also been shown to strongly associate with dissolved and colloidal HS in natural waters. The effects of HS on the mobilization dynamics of actinides are of particular interest in risk assessment of nuclear waste repositories. Here, we present conditional equilibrium binding constants (Kc,MHA) of thorium, hafnium, and zirconium-humic acid complexes from ligand competition experiments using capillary electrophoresis coupled with ICP-MS (CE-ICP-MS). Equilibrium dialysis ligand exchange (EDLE) experiments using size exclusion via a 1000 Da membrane were also performed to validate the CE-ICP-MS analysis. Experiments were performed at pH 3.5-7 with solutions containing one tetravalent metal (Th, Hf, or Zr), Elliot soil humic acid (EHA) or Pahokee peat humic acid (PHA), and EDTA. CE-ICP-MS and EDLE experiments yielded nearly identical binding constants for the metal-humic acid complexes, indicating that both methods are appropriate for examining metal speciation at conditions below neutral pH. We find that tetravalent metals form strong complexes with humic acids, with Kc,MHA several orders of magnitude above REE-humic complexes. Experiments were conducted at a range of dissolved HA concentrations to examine the effect of [HA]/[Th] molar ratio on Kc,MHA. At low metal loading conditions (i.e. elevated [HA]/[Th] ratios) the Th-HA binding constant reached values that were not affected by the relative abundance of humic acid and thorium. The importance of [HA]/[Th] molar ratios in constraining the equilibrium of MHA complexation is apparent when our estimated Kc,MHA values attained at very low metal loading conditions are compared to existing literature data. Overall, experimental data suggest that tetravalent transition metal/actinide-humic acid complexation is important over a wide range of pH values, including mildly acidic conditions, and thus these complexes should be included in speciation models.
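For orientation, a conditional binding constant for a 1:1 metal-humic complex can be written generically as below; this is a sketch of the usual operational form, not necessarily the exact convention adopted in the study:

$$K_{c,\mathrm{MHA}} = \frac{[\mathrm{MHA}]}{[\mathrm{M}]\,[\mathrm{HA}]}$$

where [MHA] is the concentration of the metal-humic complex, [M] the free (or ligand-exchangeable) metal concentration, and [HA] the concentration of humic acid binding sites; the constant is "conditional" because it holds only at the pH and ionic strength at which it was measured.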
Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-01-01
Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible from experiment, their estimation is required by calibrating model predictions with experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation. PMID:26150807
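The global optimization step can be pictured with the following sketch of simulated annealing fitting transition rates of a toy state-based model; the three-state kinetics, cost function and cooling schedule are illustrative stand-ins, not the published whole-blood SBM.

```python
import math, random

data = [0.60, 0.25, 0.15]  # observed fractions of cells in three states

def predict(k1, k2, t=1.0):
    """Toy sequential kinetics S0 -> S1 -> S2 with rates k1, k2."""
    s0 = math.exp(-k1 * t)
    if abs(k2 - k1) > 1e-12:
        s1 = k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))
    else:
        s1 = k1 * t * math.exp(-k1 * t)
    return [s0, s1, 1.0 - s0 - s1]

def cost(params):
    return sum((p - d) ** 2 for p, d in zip(predict(*params), data))

def simulated_annealing(x, temp=1.0, cooling=0.995, steps=20000):
    cur, cur_c = list(x), cost(x)
    best, best_c = list(cur), cur_c
    for _ in range(steps):
        cand = [max(1e-6, v + random.gauss(0, 0.05)) for v in cur]
        c = cost(cand)
        # always accept downhill moves; accept uphill with Boltzmann probability
        if c < cur_c or random.random() < math.exp((cur_c - c) / temp):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = list(cand), c
        temp *= cooling
    return best, best_c

print(simulated_annealing([0.5, 0.5]))
```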
Yiqi Luo; Dieter Gerten; Guerric Le Maire; William J. Parton; Ensheng Weng; Xuhui Zhou; Cindy Keough; Claus Beier; Philippe Ciais; Wolfgang Cramer; Jeffrey S. Dukes; Bridget Emmett; Paul J. Hanson; Alan Knapp; Sune Linder; Dan Nepstad; Lindsey Rustad
2008-01-01
Interactive effects of multiple global change factors on ecosystem processes are complex. It is relatively expensive to explore those interactions in manipulative experiments. We conducted a modeling analysis to identify potentially important interactions and to stimulate hypothesis formulation for experimental research. Four models were used to quantify interactive...
Importance of fish behaviour in modelling conservation problems: food limitation as an example
Steven Railsback; Bret Harvey
2011-01-01
Simulation experiments using the inSTREAM individual-based brown trout Salmo trutta population model explored the role of individual adaptive behaviour in food limitation, as an example of how behaviour can affect managers' understanding of conservation problems. The model includes many natural complexities in habitat (spatial and temporal variation in characteristics...
USDA-ARS?s Scientific Manuscript database
Soil carbon (C) models are important tools for examining complex interactions between climate, crop and soil management practices, and to evaluate the long-term effects of management practices on C-storage potential in soils. CQESTR is a process-based carbon balance model that relates crop residue a...
The Mentoring Relationship as a Complex Adaptive System: Finding a Model for Our Experience
ERIC Educational Resources Information Center
Jones, Rachel; Brown, Dot
2011-01-01
Mentoring theory and practice has evolved significantly during the past 40 years. Early mentoring models were characterized by the top-down flow of information and benefits to the protege. This framework was reconceptualized as a reciprocal model when scholars realized mentoring was a mutually beneficial process. Recently, in response to rapidly…
NASA Astrophysics Data System (ADS)
Swanson, Ryan David
The advection-dispersion equation (ADE) fails to describe non-Fickian solute transport breakthrough curves (BTCs) in saturated porous media in both laboratory and field experiments, necessitating the use of other models. The dual-domain mass transfer (DDMT) model partitions the total porosity into mobile and less-mobile domains with an exchange of mass between the two domains, and this model can reproduce better fits to BTCs in many systems than ADE-based models. However, direct experimental estimation of DDMT model parameters remains elusive and model parameters are often calculated a posteriori by an optimization procedure. Here, we investigate the use of geophysical tools (direct-current resistivity, nuclear magnetic resonance, and complex conductivity) to estimate these model parameters directly. We use two different samples of the zeolite clinoptilolite, a material shown to demonstrate solute mass transfer due to a significant internal porosity, and provide the first evidence that direct-current electrical methods can track solute movement into and out of a less-mobile pore space in controlled laboratory experiments. We quantify the effects of assuming single-rate DDMT for multirate mass transfer systems. We analyze pore structures using material characterization methods (mercury porosimetry, scanning electron microscopy, and X-ray computed tomography), and compare these observations to geophysical measurements. Nuclear magnetic resonance in conjunction with direct-current resistivity measurements can constrain mobile and less-mobile porosities, but complex conductivity may have little value in relation to mass transfer despite the hypothesis that mass transfer and complex conductivity length scales are related. Finally, we conduct a geoelectrically monitored tracer test at the Macrodispersion Experiment (MADE) site in Columbus, MS. We relate hydraulic and electrical conductivity measurements to generate a 3D hydraulic conductivity field, and compare to hydraulic conductivity fields estimated through ordinary kriging and sequential Gaussian simulation. Time-lapse electrical measurements are used to verify or dismiss aspects of breakthrough curves for different hydraulic conductivity fields. Our results quantify the potential for geophysical measurements to constrain single-rate DDMT parameters, show site-specific relations between hydraulic and electrical conductivity, and track solute exchange into and out of less-mobile domains.
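For reference, a common single-rate DDMT formulation for one-dimensional transport is sketched below; this is the generic textbook form, not necessarily the exact parameterization used in this work:

$$\theta_m \frac{\partial C_m}{\partial t} + \theta_{im} \frac{\partial C_{im}}{\partial t} = \theta_m D \frac{\partial^2 C_m}{\partial x^2} - \theta_m v \frac{\partial C_m}{\partial x}, \qquad \theta_{im} \frac{\partial C_{im}}{\partial t} = \alpha \,(C_m - C_{im}),$$

where $C_m$ and $C_{im}$ are the solute concentrations in the mobile and less-mobile domains, $\theta_m$ and $\theta_{im}$ the corresponding porosities, $v$ the pore-water velocity, $D$ the dispersion coefficient, and $\alpha$ the first-order mass transfer rate coefficient that the geophysical measurements aim to constrain.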
Complex interactions between diapirs and 4-D subduction driven mantle wedge circulation.
NASA Astrophysics Data System (ADS)
Sylvia, R. T.; Kincaid, C. R.
2015-12-01
Analogue laboratory experiments generate 4-D flow of mantle wedge fluid and capture the evolution of buoyant mesoscale diapirs. The mantle is modeled with viscous glucose syrup with an Arrhenius type temperature dependent viscosity. To characterize diapir evolution we experiment with a variety of fluids injected from multiple point sources. Diapirs interact with kinematically induced flow fields forced by subducting plate motions replicating a range of styles observed in dynamic subduction models (e.g., rollback, steepening, gaps). Data is collected using high definition timelapse photography and quantified using image velocimetry techniques. While many studies assume direct vertical connections between the volcanic arc and the deeper mantle source region, our experiments demonstrate the difficulty of creating near vertical conduits. Results highlight extreme curvature of diapir rise paths. Trench-normal deflection occurs as diapirs are advected downward away from the trench before ascending into wedge apex directed return flow. Trench parallel deflections up to 75% of trench length are seen in all cases, exacerbated by complex geometry and rollback motion. Interdiapir interaction is also important; upwellings with similar trajectory coalesce and rapidly accelerate. Moreover, we observe a new mode of interaction whereby recycled diapir material is drawn down along the slab surface and then initiates rapid fluid migration updip along the slab-wedge interface. Variability in trajectory and residence time leads to complex petrologic inferences. Material from disparate source regions can surface at the same location, mix in the wedge, or become fully entrained in creeping flow adding heterogeneity to the mantle. Active diapirism or any other vertical fluid flux mechanism employing rheological weakening lowers viscosity in the recycling mantle wedge affecting both solid and fluid flow characteristics. Many interesting and insightful results have been presented based upon 2-D, steady-state thermal and flow regimes. We reiterate the importance of 4-D time evolution in subduction models. Analogue experiments allow added feedbacks and complexity improving intuition and providing insight for further investigation.
NASA Astrophysics Data System (ADS)
Markauskaite, Lina; Kelly, Nick; Jacobson, Michael J.
2017-12-01
This paper gives a grounded cognition account of model-based learning of complex scientific knowledge related to socio-scientific issues, such as climate change. It draws on the results from a study of high school students learning about the carbon cycle through computational agent-based models and investigates two questions: First, how do students ground their understanding about the phenomenon when they learn and solve problems with computer models? Second, what are common sources of mistakes in students' reasoning with computer models? Results show that students ground their understanding in computer models in five ways: direct observation, straight abstraction, generalisation, conceptualisation, and extension. Students also incorporate into their reasoning their knowledge and experiences that extend beyond phenomena represented in the models, such as attitudes about unsustainable carbon emission rates, human agency, external events, and the nature of computational models. The most common difficulties of the students relate to seeing the modelled scientific phenomenon and connecting results from the observations with other experiences and understandings about the phenomenon in the outside world. An important contribution of this study is the constructed coding scheme for establishing different ways of grounding, which helps to understand some challenges that students encounter when they learn about complex phenomena with agent-based computer models.
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin
2014-01-01
Little research has addressed fault prediction for analog circuits. The few existing methods do not tie feature extraction and calculation to circuit analysis, so the calculation of fault indicators (FIs) often lacks a rational basis, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Because faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in the model in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of analog circuits' single components. Finally, it uses a particle filter (PF) to update parameters of the model and predicts the remaining useful performance (RUP) of analog circuits' single components. Because the FI feature set is calculated on a more rational basis, prediction accuracy is improved. The foregoing conclusions are verified by experiments.
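The particle-filter update can be sketched as follows for a toy degradation-trend model; the exponential fault-indicator trend and noise levels are assumptions of this sketch, not the paper's circuit model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                 # number of particles
true_rate = 0.08         # hidden degradation rate to be recovered

particles = rng.uniform(0.0, 0.3, N)   # each particle is a candidate rate
weights = np.full(N, 1.0 / N)

def fi_model(rate, t):
    """Fault indicator decaying as the component degrades (illustrative)."""
    return np.exp(-rate * t)

for t in range(1, 21):
    obs = fi_model(true_rate, t) + rng.normal(0, 0.02)  # noisy FI observation
    lik = np.exp(-0.5 * ((obs - fi_model(particles, t)) / 0.02) ** 2)
    weights *= lik
    weights /= weights.sum()
    # systematic resampling when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx] + rng.normal(0, 0.005, N)  # jitter
        weights = np.full(N, 1.0 / N)

print("estimated degradation rate:", np.sum(weights * particles))
```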
Tracer Flux Balance at an Urban Canyon Intersection
NASA Astrophysics Data System (ADS)
Carpentieri, Matteo; Robins, Alan G.
2010-05-01
Despite their importance for pollutant dispersion in urban areas, the special features of dispersion at street intersections are rarely taken into account by operational air quality models. Several previous studies have demonstrated the complex flow patterns that occur at street intersections, even with simple geometry. This study presents results from wind-tunnel experiments on a reduced scale model of a complex but realistic urban intersection, located in central London. Tracer concentration measurements were used to derive three-dimensional maps of the concentration field within the intersection. In combination with a previous study (Carpentieri et al., Boundary-Layer Meteorol 133:277-296, 2009) where the velocity field was measured in the same model, a methodology for the calculation of the mean tracer flux balance at the intersection was developed and applied. The calculation highlighted several limitations of current state-of-the-art canyon dispersion models, arising mainly from the complex geometry of the intersection. Despite its limitations, the proposed methodology could be further developed in order to derive, assess and implement street intersection dispersion models for complex urban areas.
Atmospheric stability and complex terrain: comparing measurements and CFD
NASA Astrophysics Data System (ADS)
Koblitz, T.; Bechmann, A.; Berg, J.; Sogachev, A.; Sørensen, N.; Réthoré, P.-E.
2014-12-01
For wind resource assessment, the wind industry is increasingly relying on Computational Fluid Dynamics models that focus on modeling the airflow in a neutrally stratified surface layer. So far, physical processes that are specific to the atmospheric boundary layer, for example the Coriolis force, buoyancy forces and heat transport, are mostly ignored in state-of-the-art flow solvers. In order to decrease the uncertainty of wind resource assessment, the effect of thermal stratification on the atmospheric boundary layer should be included in such models. The present work focuses on non-neutral atmospheric flow over complex terrain including physical processes like stability and Coriolis force. We examine the influence of these effects on the whole atmospheric boundary layer using the DTU Wind Energy flow solver EllipSys3D. To validate the flow solver, measurements from Benakanahalli hill, a field experiment that took place in India in early 2010, are used. The experiment was specifically designed to address the combined effects of stability and Coriolis force over complex terrain, and provides a dataset to validate flow solvers. Including those effects into EllipSys3D significantly improves the predicted flow field when compared against the measurements.
Radiation damage to DNA in DNA-protein complexes.
Spotheim-Maurizot, M; Davídková, M
2011-06-03
The most aggressive product of water radiolysis, the hydroxyl (OH) radical, is responsible for the indirect effect of ionizing radiation on DNA in solution under aerobic conditions. According to radiolytic footprinting experiments, the resulting strand breaks and base modifications are inhomogeneously distributed along the DNA molecule irradiated free or bound to ligands (polyamines, thiols, proteins). A Monte Carlo-based model simulating the reaction of OH radicals with the macromolecules, called RADACK, allows calculation of the relative probability of damage of each nucleotide of DNA irradiated alone or in complexes with proteins. RADACK calculations require the knowledge of the three-dimensional structure of DNA and its complexes (determined by X-ray crystallography, NMR spectroscopy or molecular modeling). The confrontation of the calculated values with the results of the radiolytic footprinting experiments, together with molecular modeling calculations, shows that: (1) the extent and location of the lesions are strongly dependent on the structure of DNA, which in turn is modulated by the base sequence and by the binding of proteins, and (2) the regions in contact with the protein can be protected against attack by the hydroxyl radicals via masking of the binding site and by scavenging of the radicals.
Monaural and binaural processing of complex waveforms
NASA Astrophysics Data System (ADS)
Trahiotis, Constantine; Bernstein, Leslie R.
1992-01-01
Our research concerned the manner in which the monaural and binaural auditory systems process information in complex sounds. Substantial progress was made in three areas, consistent with the objectives outlined in the original proposal. (1) New electronic equipment, including a NeXT computer, was purchased, installed and interfaced with the existing laboratory. Software was developed for generating the necessary complex digital stimuli and for running behavioral experiments utilizing those stimuli. (2) Monaural experiments showed that the CMR is not obtained successively and is reduced or non-existent when the flanking bands are pulsed rather than presented continuously. Binaural investigations revealed that the detectability of a tonal target in a masking level difference paradigm could be degraded by the presence of a spectrally remote interfering tone. (3) In collaboration with Dr. Richard Stern, theoretical efforts included the explication and evaluation of a weighted-image model of binaural hearing, attempts to extend the Stern-Colburn position-variable model to account for many crucial lateralization and localization data gathered over the past 50 years, and the continuation of efforts to incorporate into a general model notions that lateralization and localization of spectrally-rich stimuli depend upon the patterns of neural activity within a plane defined by frequency and interaural delay.
NASA Astrophysics Data System (ADS)
Maes, Julien; Geiger, Sebastian
2018-01-01
Laboratory experiments have shown that oil production from sandstone and carbonate reservoirs by waterflooding could be significantly increased by manipulating the composition of the injected water (e.g. by lowering the ionic strength). Recent studies suggest that a change of wettability induced by a change in surface charge is likely to be one of the driving mechanisms of the so-called low-salinity effect. In this case, the potential increase of oil recovery during waterflooding at low ionic strength would be strongly impacted by the inter-relations between flow, transport and chemical reaction at the pore-scale. Hence, a new numerical model that includes two-phase flow, solute reactive transport and wettability alteration is implemented based on Direct Numerical Simulation of the Navier-Stokes equations and surface complexation modelling. Our model is first used to match experimental results of oil droplet detachment from clay patches. We then study the effect of wettability change on pore-scale displacement for simple 2D calcite micro-models and evaluate the impact of several parameters such as water composition and injection velocity. Finally, we repeat the simulation experiments on a larger and more complex pore geometry representing a carbonate rock. Our simulations highlight two different effects of low salinity on oil production from carbonate rocks: a smaller number of oil clusters left in the pores after invasion, and a greater number of pores invaded.
Unexpected Results are Usually Wrong, but Often Interesting
NASA Astrophysics Data System (ADS)
Huber, M.
2014-12-01
In climate modeling, an unexpected result is usually wrong, arising from some sort of mistake. Despite the fact that we all bemoan uncertainty in climate, the field is underlain by a robust, successful body of theory, and any properly conducted modeling experiment is posed and conducted within that context. Consequently, if results from a complex climate model disagree with theory or with expectations from simpler models, much skepticism is in order. But this exposes the fundamental tension of using complex, sophisticated models. If simple models and theory were perfect there would be no reason for complex models--the entire point of sophisticated models is to see if unexpected phenomena arise as emergent properties of the system. In this talk, I will step through some paleoclimate examples, drawn from my own work, of unexpected results that emerge from complex climate models arising from mistakes of two kinds. The first kind of mistake, what I call a 'smart mistake', is an intentional incorporation of assumptions, boundary conditions, or physics that is in violation of theoretical or observational constraints. The second, a 'dumb mistake', is just that: an unintentional violation. Analysis of such mistaken simulations provides some potentially novel and certainly interesting insights into what is possible and right in paleoclimate modeling by forcing the reexamination of well-held assumptions and theories.
Using Video Modeling to Teach Complex Social Sequences to Children with Autism
ERIC Educational Resources Information Center
Nikopoulos, Christos K.; Keenan, Mickey
2007-01-01
This study, comprising two experiments, was designed to teach complex social sequences to children with autism. Experimental control was achieved by collecting data by means of within-system design methodology. Across a number of conditions children were taken to a room to view one of the four short videos of two people engaging in a simple…
ERIC Educational Resources Information Center
Doenyas, Ceymi
2016-01-01
We propose an unprecedented intervention for individuals with autism spectrum disorder (ASD) and their parents: the social living complex. Unlike existing social skills interventions, peer-mediated interventions here are not limited to the school/experiment duration and setting. Whereas other supported living services house adults with ASD only,…
ERIC Educational Resources Information Center
Aslan, Erhan
2017-01-01
Employing the complex adaptive systems (CAS) model, the present case study provides a self-report description of the attitudes, perceptions and experiences of an advanced adult L2 English learner with respect to his L2 phonological attainment. CAS is predicated on the notion that an individual's cognitive processes are intricately related to his…
Predictors of Successful Discharge from Out-of-Home Care among Children with Complex Needs
ERIC Educational Resources Information Center
Yampolskaya, Svetlana; Kershaw, Mary Ann; Banks, Steve
2006-01-01
We examined the predictors for successful discharge from out-of-home care of children with complex needs placed in a novel comprehensive service intervention (Manatee Model) and compared their discharge experiences to their out-of-home counterparts from the same county. The study design consisted of a longitudinal two-year comparison of these two…
Distance Training as Part of a Distance Consulting Solution.
ERIC Educational Resources Information Center
Fulantelli, Giovanni; Chiazzese, Giuseppe; Allegra, Mario
"Distance Training" models, when integrated in a more complex framework, such as a "Distance Consulting" model, present specific features and impose a revision of the strategies commonly adopted in distance training experiences. This paper reports on the distance training strategies adopted in a European funded project aimed at…
Molecular Modeling and Computational Chemistry at Humboldt State University.
ERIC Educational Resources Information Center
Paselk, Richard A.; Zoellner, Robert W.
2002-01-01
Describes a molecular modeling and computational chemistry (MM&CC) facility for undergraduate instruction and research at Humboldt State University. This facility complex allows the introduction of MM&CC throughout the chemistry curriculum with tailored experiments in general, organic, and inorganic courses as well as a new molecular modeling…
Two Maintenance Mechanisms of Verbal Information in Working Memory
ERIC Educational Resources Information Center
Camos, V.; Lagner, P.; Barrouillet, P.
2009-01-01
The present study evaluated the interplay between two mechanisms of maintenance of verbal information in working memory, namely articulatory rehearsal as described in Baddeley's model, and attentional refreshing as postulated in Barrouillet and Camos's Time-Based Resource-Sharing (TBRS) model. In four experiments using a complex span paradigm, we…
Model-Driven Design: Systematically Building Integrated Blended Learning Experiences
ERIC Educational Resources Information Center
Laster, Stephen
2010-01-01
Developing and delivering curricula that are integrated and that use blended learning techniques requires a highly orchestrated design. While institutions have demonstrated the ability to design complex curricula on an ad-hoc basis, these projects are generally successful at a great human and capital cost. Model-driven design provides a…
"No Soy de Aqui ni Soy de Alla": Transgenerational Cultural Identity Formation
ERIC Educational Resources Information Center
Cardona, Jose Ruben Parra; Busby, Dean M.; Wampler, Richard S.
2004-01-01
The transgenerational cultural identity model offers a detailed understanding of the immigration experience by challenging agendas of assimilation and by expanding on existing theories of cultural identity. Based on this model, immigration is a complex phenomenon influenced by many variables such as sociopsychological dimensions, family,…
Dynamics of Affective States during Complex Learning
ERIC Educational Resources Information Center
D'Mello, Sidney; Graesser, Art
2012-01-01
We propose a model to explain the dynamics of affective states that emerge during deep learning activities. The model predicts that learners in a state of engagement/flow will experience cognitive disequilibrium and confusion when they face contradictions, incongruities, anomalies, obstacles to goals, and other impasses. Learners revert into the…
Wang, Jiguang; Sun, Yidan; Zheng, Si; Zhang, Xiang-Sun; Zhou, Huarong; Chen, Luonan
2013-01-01
Synergistic interactions among transcription factors (TFs) and their cofactors collectively determine gene expression in complex biological systems. In this work, we develop a novel graphical model, called the Active Protein-Gene (APG) network model, to quantify regulatory signals of transcription in complex biomolecular networks by integrating both TF upstream-regulation and downstream-regulation high-throughput data. First, we theoretically and computationally demonstrate the effectiveness of APG by comparing with the traditional strategy based only on TF downstream-regulation information. We then apply this model to study spontaneous type 2 diabetic Goto-Kakizaki (GK) and Wistar control rats. Our biological experiments validate the theoretical results. In particular, SP1 is found to be a hidden TF with changed regulatory activity, and the loss of SP1 activity contributes to the increased glucose production during diabetes development. The APG model provides a theoretical basis for quantitatively elucidating transcriptional regulation by modelling TF combinatorial interactions and exploiting multilevel high-throughput information. PMID:23346354
Along-strike complex geometry of subduction zones - an experimental approach
NASA Astrophysics Data System (ADS)
Midtkandal, I.; Gabrielsen, R. H.; Brun, J.-P.; Huismans, R.
2012-04-01
Recent knowledge of the great geometric and dynamic complexity in subduction zones, combined with new capacity for analogue mechanical and numerical modeling, has sparked a number of studies on subduction processes. Not unexpectedly, such models reveal a complex relation between physical conditions during subduction initiation, the strength profile of the subducting plate, the thermo-dynamic conditions and the subduction zone geometries. One rare geometrical complexity of subduction that remains particularly controversial is the potential for polarity shift in subduction systems. The present experiments were therefore performed to explore the influence of the architecture, strength and strain velocity on complexities in subduction zones, focusing on along-strike variation of the collision zone. Of particular concern were the consequences for the geometry and kinematics of the transition zones between segments of contrasting subduction direction. Although the model design to some extent was inspired by the configuration along the Iberian-Eurasian suture zone, the results are also of significance for other orogens with complex along-strike geometries. The experiments were set up to explore the initial state of subduction only, and were accordingly terminated before slab subduction occurred. The model was built from layers of silicone putty and sand, tailored to simulate the assumed lithospheric geometries and strength-viscosity profiles along the plate boundary zone prior to contraction, and comprises two 'continental' plates separated by a thinner 'oceanic' plate that represents the narrow seaway. The experiment floats on a substrate of sodium polytungstate, representing the mantle. 24 experimental runs were performed, varying the thickness (and thus strength) of the upper mantle lithosphere, as well as the strain rate. Keeping all other parameters identical for each experiment, the models were shortened by a computer-controlled jackscrew while time-lapse images were recorded. After completion, the models were saturated with water and frozen, allowing for sectioning and profile inspection. The experiments were invariably characterized by different along-strike patterns of deformation, so that three distinct structural domains could be distinguished in all cases. Model descriptions are subdivided accordingly, including domain CC, simulating a continent-continent collision, domain OC, characterized by continent-ocean-continent collision, and domain T, representing the transition zone between domain CC and domain OC. The latter zone varied in width and complexity depending on the contrast in structural style developed in the two other domains; in cases where domain OC developed very differently from domain CC, the transition zone was generally wider and more complex. A typical experiment displayed the following features and strain history: in domain CC two principal thrust sheets are displayed, which obviously developed in an in-sequence foreland-directed fashion. The lowermost detachment nucleated at the base of the High Strength Lithospheric Mantle analogue, whereas the uppermost thrust was anchored within the "lower crust". The two thrusts operated in concert, the surface trace of the deepest dominating in the west, and the shallowest in the east.
The kinematic development of domain CC could be subdivided into four stages, including initiation of a symmetrical anticline with a minute amplitude situated directly above the velocity discontinuity defined by the plate contact (stage 1), contemporaneous development of the two thrusts (stage 2) and an associated asymmetrical anticline (stage 3), with a central collapse graben in the latest phase (stage 4). It is noted that segment CC in a clear majority of the experiments followed this pattern of development. In contrast, the configuration of domain OC displayed greater variation, and included north- and south-directed subduction, folding, growth of pop-up structures and triangle zones. In the "ocean crust" domain, stage 1 was characterized by the growth of a fault-propagation anticline with an E-W-oriented fold axis, ending with the surfacing of a north-vergent thrust. In stage 2, the contraction was concentrated to the south in the oceanic domain, again ending with the surfacing of a thrust, here with top-south transport. By continued movement (stage 3), the thrust fault propagated towards the east, crossing into the "continental" domain and linking with the fault systems of segment CC. The structure of domain T is dominated by the interference of faults propagating westwards from domain CC and eastwards from domain OC, respectively. The zone of overlap in the experiment was significant, and its central part had the geometry of a double "crocodile structure" (sensu Meissner 1989), separating the two areas of northerly and southerly subduction. Hence, its development is less easily subdivided into stages. Reference: Meissner, R., 1989: Rupture, creep lamellae and crocodiles: happenings in the continental crust. Terra Nova, 1, 17-28.
Experimental Investigation of the Formation of Complex Craters
NASA Astrophysics Data System (ADS)
Martellato, E.; Dörfler, M. A.; Schuster, B.; Wünnemann, K.; Kenkmann, T.
2017-09-01
The formation of complex impact craters is still poorly understood, because standard material models fail to explain the gravity-driven collapse at the observed size-range of a bowl-shaped transient crater into a flat-floored crater structure with a central peak or ring and terraced rim. To explain such a collapse the so-called Acoustic Fluidization (AF) model has been proposed. The AF assumes that heavily fractured target rocks surrounding the transient crater are temporarily softened by an acoustic field in the wake of an expanding shock wave generated upon impact. The AF has been successfully employed in numerous modeling studies of complex crater formation; however, there is no clear relationship between model parameters and observables. In this study, we present preliminary results of laboratory experiments aiming at relating the AF parameters to observables such as the grain size, average wave length of the acoustic field and its decay time τ relative to the crater formation time.
Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN
NASA Astrophysics Data System (ADS)
Talbot, Paul W.
As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
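As background, the generalized polynomial chaos expansion approximates a model response as a series in polynomials orthogonal with respect to the input probability density; the following is the generic form, with notation chosen here for illustration rather than taken from the dissertation:

$$u(\boldsymbol{\xi}) \approx \sum_{k=0}^{P} c_k\, \Psi_k(\boldsymbol{\xi}), \qquad c_k = \frac{\langle u, \Psi_k\rangle}{\langle \Psi_k, \Psi_k\rangle} \approx \frac{1}{\langle \Psi_k, \Psi_k\rangle} \sum_{q} w_q\, u(\boldsymbol{\xi}_q)\, \Psi_k(\boldsymbol{\xi}_q),$$

where $\boldsymbol{\xi}$ collects the uncertain inputs, $\Psi_k$ are the orthogonal basis polynomials, and stochastic collocation approximates each projection integral with quadrature nodes $\boldsymbol{\xi}_q$ and weights $w_q$, i.e., with a modest number of deterministic model runs instead of random sampling.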
NASA Astrophysics Data System (ADS)
Jameson, Donald L.; Grzybowski, Joseph J.; Hammels, Deb E.; Castellano, Ronald K.; Hoke, Molly E.; Freed, Kimberly; Basquill, Sean; Mendel, Angela; Shoemaker, William J.
1998-04-01
This article describes a four-reaction sequence for the synthesis of two organometallic "cobaloxime" derivatives. The concept of "Umpolung", or reversal of reactivity, is demonstrated in the preparation of the complexes. The complex Co(dmgH)2(4-t-BuPy)Et is formed by the reaction of a cobalt(I) intermediate (cobalt in the role of nucleophile) with ethyl iodide. The complex Co(dmgH)2(4-t-BuPy)Ph is formed by the reaction of PhMgBr with a cobalt(III) intermediate (cobalt in the role of electrophile). All the products contain cobalt in the diamagnetic +3 oxidation state and are readily characterized by proton and carbon NMR. The four-reaction sequence may be completed in two 4-hour lab periods. Cobaloximes are well known as model complexes for vitamin B-12, and the experiment exposes students to aspects of classical coordination chemistry, organometallic chemistry and bioinorganic chemistry. The experiment also illustrates an important reactivity parallel between organic and organometallic chemistry.
NASA Astrophysics Data System (ADS)
Scheerer, O.; Höhne, M.; Juda, U.; Riemann, H.
1997-10-01
In this article, we report on complexes in silicon investigated by electron paramagnetic resonance (EPR). In silicon doped with C and Pt we detected two different complexes: cr-1Pt (cr: carbon-related, 1Pt: one Pt atom) and cr-3Pt. The complexes have similar EPR properties. They show a trigonal symmetry with effective g-values geff,⊥=2g⊥≈4 and geff,‖=g‖≈2 (g⊥, g‖ true g-values). The g-values can be explained by a spin Hamiltonian with a large fine-structure energy (electron spin S=3/2) and a smaller Zeeman interaction. The participation of platinum in the complexes is proved by the hyperfine interaction. From experiments with varying carbon concentration we conclude that the complexes contain carbon. Atomistic models based on the Watkins vacancy model for substitutional Pt were developed.
ERIC Educational Resources Information Center
Lees, Anna; Kennedy, Adam S.
2017-01-01
The relevance and effectiveness of traditional, course- and clinical-experience-based models of teacher preparation have been called into question, and institutions of teacher education must respond to the changing landscape of educational policy, which increasingly emphasizes that candidates must be prepared for challenges faced in complex,…
Norris, Rebecca L; Bailey, Rachel L; Bolls, Paul D; Wise, Kevin R
2012-01-01
This experiment explored how the emotional tone and visual complexity of direct-to-consumer (DTC) drug advertisements affect the encoding and storage of specific risk and benefit statements about each of the drugs in question. Results are interpreted under the limited capacity model of motivated mediated message processing framework. Findings suggest that DTC drug ads should be pleasantly toned and high in visual complexity in order to maximize encoding and storage of risk and benefit information.
Thermodynamic Studies to Support Actinide/Lanthanide Separations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Linfeng
2016-09-04
Thermodynamic data on the complexation of Np(V) with HEDTA in a wide pH region were re-modeled by including a dimeric complex species, (NpO₂)₂(OH)₂L₂⁶⁻, where L³⁻ stands for the fully deprotonated HEDTA ligand, and better fits were achieved for the spectrophotometric data. The presence of the dimeric complex species in the high pH region was verified for the first time by EXAFS experiments at the Stanford Synchrotron Radiation Laboratory (SSRL).
Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L
2016-07-01
The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. Both experiences were not associated with each other, suggesting independent forms of social interactions. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed.
High-fidelity meshes from tissue samples for diffusion MRI simulations.
Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C
2010-01-01
This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
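The core simulation loop can be pictured with the free-diffusion sketch below, which synthesizes a pulsed-gradient spin-echo (PGSE) signal from random walkers; restriction by the mesh (membrane-intersection tests at each step) is omitted, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_walkers, n_steps = 2000, 1000
D = 2.0e-9                    # free diffusivity (m^2/s)
dt = 1.0e-5                   # time step (s)
sigma = np.sqrt(2 * D * dt)   # per-axis step standard deviation

# Gaussian random walk in 3-D; a mesh model would reject membrane crossings here
steps = rng.normal(0.0, sigma, size=(n_steps, n_walkers, 3))
positions = np.cumsum(steps, axis=0)

# PGSE: phase accrued along z during two gradient pulses of duration delta,
# separated by Delta, with gradient strength g
gamma = 2.675e8                          # gyromagnetic ratio (rad/s/T)
g, delta, Delta = 0.04, 0.003, 0.007     # T/m, s, s
i1, i2 = int(delta / dt), int(Delta / dt)
phase = gamma * g * dt * (positions[:i1, :, 2].sum(axis=0)
                          - positions[i2:i2 + i1, :, 2].sum(axis=0))
signal = np.abs(np.mean(np.exp(1j * phase)))

b = (gamma * g * delta) ** 2 * (Delta - delta / 3)  # b-value
print(signal, np.exp(-b * D))  # simulated vs analytic free-diffusion attenuation
```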
Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen‐You; Schneidman‐Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez‐Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan‐Xiang; Ritchie, David W.; Maigret, Bernard; Devignes, Marie‐Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A.G.; Bates, Paul A.; Ben‐Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S.; Weng, Zhiping; Vreven, Thom; Pierce, Brian G.; Borrman, Tyler M.; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P.G.L.M.; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S.J.; Visscher, Koen; Kastritis, Panagiotis L.; Bonvin, Alexandre M.J.J.; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X.; Kim, Hyung‐Rae; Roy, Amit; Han, Xusi; Esquivel‐Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J.; Fuller, Jonathan C.; Wade, Rebecca C.; Anishchenko, Ivan; Kundrotas, Petras J.; Vakser, Ilya A.; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero‐Durana, Miguel; Jiménez‐García, Brian; Moal, Iain H.; Férnandez‐Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R.; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A.; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S.; Gray, Jeffrey J.; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey
2016-01-01
We present the results for CAPRI Round 30, the first joint CASP-CAPRI experiment, which brought together experts from the protein structure prediction and protein-protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact-sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology-built subunit models and the smaller pair-wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. Proteins 2016; 84(Suppl 1):323–348. PMID:27122118
Hybrid modeling and empirical analysis of automobile supply chain network
NASA Astrophysics Data System (ADS)
Sun, Jun-yan; Tang, Jian-ming; Fu, Wei-ping; Wu, Bing-ying
2017-05-01
Based on the connection mechanism of nodes, which automatically select upstream and downstream agents, a simulation model for the dynamic evolutionary process of a consumer-driven automobile supply chain is established by integrating ABM and discrete modeling in a GIS-based map. First, the model's rationality is verified by analyzing the consistency of sales, and of changes in various agent parameters, between the simulation model and a real automobile supply chain. Second, using complex network theory, the hierarchical structures of the model and the relationships of networks at different levels are analyzed to calculate characteristic parameters such as mean distance, mean clustering coefficient, and degree distribution, which verify that the model is a typical scale-free, small-world network. Finally, the motion law of the model is analyzed from the perspective of complex self-adaptive systems. The chaotic state of the simulation system is verified, which suggests that the system has typical nonlinear characteristics. This model not only macroscopically illustrates the dynamic evolution of the complex networks of an automobile supply chain but also microcosmically reflects the business process of each agent. Moreover, constructing and simulating the system by combining CAS theory and complex networks supplies a novel method for supply chain analysis, as well as a theoretical basis and practical reference for the supply chain analysis of auto companies.
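As a quick illustration of the network checks named in this abstract (not the authors' code), the characteristic parameters can be computed with networkx on any graph exported from such a simulation; the Barabási-Albert graph below is only a stand-in for the simulated agent network:

```python
# Sketch of the scale-free / small-world checks described in the abstract:
# mean distance, mean clustering coefficient, and degree distribution.
import networkx as nx

G = nx.barabasi_albert_graph(n=500, m=2, seed=1)  # placeholder agent network

mean_distance = nx.average_shortest_path_length(G)
mean_clustering = nx.average_clustering(G)
degree_histogram = nx.degree_histogram(G)  # index = degree, value = node count

# Small-world indication: short mean distance with relatively high clustering;
# scale-free indication: heavy-tailed degree histogram.
print(mean_distance, mean_clustering, degree_histogram[:10])
```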
NASA Astrophysics Data System (ADS)
Popke, Dagmar; Bony, Sandrine; Mauritsen, Thorsten; Stevens, Bjorn
2015-04-01
Model simulations with state-of-the-art general circulation models reveal a strong disagreement concerning the simulated regional precipitation patterns and their changes with warming. The deviating precipitation response persists even when the complexity of the model experiment is reduced to aquaplanet simulations with prescribed sea surface temperatures (Stevens and Bony, 2013). To assess feedbacks between clouds and radiation on precipitation responses, we analyze data from five models performing the aquaplanet simulations of the Clouds On Off Klima Intercomparison Experiment (COOKIE), in which the interaction of clouds and radiation is inhibited. Although cloud radiative effects are then disabled, the precipitation patterns among models are as diverse as with cloud radiative effects switched on. Disentangling the differing model responses in such simplified experiments thus appears to be key to better understanding the simulated regional precipitation in more standard configurations. By analyzing the local moisture and moist static energy budgets in the COOKIE experiments, we investigate likely causes for the disagreement among models. Reference: Stevens, B. & Bony, S. (2013). What Are Climate Models Missing? Science, 340, 1053-1054.
Stollenwerk, Kenneth G.
1998-01-01
A natural-gradient tracer test was conducted in an unconfined sand and gravel aquifer on Cape Cod, Massachusetts. Molybdate was included in the injectate to study the effects of variable groundwater chemistry on its aqueous distribution and to evaluate the reliability of laboratory experiments for identifying and quantifying reactions that control the transport of reactive solutes in groundwater. Transport of molybdate in this aquifer was controlled by adsorption. The amount adsorbed varied with aqueous chemistry that changed with depth as freshwater recharge mixed with a plume of sewage-contaminated groundwater. Molybdate adsorption was strongest near the water table where pH (5.7) and the concentration of the competing solutes phosphate (2.3 micromolar) and sulfate (86 micromolar) were low. Adsorption of molybdate decreased with depth as pH increased to 6.5, phosphate increased to 40 micromolar, and sulfate increased to 340 micromolar. A one-site diffuse-layer surface-complexation model and a two-site diffuse-layer surface-complexation model were used to simulate adsorption. Reactions and equilibrium constants for both models were determined in laboratory experiments and used in the reactive-transport model PHAST to simulate the two-dimensional transport of molybdate during the tracer test. No geochemical parameters were adjusted in the simulation to improve the fit between model and field data. Both models simulated the travel distance of the molybdate cloud to within 10% during the 2-year tracer test; however, the two-site diffuse-layer model more accurately simulated the molybdate concentration distribution within the cloud.
Informational Entropy and Bridge Scour Estimation under Complex Hydraulic Scenarios
NASA Astrophysics Data System (ADS)
Pizarro, Alonso; Link, Oscar; Fiorentino, Mauro; Samela, Caterina; Manfreda, Salvatore
2017-04-01
Bridges are important for society because they allow social, cultural and economic connectivity. Flood events can compromise the safety of bridge piers, up to complete collapse. The bridge scour phenomenon has been described by empirical formulae deduced from hydraulic laboratory experiments. The range of applicability of such models is restricted by the specific hydraulic conditions or flume geometry used for their derivation (e.g., water depth, mean flow velocity, pier diameter and sediment properties). We seek a general formulation able to capture the main dynamics of the process over a wide range of hydraulic and geometric configurations, allowing the analysis to be extended to different contexts. Therefore, exploiting the Principle of Maximum Entropy (POME) and applying it to the recently proposed dimensionless effective flow work, W*, we derived a simple model characterized by only one parameter. The proposed Bridge Scour Entropic (BRISENT) model performs well under complex hydraulic conditions as well as under steady-state flow, capturing the evolution of scour in several hydraulic configurations even though it contains only one parameter. Furthermore, results show that the model parameter is controlled by the geometric configuration of the experiment, which offers a possible strategy for a priori calibration of the model parameter. The BRISENT model is thus a good candidate for estimating time-dependent scour depth under complex hydraulic scenarios, and the authors are keen to apply the idea to describing scour behavior during a real flood event. Keywords: Informational entropy, Sediment transport, Bridge pier scour, Effective flow work.
Regional-Scale Salt Tectonics Modelling: Bench-Scale Validation and Extension to Field-Scale
NASA Astrophysics Data System (ADS)
Crook, A. J. L.; Yu, J. G.; Thornton, D. A.
2010-05-01
The role of salt in the evolution of the West African continental margin, and in particular its impact on hydrocarbon migration and trap formation, is an important research topic. It has attracted many researchers who have based their research on bench-scale experiments, numerical models and seismic observations. This research has shown that the evolution is very complex. For example, regional analogue bench-scale models of the Angolan margin (Fort et al., 2004) indicate a complex system with an upslope extensional domain with sealed tilted blocks, growth fault and rollover systems and extensional diapirs, and a downslope contractional domain with squeezed diapirs, polyharmonic folds and thrust faults, and late-stage folding and thrusting. Numerical models have the potential to provide additional insight into the evolution of these salt-driven passive margins. The longer-term aim is to calibrate regional-scale evolution models, and then to evaluate the effect of the depositional history on the current-day geomechanical and hydrogeologic state in potential target hydrocarbon reservoir formations adjacent to individual salt bodies. To achieve this goal, the burial and deformational history of the sediment must be modelled from initial deposition to the current-day state, while also accounting for the reaction and transport processes occurring in the margin. Accurate forward modelling is, however, complex and necessitates advanced procedures for the prediction of fault formation and evolution, for the representation of the extreme deformations in the salt, and for coupling the geomechanical, fluid flow and temperature fields. The evolution of the sediment due to a combination of mechanical compaction, chemical compaction and creep relaxation must also be represented. In this paper, ongoing research on a computational approach for forward modelling complex structural evolution, with particular reference to passive margins driven by salt tectonics, is presented. The approach is an extension of a previously published approach (Crook et al., 2006a, 2006b) that focused on predictive modelling of structure evolution in 2-D sandbox experiments, in particular two extensional sandbox experiments that exhibit complex fault development, including a series of superimposed crestal collapse graben systems (McClay, 1990). The formulation adopts a finite-strain Lagrangian method, complemented by advanced localization prediction algorithms and robust, efficient automated adaptive meshing techniques. The sediment is represented by an elasto-viscoplastic constitutive model based on extended critical state concepts, which enables representation of the combined effect of mechanical and chemical compaction. This is achieved by directly coupling the evolution of the material state boundary surface with both the mechanically and chemically driven porosity change. Using these procedures, the evolution of the geological structures arises naturally from the imposed boundary conditions without the requirement of seeding with initial imperfections. Simulations are presented for regional bench-scale models based on the analogue experiments presented by Fort et al. (2004), together with additional insights provided by the numerical models. It is shown that the behaviour observed in both the extensional and compressional zones of these analogue models arises naturally in the finite element simulations.
Extension of these models to the field-scale is then discussed and several simulations are presented to highlight important issues related to practical field-scale numerical modelling.
Earthquake sequence simulations with measured properties for JFAST core samples
NASA Astrophysics Data System (ADS)
Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro
2017-08-01
Since the 2011 Tohoku-Oki earthquake, multi-disciplinary observational studies have promoted our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. We also have suggestions for mechanical properties of the fault from the experimental side. In the present study, numerical models of earthquake sequences are presented, accounting for the experimental outcomes and being consistent with observations of both long-term and coseismic fault behaviour and thermal measurements. Among the constraints, a previous study of friction experiments for samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex rate dependences: a and a-b values change with the slip rate. In order to express such complexity, we generalize a rate- and state-dependent friction law to a quadratic form in terms of the logarithmic slip rate. The constraints from experiments reduced the degrees of freedom of the model significantly, and we managed to find a plausible model by changing only a few parameters. Although potential scale effects between lab experiments and natural faults are important problems, experimental data may be useful as a guide in exploring the huge model parameter space. This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'.
Earthquake sequence simulations with measured properties for JFAST core samples.
Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro
2017-09-28
Since the 2011 Tohoku-Oki earthquake, multi-disciplinary observational studies have promoted our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. We also have suggestions for mechanical properties of the fault from the experimental side. In the present study, numerical models of earthquake sequences are presented, accounting for the experimental outcomes and being consistent with observations of both long-term and coseismic fault behaviour and thermal measurements. Among the constraints, a previous study of friction experiments for samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex rate dependences: a and a - b values change with the slip rate. In order to express such complexity, we generalize a rate- and state-dependent friction law to a quadratic form in terms of the logarithmic slip rate. The constraints from experiments reduced the degrees of freedom of the model significantly, and we managed to find a plausible model by changing only a few parameters. Although potential scale effects between lab experiments and natural faults are important problems, experimental data may be useful as a guide in exploring the huge model parameter space.This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'. © 2017 The Author(s).
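For orientation, the generalization described in this abstract can be sketched as follows; the exact parameterization used by the authors may differ, but the idea is that a second-order term in the logarithmic slip rate makes the effective rate dependence itself rate-dependent:

```latex
% Steady-state rate-and-state friction, linear in the logarithmic slip rate:
\mu_{ss}(V) = \mu_0 + (a - b)\,\ln\!\frac{V}{V_0}
% One illustrative quadratic generalization (assumed form, for sketching only):
\mu_{ss}(V) = \mu_0 + c_1 \ln\!\frac{V}{V_0}
            + c_2 \Bigl(\ln\!\frac{V}{V_0}\Bigr)^{2}
% so that the effective rate dependence
\frac{d\mu_{ss}}{d\ln V} = c_1 + 2\,c_2 \ln\!\frac{V}{V_0}
% varies with slip rate, as the JFAST friction data require.
```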
A Simplified Model of ARIS for Optimal Controller Design
NASA Technical Reports Server (NTRS)
Beech, Geoffrey S.; Hampton, R. David; Kross, Denny (Technical Monitor)
2001-01-01
Many space-science experiments require active vibration isolation. Boeing's Active Rack Isolation System (ARIS) isolates experiments at the rack (vs. experiment or sub-experiment) level, with multiple experiments per rack. An ARIS-isolated rack typically employs eight actuators and thirteen umbilicals; the umbilicals provide services such as power, data transmission, and cooling. Hampton, et al., used "Kane's method" to develop an analytical, nonlinear, rigid-body model of ARIS that includes full actuator dynamics (inertias). This model, less the umbilicals, was first implemented for simulation by Beech and Hampton, who developed and tested their model using two commercial-off-the-shelf (COTS) software packages. Rupert, et al., added umbilical-transmitted disturbances to this nonlinear model. Because the nonlinear model, even for the untethered system, is both exceedingly complex and "encapsulated" inside these COTS tools, it is largely inaccessible to ARIS controller designers. This paper shows that ISPR rattle-space constraints and small ARIS actuator masses permit considerable model simplification, without significant loss of fidelity. First, for various loading conditions, comparisons are made between the dynamic responses of the nonlinear model (untethered) and a truth model. Then comparisons are made among nonlinear, linearized, and linearized reduced-mass models. It is concluded that all three models capture the significant system rigid-body dynamics, with the third being preferred due to its relative simplicity.
NASA Technical Reports Server (NTRS)
Cotton, W. R.; Tripoli, G. J.
1982-01-01
Observational requirements for predicting convective storm development and intensity, as suggested by recent numerical experiments, are examined. Recent 3D numerical experiments are interpreted with regard to the relationship between overshooting tops and surface wind gusts. The development of software for emulating satellite-inferred cloud properties using 3D cloud-model-predicted data and the simulation of the Heymsfield (1981) Northern Illinois storm are described, as well as the development of a conceptual/semi-quantitative model of eastward-propagating mesoscale convective complexes forming to the lee of the Rocky Mountains.
Characterizing and Modeling the Noise and Complex Impedance of Feedhorn-Coupled TES Polarimeters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Appel, J. W.; Beall, J. A.; Essinger-Hileman, T.
2009-12-16
We present results from modeling the electrothermal performance of feedhorn-coupled transition edge sensor (TES) polarimeters under development for use in cosmic microwave background (CMB) polarization experiments. Each polarimeter couples radiation from a corrugated feedhorn through a planar orthomode transducer, which transmits power from orthogonal polarization modes to two TES bolometers. We model our TES with two- and three-block thermal architectures. We fit the complex impedance data at multiple points in the TES transition. From the fits, we predict the noise spectra. We present comparisons of these predictions to the data for two TESes on a prototype polarimeter.
Design Requirements for Communication-Intensive Interactive Applications
NASA Astrophysics Data System (ADS)
Bolchini, Davide; Garzotto, Franca; Paolini, Paolo
Online interactive applications call for new requirements paradigms to capture the growing complexity of computer-mediated communication. Crafting successful interactive applications (such as websites and multimedia) involves modeling the requirements for the user experience, including those leading to content design, usable information architecture and interaction, in profound coordination with the communication goals of all stakeholders involved, ranging from persuasion to social engagement to calls to action. To face this grand challenge, we propose a methodology for modeling communication requirements and provide a set of operational conceptual tools to be used in complex projects with multiple stakeholders. Through examples from real-life projects and lessons learned from direct experience, we draw on the concepts of brand, value, communication goals, and information and persuasion requirements to systematically guide analysts in mastering the multifaceted connections among these elements as drivers of successful communication designs.
A wind tunnel study on the effects of complex topography on wind turbine performance
NASA Astrophysics Data System (ADS)
Howard, Kevin; Hu, Stephen; Chamorro, Leonardo; Guala, Michele
2012-11-01
A set of wind tunnel experiments was conducted to study the response of a wind turbine under flow conditions typically observed at the wind farm scale in complex terrain. A scale-model wind turbine was placed in a fully developed turbulent boundary layer flow in the SAFL Wind Tunnel. Experiments focused on the performance of the turbine model under the effects induced by a second upwind turbine or by a three-dimensional, sinusoidal hill peaking at the turbine hub height. High-frequency measurements of fluctuating streamwise and wall-normal velocities were obtained with an X-wire anemometer, simultaneously with the rotor angular velocity and the turbine(s) voltage output. Velocity measurements in the wake of the first turbine and of the hill were used to determine the inflow conditions for the downwind test turbine. Turbine performance was inferred from the mean and fluctuating voltage statistics. Specific experiments were devoted to relating the mean voltage to the mean hub velocity, and the fluctuating voltage to the unsteadiness in the rotor kinematics induced by the perturbed (hill or turbine) or unperturbed (boundary layer) large scales of the incoming turbulent flow. Results show that the voltage signal can be used to assess turbine performance in complex flows.
Statistical Surrogate Modeling of Atmospheric Dispersion Events Using Bayesian Adaptive Splines
NASA Astrophysics Data System (ADS)
Francom, D.; Sansó, B.; Bulaevskaya, V.; Lucas, D. D.
2016-12-01
Uncertainty in the inputs of complex computer models, including atmospheric dispersion and transport codes, is often assessed via statistical surrogate models. Surrogate models are computationally efficient statistical approximations of expensive computer models that enable uncertainty analysis. We introduce Bayesian adaptive spline methods for producing surrogate models that capture the major spatiotemporal patterns of the parent model, while satisfying all the necessities of flexibility, accuracy and computational feasibility. We present novel methodological and computational approaches motivated by a controlled atmospheric tracer release experiment conducted at the Diablo Canyon nuclear power plant in California. Traditional methods for building statistical surrogate models often do not scale well to experiments with large amounts of data. Our approach is well suited to experiments involving large numbers of model inputs, large numbers of simulations, and functional output for each simulation. Our approach allows us to perform global sensitivity analysis with ease. We also present an approach to calibration of simulators using field data.
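A minimal sketch of this surrogate workflow, with a Gaussian-process regressor standing in for the paper's Bayesian adaptive splines and a hypothetical `expensive_model` in place of the dispersion code:

```python
# Sample the simulator inputs, run the expensive model on the design points,
# then fit a cheap statistical approximation for uncertainty analysis.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_model(x):            # placeholder for a dispersion simulation
    return np.sin(3 * x[0]) + 0.5 * x[1] ** 2

sampler = qmc.LatinHypercube(d=2, seed=0)
X = sampler.random(n=64)           # 64 design points in [0, 1]^2
y = np.array([expensive_model(x) for x in X])

surrogate = GaussianProcessRegressor().fit(X, y)
y_new, y_std = surrogate.predict(sampler.random(n=8), return_std=True)
print(y_new, y_std)                # cheap predictions with uncertainty
```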
MaRIE theory, modeling and computation roadmap executive summary
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lookman, Turab
The confluence of the MaRIE (Matter-Radiation Interactions in Extreme) and extreme (exascale) computing timelines offers a unique opportunity to co-design the elements of materials discovery with theory and high-performance computing, itself co-designed by constrained optimization of hardware and software, and experiments. MaRIE's theory, modeling, and computation (TMC) roadmap efforts have paralleled the 'MaRIE First Experiments' science activities in the areas of materials dynamics, irradiated materials and complex functional materials in extreme conditions. The documents that follow this executive summary describe in detail, for each of these areas, the current state of the art, the gaps that exist and the roadmap to MaRIE and beyond. Here we integrate the various elements to articulate an overarching theme related to the role and consequences of heterogeneities, which manifest as competing states in a complex energy landscape. MaRIE experiments will locate, measure and follow the dynamical evolution of these heterogeneities. Our TMC vision spans the various pillar sciences and highlights the key theoretical and experimental challenges. We also present a theory, modeling and computation roadmap of the path to and beyond MaRIE in each of the science areas.
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 3 (L1V3).
Bergmann, Frank T; Cooper, Jonathan; König, Matthias; Moraru, Ion; Nickerson, David; Le Novère, Nicolas; Olivier, Brett G; Sahle, Sven; Smith, Lucian; Waltemath, Dagmar
2018-03-19
The creation of computational simulation experiments to inform modern biological research poses challenges to reproduce, annotate, archive, and share such experiments. Efforts such as SBML or CellML standardize the formal representation of computational models in various areas of biology. The Simulation Experiment Description Markup Language (SED-ML) describes what procedures the models are subjected to, and the details of those procedures. These standards, together with further COMBINE standards, describe models sufficiently well for the reproduction of simulation studies among users and software tools. The Simulation Experiment Description Markup Language (SED-ML) is an XML-based format that encodes, for a given simulation experiment, (i) which models to use; (ii) which modifications to apply to models before simulation; (iii) which simulation procedures to run on each model; (iv) how to post-process the data; and (v) how these results should be plotted and reported. SED-ML Level 1 Version 1 (L1V1) implemented support for the encoding of basic time course simulations. SED-ML L1V2 added support for more complex types of simulations, specifically repeated tasks and chained simulation procedures. SED-ML L1V3 extends L1V2 by means to describe which datasets and subsets thereof to use within a simulation experiment.
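To make items (i)-(v) concrete, here is an illustrative, hand-written skeleton of a SED-ML document (element names follow the SED-ML specification; attributes and content are elided, and the snippet is not a validated L1V3 file):

```python
# Parse an illustrative SED-ML skeleton with the standard library. The
# comments map each list to the five parts enumerated in the abstract.
import xml.etree.ElementTree as ET

SEDML_SKELETON = """<?xml version="1.0" encoding="utf-8"?>
<sedML xmlns="http://sed-ml.org/sed-ml/level1/version3" level="1" version="3">
  <listOfModels/>          <!-- (i) which models to use, (ii) changes to apply -->
  <listOfSimulations/>     <!-- (iii) simulation procedures to run -->
  <listOfTasks/>           <!-- binds models to simulations -->
  <listOfDataGenerators/>  <!-- (iv) post-processing of the raw results -->
  <listOfOutputs/>         <!-- (v) plots and reports -->
</sedML>"""

# Encode to bytes: ElementTree rejects str input carrying an encoding declaration.
root = ET.fromstring(SEDML_SKELETON.encode("utf-8"))
print(root.tag, root.attrib)
```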
Synchronization Experiments With A Global Coupled Model of Intermediate Complexity
NASA Astrophysics Data System (ADS)
Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin
2013-04-01
In the super modeling approach an ensemble of imperfect models are connected through nudging terms that nudge the solution of each model to the solution of all other models in the ensemble. The goal is to obtain a synchronized state through a proper choice of connection strengths that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum, the ocean is integrated using weighted averaged surface fluxes. In particular we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble mean solution. The results are interpreted using a low order atmosphere-ocean toy model.
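In sketch form, the connected ensemble described here is commonly written as follows (our notation, not necessarily the authors' exact scheme):

```latex
% Super-model coupling via nudging (illustrative notation):
\frac{d\mathbf{x}_i}{dt} = \mathbf{f}_i(\mathbf{x}_i)
  + \sum_{j \neq i} C_{ij}\,\bigl(\mathbf{x}_j - \mathbf{x}_i\bigr),
  \qquad i = 1, \dots, M,
% with connection strengths C_{ij} \ge 0 chosen so that the M imperfect
% atmospheres synchronize; the shared ocean is driven by weighted-average
% surface fluxes, e.g. \bar{F} = \sum_i w_i F_i with \sum_i w_i = 1.
```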
Hou, Chang-Yu; Feng, Ling; Seleznev, Nikita; Freed, Denise E
2018-09-01
In this work, we establish an effective medium model to describe the low-frequency complex dielectric (conductivity) dispersion of dilute clay suspensions. We use previously obtained low-frequency polarization coefficients for a charged oblate spheroidal particle immersed in an electrolyte as the building block for the Maxwell Garnett mixing formula to model the dilute clay suspension. The complex conductivity phase dispersion exhibits a near-resonance peak when the clay grains have a narrow size distribution. The peak frequency is associated with the size distribution as well as the shape of clay grains and is often referred to as the characteristic frequency. In contrast, if the size of the clay grains has a broad distribution, the phase peak is broadened and can disappear into the background of the canonical phase response of the brine. To benchmark our model, the low-frequency dispersion of the complex conductivity of dilute clay suspensions is measured using a four-point impedance measurement, which can be reliably calibrated in the frequency range between 0.1 Hz and 10 kHz. By using a minimal number of fitting parameters when reliable information is available as input for the model and carefully examining the issue of potential over-fitting, we found that our model can be used to fit the measured dispersion of the complex conductivity with reasonable parameters. The good match between the modeled and experimental complex conductivity dispersion allows us to argue that our simplified model captures the essential physics for describing the low-frequency dispersion of the complex conductivity of dilute clay suspensions. Copyright © 2018 Elsevier Inc. All rights reserved.
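For reference, the textbook Maxwell Garnett rule for spherical inclusions is easy to state and compute; the paper's model replaces the spherical polarizability with low-frequency polarization coefficients for charged oblate spheroids, so the sketch below is only the classical special case, with made-up numbers:

```python
# Classical Maxwell Garnett mixing rule for a dilute suspension of spherical
# inclusions (volume fraction f) in a host medium, in complex permittivity
# (or conductivity) form.
def maxwell_garnett(eps_host: complex, eps_incl: complex, f: float) -> complex:
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# Hypothetical values: brine-like host, low-permittivity grains, 2% loading.
eps_eff = maxwell_garnett(eps_host=80 - 5j, eps_incl=4 - 0.1j, f=0.02)
print(eps_eff)
```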
Harvey, Desley; Foster, Michele; Strivens, Edward; Quigley, Rachel
2017-05-01
Objective The aim of the present study was to describe the care transition experiences of older people who transfer between subacute and primary care, and to identify factors that influence these experiences. A further aim of the study was to identify ways to enhance the Geriatric Evaluation and Management (GEM) model of care and improve local coordination of services for older people. Methods The present study was an exploratory, longitudinal case study involving repeat interviews with 19 patients and carers, patient chart audits and three focus groups with service providers. Interview transcripts were coded and synthesised to identify recurring themes. Results Patients and carers experienced care transitions as dislocating and unpredictable within a complex and turbulent service context. The experience was characterised by precarious self-management in the community, floundering with unmet needs and holistic care within the GEM service. Patient and carer attitudes to seeking help, quality and timeliness of communication and information exchange, and system pressure affected care transition experiences. Conclusion Further policy and practice attention, including embedding early intervention and prevention, strengthening links between levels of care by building on existing programs and educative and self-help initiatives for patients and carers is recommended to improve care transition experiences and optimise the impact of the GEM model of care. What is known about the topic? Older people with complex care needs experience frequent care transitions because of fluctuating health and fragmentation of aged care services in Australia. The GEM model of care promotes multidisciplinary, coordinated care to improve care transitions and outcomes for older people with complex care needs. What does this paper add? The present study highlights the crucial role of the GEM service, but found there is a lack of systemised linkages within and across levels of care that disrupts coordinated care and affects care transition experiences. There are underutilised opportunities for early intervention and prevention across the system, including the emergency department and general practice. What are the implications for practitioners? Comprehensive screening, assessment and intervention in primary and acute care, formalised transition processes and enhanced support for patients and carers to access timely, appropriate care is required to achieve quality, coordinated care transitions for older people.
Identifying protein complexes in PPI network using non-cooperative sequential game.
Maulik, Ujjwal; Basu, Srinka; Ray, Sumanta
2017-08-21
Identifying protein complexes from protein-protein interaction (PPI) networks is an important and challenging task in computational biology, as it helps in better understanding cellular mechanisms in various organisms. In this paper we propose a non-cooperative sequential game-based model for protein complex detection from PPI networks. The key hypothesis is that protein complex formation is driven by a mechanism that eventually optimizes the number of interactions within the complex, leading to a dense subgraph. The hypothesis is drawn from the observed network property known as small-world. The proposed multi-player game model translates the hypothesis into game strategies. The Nash equilibrium of the game corresponds to a network partition in which each protein either belongs to a complex or forms a singleton cluster. We further propose an algorithm to find the Nash equilibrium of the sequential game. Exhaustive experiments on synthetic benchmarks and real-life yeast networks evaluate the structural as well as biological significance of the network partitions.
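A toy sketch of sequential best-response dynamics on a graph, in the spirit of (but much simpler than) the proposed game: each node in turn joins the neighbouring cluster that maximizes its within-cluster edges, or remains a singleton, and a fixed point of these updates is a Nash equilibrium of this simplified game:

```python
# Sequential best-response clustering on a graph; not the authors' algorithm,
# only an illustration of the equilibrium idea.
import networkx as nx

def best_response_clusters(G: nx.Graph, sweeps: int = 20) -> dict:
    label = {v: v for v in G}                # start: every node a singleton
    for _ in range(sweeps):
        changed = False
        for v in G:
            counts = {}
            for u in G[v]:                   # edges into each neighbouring cluster
                counts[label[u]] = counts.get(label[u], 0) + 1
            best = max(counts, key=counts.get, default=label[v])
            if best != label[v] and counts.get(best, 0) > counts.get(label[v], 0):
                label[v], changed = best, True
        if not changed:                      # fixed point = Nash equilibrium
            break
    return label

print(best_response_clusters(nx.karate_club_graph()))
```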
Competitive sorption of carbonate and arsenic to hematite: combined ATR-FTIR and batch experiments.
Brechbühl, Yves; Christl, Iso; Elzinga, Evert J; Kretzschmar, Ruben
2012-07-01
The competitive sorption of carbonate and arsenic to hematite was investigated in closed-system batch experiments. The experimental conditions covered a pH range of 3-7, arsenate concentrations of 3-300 μM, and arsenite concentrations of 3-200 μM. Dissolved carbonate concentrations were varied by fixing the CO(2) partial pressure at 0.39 (atmospheric), 10, or 100 hPa. Sorption data were modeled with a one-site three plane model considering carbonate and arsenate surface complexes derived from ATR-FTIR spectroscopy analyses. Macroscopic sorption data revealed that in the pH range 3-7, carbonate was a weak competitor for both arsenite and arsenate. The competitive effect of carbonate increased with increasing CO(2) partial pressure and decreasing arsenic concentrations. For arsenate, sorption was reduced by carbonate only at slightly acidic to neutral pH values, whereas arsenite sorption was decreased across the entire pH range. ATR-FTIR spectra indicated the predominant formation of bidentate binuclear inner-sphere surface complexes for both sorbed arsenate and sorbed carbonate. Surface complexation modeling based on the dominant arsenate and carbonate surface complexes indicated by ATR-FTIR and assuming inner-sphere complexation of arsenite successfully described the macroscopic sorption data. Our results imply that in natural arsenic-contaminated systems where iron oxide minerals are important sorbents, dissolved carbonate may increase aqueous arsenite concentrations, but will affect dissolved arsenate concentrations only at neutral to alkaline pH and at very high CO(2) partial pressures. Copyright © 2012 Elsevier Inc. All rights reserved.
Power, Gerald; Miller, Anne
2007-01-01
Abstract: Cardiopulmonary bypass (CPB) is a complex task requiring high levels of practitioner expertise. Although some education standards exist, few are based on an analysis of perfusionists' problem-solving needs. This study shows the efficacy of work domain analysis (WDA) as a framework for analyzing perfusionists' conceptualization and problem-solving strategies. A WDA model of a CPB circuit was developed. A high-fidelity CPB simulator (Manbit) was used to present routine and oxygenator-failure scenarios to six proficient perfusionists. The video-cued recall technique was used to elicit perfusionists' conceptualization strategies. The resulting recall transcripts were coded using the WDA model and analyzed for associations between task completion times and patterns of conceptualization. The WDA model was able to account for and describe the thought process followed by each participant. It was also shown that, although there was no correlation between experience with CPB and the ability to change an oxygenator, there was a link between specific thought patterns and efficiency in undertaking this task. Simulators are widely used in many fields of human endeavor, and in this research we attempted to use WDA to gain insight into the complexities of human thought when engaged in the complex task of conducting CPB. The assumption that experience equates with ability is challenged; rather, it is shown that thought process is a more significant determinant of success in complex tasks. WDA in combination with a CPB simulator may be used to elucidate successful strategies for completing complex tasks. PMID:17972450
The effects of strain and stress state in hot forming of mg AZ31 sheet
NASA Astrophysics Data System (ADS)
Sherek, Paul A.; Carpenter, Alexander J.; Hector, Louis G.; Krajewski, Paul E.; Carter, Jon T.; Lasceski, Joshua; Taleff, Eric M.
Wrought magnesium alloys, such as AZ31 sheet, are of considerable interest for light-weighting of vehicle structural components. The poor room-temperature ductility of AZ31 sheet has been a hindrance to forming the complex part shapes necessary for practical applications. However, the outstanding formability of AZ31 sheet at elevated temperature provides an opportunity to overcome that problem. Complex demonstration components have already been produced at 450°C using gas-pressure forming. Accurate simulations of such hot, gas-pressure forming will be required for the design and optimization exercises necessary if this technology is to be implemented commercially. We report on experiments and simulations used to construct the accurate material constitutive models necessary for finite-element-method simulations. In particular, the effects of strain and stress state on plastic deformation of AZ31 sheet at 450°C are considered in material constitutive model development. Material models are validated against data from simple forming experiments.
The fluid trampoline: droplets bouncing on a soap film
NASA Astrophysics Data System (ADS)
Bush, John; Gilet, Tristan
2008-11-01
We present the results of a combined experimental and theoretical investigation of droplets falling onto a horizontal soap film. Both static and vertically vibrated soap films are considered. A quasi-static description of the soap film shape yields a force-displacement relation that provides excellent agreement with experiment, and allows us to model the film as a nonlinear spring. This approach yields an accurate criterion for the transition between droplet bouncing and crossing on the static film; moreover, it allows us to rationalize the observed constancy of the contact time and scaling for the coefficient of restitution in the bouncing states. On the vibrating film, a variety of bouncing behaviours were observed, including simple and complex periodic states, multiperiodicity and chaos. A simple theoretical model is developed that captures the essential physics of the bouncing process, reproducing all observed bouncing states. Quantitative agreement between model and experiment is deduced for simple periodic modes, and qualitative agreement for more complex periodic and chaotic bouncing states.
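As a cartoon of the 'film as nonlinear spring' idea (the stiffening force law and all parameter values below are our own hypothetical choices, not the force-displacement relation derived in the paper):

```python
# A droplet of mass m falling under gravity onto a film at z = 0; while in
# contact (z < 0) the film exerts a restoring force modelled here as an
# illustrative cubic stiffening spring.
import numpy as np
from scipy.integrate import solve_ivp

m, g = 4e-6, 9.81          # kg, m/s^2 (hypothetical droplet)
k1, k3 = 0.1, 5e4          # N/m, N/m^3 (hypothetical spring constants)

def rhs(t, y):
    z, v = y
    film_force = -(k1 * z + k3 * z**3) if z < 0 else 0.0  # only in contact
    return [v, -g + film_force / m]

sol = solve_ivp(rhs, (0, 0.2), [0.005, 0.0], max_step=1e-5)
print("max film deflection:", sol.y[0].min())
```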
Genotypic Complexity of Fisher’s Geometric Model
Hwang, Sungmin; Park, Su-Chan; Krug, Joachim
2017-01-01
Fisher’s geometric model was originally introduced to argue that complex adaptations must occur in small steps because of pleiotropic constraints. When supplemented with the assumption of additivity of mutational effects on phenotypic traits, it provides a simple mechanism for the emergence of genotypic epistasis from the nonlinear mapping of phenotypes to fitness. Of particular interest is the occurrence of reciprocal sign epistasis, which is a necessary condition for multipeaked genotypic fitness landscapes. Here we compute the probability that a pair of randomly chosen mutations interacts sign epistatically, which is found to decrease with increasing phenotypic dimension n, and varies nonmonotonically with the distance from the phenotypic optimum. We then derive expressions for the mean number of fitness maxima in genotypic landscapes comprised of all combinations of L random mutations. This number increases exponentially with L, and the corresponding growth rate is used as a measure of the complexity of the landscape. The dependence of the complexity on the model parameters is found to be surprisingly rich, and three distinct phases characterized by different landscape structures are identified. Our analysis shows that the phenotypic dimension, which is often referred to as phenotypic complexity, does not generally correlate with the complexity of fitness landscapes and that even organisms with a single phenotypic trait can have complex landscapes. Our results further inform the interpretation of experiments where the parameters of Fisher’s model have been inferred from data, and help to elucidate which features of empirical fitness landscapes can be described by this model. PMID:28450460
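For concreteness, one common parameterization of the model (illustrative; the paper's conventions may differ):

```latex
% Phenotypes are points z in R^n, fitness decays with distance from an
% optimum placed at the origin,
W(\mathbf{z}) = \exp\!\left(-\tfrac{1}{2}\,\lVert\mathbf{z}\rVert^{2}\right),
% mutations act additively on the phenotype, so a set S of mutations
% \delta_i applied to a wild-type phenotype z_0 has fitness
W(S) = \exp\!\Bigl(-\tfrac{1}{2}\,\bigl\lVert \mathbf{z}_0
       + \textstyle\sum_{i\in S}\boldsymbol{\delta}_i \bigr\rVert^{2}\Bigr),
% and all genotypic epistasis arises from the nonlinear phenotype-to-fitness map.
```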
Bhatla, Puneet; Tretter, Justin T; Ludomirsky, Achi; Argilla, Michael; Latson, Larry A; Chakravarti, Sujata; Barker, Piers C; Yoo, Shi-Joon; McElhinney, Doff B; Wake, Nicole; Mosca, Ralph S
2017-01-01
Rapid prototyping facilitates comprehension of complex cardiac anatomy. However, determining when this additional information proves instrumental in patient management remains a challenge. We describe our experience with patient-specific anatomic models created using rapid prototyping from various imaging modalities, suggesting their utility in surgical and interventional planning in congenital heart disease (CHD). Virtual and physical 3-dimensional (3D) models were generated from CT or MRI data, using commercially available software for patients with complex muscular ventricular septal defects (CMVSD) and double-outlet right ventricle (DORV). Six patients with complex anatomy and uncertainty of the optimal management strategy were included in this study. The models were subsequently used to guide management decisions, and the outcomes reviewed. 3D models clearly demonstrated the complex intra-cardiac anatomy in all six patients and were utilized to guide management decisions. In the three patients with CMVSD, one underwent successful endovascular device closure following a prior failed attempt at transcatheter closure, and the other two underwent successful primary surgical closure with the aid of 3D models. In all three cases of DORV, the models provided better anatomic delineation and additional information that altered or confirmed the surgical plan. Patient-specific 3D heart models show promise in accurately defining intra-cardiac anatomy in CHD, specifically CMVSD and DORV. We believe these models improve understanding of the complex anatomical spatial relationships in these defects and provide additional insight for pre/intra-interventional management and surgical planning.
Computer modeling describes gravity-related adaptation in cell cultures.
Alexandrov, Ludmil B; Alexandrova, Stoyana; Usheva, Anny
2009-12-16
Questions about the changes of biological systems in response to hostile environmental factors are important but not easy to answer. Often, the traditional description with differential equations is difficult due to the overwhelming complexity of the living systems. Another way to describe complex systems is by simulating them with phenomenological models such as the well-known evolutionary agent-based model (EABM). Here we developed an EABM to simulate cell colonies as a multi-agent system that adapts to hyper-gravity in starvation conditions. In the model, the cell's heritable characteristics are generated and transferred randomly to offspring cells. After a qualitative validation of the model at normal gravity, we simulate cellular growth in hyper-gravity conditions. The obtained data are consistent with previously confirmed theoretical and experimental findings for bacterial behavior in environmental changes, including the experimental data from the microgravity Atlantis and the Hypergravity 3000 experiments. Our results demonstrate that it is possible to utilize an EABM with realistic qualitative description to examine the effects of hypergravity and starvation on complex cellular entities.
Experimental and modeling study of the uranium (VI) sorption on goethite.
Missana, Tiziana; García-Gutiérrez, Miguel; Maffiotte, Cesar
2003-04-15
Acicular goethite was synthesized in the laboratory and its main physicochemical properties (composition, microstructure, surface area, and surface charge) were analyzed as a previous step to sorption experiments. The stability of the oxide, under the conditions used in sorption studies, was also investigated. The sorption of U(VI) onto goethite was studied under O(2)- and CO(2)-free atmosphere and in a wide range of experimental conditions (pH, ionic strength, radionuclide, and solid concentration), in order to assess the validity of different surface complexation models available for the interpretation of sorption data. Three different models were used to fit the experimental data. The first two models were based on the diffuse double layer concept. The first one (Model 1) considered two different monodentate complexes with the goethite surface and the second (Model 2) a single binuclear bidentate complex. A nonelectrostatic (NE) approach was used as a third model and, in that case, the same species considered in Model 1 were used. The results showed that all the models are able to describe the sorption behavior fairly well as a function of pH, electrolyte concentration, and U(VI) concentration. However, Model 2 fails in the description of the uranium sorption behavior as a function of the sorbent concentration. This demonstrates the importance of checking the validity of any surface complexation model under the widest possible range of experimental conditions.
Pattern recognition tool based on complex network-based approach
NASA Astrophysics Data System (ADS)
Casanova, Dalcimar; Backes, André Ricardo; Martinez Bruno, Odemir
2013-02-01
This work proposes a generalization of the authors' earlier method, 'A complex network-based approach for boundary shape analysis'. Instead of modelling a contour as a graph and using complex network rules to characterize it, we generalize the technique into a mathematical tool for characterizing signals, curves and sets of points. To evaluate the descriptive power of the proposal, an experiment on plant identification based on leaf-vein images is conducted. Leaf venation is a taxonomic characteristic used for plant identification, and these structures are complex and difficult to represent as signals or curves, and hence to analyze with classical pattern recognition approaches. Here, we model the veins as a set of points and represent them as graphs. As features, we use degree and joint-degree measurements over a dynamic evolution of the network. The results demonstrate that the technique has good discriminatory power and can be used for plant identification, as well as for other complex pattern recognition tasks.
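A minimal sketch of the point-set-to-network idea (our simplification; the paper's thresholding scheme and joint-degree features are richer):

```python
# Connect points whose pairwise distance falls below a growing threshold and
# record degree statistics at each stage of this "dynamic evolution" as a
# feature vector for classification.
import numpy as np

def network_degree_features(points: np.ndarray, thresholds) -> list:
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    features = []
    for t in thresholds:
        adj = (d <= t) & (d > 0)              # adjacency at this threshold
        degrees = adj.sum(axis=1)
        features += [float(degrees.mean()), int(degrees.max())]
    return features

rng = np.random.default_rng(0)
pts = rng.random((100, 2))                    # stand-in for leaf-vein points
print(network_degree_features(pts, thresholds=np.linspace(0.05, 0.3, 6)))
```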
Müller, Katharina; Gröschel, Annett; Rossberg, André; Bok, Frank; Franzen, Carola; Brendler, Vinzenz; Foerstendorf, Harald
2015-02-17
Hematite plays a decisive role in regulating the mobility of contaminants in rocks and soils. The Np(V) reactions at the hematite-water interface were comprehensively investigated by a combined approach of in situ vibrational spectroscopy, X-ray absorption spectroscopy and surface complexation modeling. A variety of sorption parameters such as Np(V) concentration, pH, ionic strength, and the presence of bicarbonate was considered. Time-resolved IR spectroscopic sorption experiments at the iron oxide-water interface evidenced the formation of a single monomeric Np(V) inner-sphere sorption complex. EXAFS provided complementary information on bidentate edge-sharing coordination. In the presence of atmospherically derived bicarbonate, the formation of the bis-carbonato inner-sphere complex was confirmed, supporting previous EXAFS findings [1]. The obtained molecular structure allows more reliable surface complexation modeling of recent and future macroscopic data. Such confident modeling is mandatory for evaluating water contamination and for predicting the fate and migration of radioactive contaminants in the subsurface environment, as might occur in the vicinity of a radioactive waste repository or a reprocessing plant.
He, Yixuan; Kodali, Anita; Wallace, Dorothy I
2018-06-14
Neuroblastoma is the leading cause of cancer death in young children. Although treatment for neuroblastoma has improved, the 5-year survival rate of patients still remains less than half. Recent studies have indicated that bevacizumab, an anti-VEGF drug used in treatment of several other cancer types, may be effective for treating neuroblastoma as well. However, its effect on neuroblastoma has not been well characterized. While traditional experiments are costly and time-consuming, mathematical models are capable of simulating complex systems quickly and inexpensively. In this study, we present a model of vascular tumor growth of neuroblastoma IMR-32 that is complex enough to replicate experimental data across a range of tumor cell properties measured in a suite of in vitro and in vivo experiments. The model provides quantitative insight into tumor vasculature, predicting a linear relationship between vasculature and tumor volume. The tumor growth model was coupled with known pharmacokinetics and pharmacodynamics of the VEGF blocker bevacizumab to study its effect on neuroblastoma growth dynamics. The results of our model suggest that total administered bevacizumab concentration per week, as opposed to dosage regimen, is the major determining factor in tumor suppression. Our model also establishes an exponentially decreasing relationship between administered bevacizumab concentration and tumor growth rate.
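A sketch of the qualitative relationships reported here, with entirely hypothetical parameter values: the tumor growth rate decreases exponentially with administered bevacizumab concentration, and vasculature is taken to scale linearly with tumor volume:

```python
# Toy tumor-growth ODE with an exponential drug effect on the growth rate;
# all constants are made up for illustration.
import numpy as np
from scipy.integrate import solve_ivp

r0, k_drug, alpha = 0.05, 0.8, 0.1   # 1/day, 1/(mg/kg), vessel/volume ratio

def tumor_rhs(t, V, C):
    growth_rate = r0 * np.exp(-k_drug * C)   # exponential suppression by drug
    return growth_rate * V

for C in (0.0, 1.0, 3.0):                    # administered concentrations (mg/kg)
    sol = solve_ivp(tumor_rhs, (0, 60), [1.0], args=(C,))
    V_end = sol.y[0, -1]
    print(f"C={C}: volume={V_end:.2f}, vasculature~{alpha * V_end:.2f}")
```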
Safari, Leila; Patrick, Jon D
2018-06-01
This paper reports on a generic framework that provides clinicians with the ability to conduct complex analyses on elaborate research topics, using cascaded queries to resolve internal time-event dependencies in the research questions, as an extension to the proposed Clinical Data Analytics Language (CliniDAL). A cascaded query model is proposed to resolve internal time-event dependencies in queries, with up to five levels of criteria: a query to define subjects to be admitted into a study, followed by a query to define the time span of the experiment, and up to three further queries to define control groups, control variables and output variables, which together simulate a real scientific experiment. Depending on the complexity of the research question, the cascaded query model has the flexibility to merge some lower-level queries for simple research questions, or to add a nested query at each level to compose more complex queries. Three different scenarios (one of which contains two studies) are described and used to evaluate the proposed solution. CliniDAL's complex analysis solution enables answering complex queries with time-event dependencies in at most a few hours, a task that would take many days manually. An evaluation of the results of the research studies, based on a comparison between the CliniDAL and SQL solutions, reveals the high usability and efficiency of CliniDAL's solution. Copyright © 2018 Elsevier Inc. All rights reserved.
Teaching Qualitative Research for Human Services Students: A Three-Phase Model
ERIC Educational Resources Information Center
Goussinsky, Ruhama; Reshef, Arie; Yanay-Ventura, Galit; Yassour-Borochowitz, Dalit
2011-01-01
Qualitative research is an inherent part of the human services profession, since it emphasizes the great and multifaceted complexity characterizing human experience and the sociocultural context in which humans act. In the department of human services at Emek Yezreel College, Israel, we have developed a three-phase model to ensure a relatively…
Enabling the Classroom and the Curriculum: Higher Education, Literary Studies and Disability
ERIC Educational Resources Information Center
Bolt, David
2017-01-01
In this article the tripartite model of disability is applied to the lived experience of twenty-first-century higher education. The tripartite model facilitates a complex understanding of disability that recognises assumptions and discrimination but not at the cost of valued identity. This being so, not only the normative positivisms and…
An Undergraduate Research Experience Studying Ras and Ras Mutants
ERIC Educational Resources Information Center
Griffeth, Nancy; Batista, Naralys; Grosso, Terri; Arianna, Gianluca; Bhatia, Ravnit; Boukerche, Faiza; Crispi, Nicholas; Fuller, Neno; Gauza, Piotr; Kingsbury, Lyle; Krynski, Kamil; Levine, Alina; Ma, Rui Yan; Nam, Jennifer; Pearl, Eitan; Rosa, Alessandro; Salarbux, Stephanie; Sun, Dylan
2016-01-01
Each January from 2010 to 2014, an undergraduate workshop on modeling biological systems was held at Lehman College of the City University of New York. The workshops were funded by a National Science Foundation (NSF) Expedition in Computing, "Computational Modeling and Analysis of Complex Systems (CMACS)." The primary goal was to…
Modeling disturbance and succession in forest landscapes using LANDIS: introduction
Brian R. Sturtevant; Eric J. Gustafson; Hong S. He
2004-01-01
Modeling forest landscape change is challenging because it involves the interaction of a variety of factors and processes, such as climate, succession, disturbance, and management. These processes occur at various spatial and temporal scales, and the interactions can be complex on heterogeneous landscapes. Because controlled field experiments designed to investigate...
Working Memory Span Development: A Time-Based Resource-Sharing Model Account
ERIC Educational Resources Information Center
Barrouillet, Pierre; Gavens, Nathalie; Vergauwe, Evie; Gaillard, Vinciane; Camos, Valerie
2009-01-01
The time-based resource-sharing model (P. Barrouillet, S. Bernardin, & V. Camos, 2004) assumes that during complex working memory span tasks, attention is frequently and surreptitiously switched from processing to reactivate decaying memory traces before their complete loss. Three experiments involving children from 5 to 14 years of age…
Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn
2015-01-01
Objectives The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives drawing from the qualitative evaluation of 5 complex healthcare interventions. Setting Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. Participants We included 62 interviews from 44 patients and 18 non-clinical caregivers. Intervention Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. Outcome measures This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. Results We identified 5 broad themes that capture the patients’ experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients’ experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Conclusions Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models are being implemented simultaneously that may have shared features and target similar patients, and a qualitative meta-evaluation approach, thus offers an opportunity for cumulative learning at a system level in addition to informing intervention design and modification. PMID:26351182
Information Management for Unmanned Systems: Combining DL-Reasoning with Publish/Subscribe
NASA Astrophysics Data System (ADS)
Moser, Herwig; Reichelt, Toni; Oswald, Norbert; Förster, Stefan
Sharing capabilities and information between collaborating entities by using modern information and communication technology is a core principle in complex distributed civil or military mission scenarios. Previous work proved the suitability of Service-oriented Architectures for modelling and sharing the participating entities' capabilities. Albeit providing a satisfactory model for capability sharing, pure service-orientation curtails expressiveness for information exchange as opposed to dedicated data-centric communication principles. In this paper we introduce an Information Management System which combines OWL ontologies and automated reasoning with Publish/Subscribe systems, providing a shared but decoupled data model. While confirming existing related research results, we emphasise the novel application of, and the lack of practical experience with, Semantic Web technologies in areas other than originally intended, namely aiding decision support and software design in the context of a mission scenario for an unmanned system. Experiments within a complex simulation environment show the immediate benefits of a semantic information-management and -dissemination platform: clear separation of concerns in code and data model, increased service re-usability and extensibility, and regulation of data flow and the respective system behaviour through declarative rules.
Phase-field simulations of GaN growth by selective area epitaxy on complex mask geometries
Aagesen, Larry K.; Coltrin, Michael Elliott; Han, Jung; ...
2015-05-15
Three-dimensional phase-field simulations of GaN growth by selective area epitaxy were performed. The model includes a crystallographic-orientation-dependent deposition rate and arbitrarily complex mask geometries. The orientation-dependent deposition rate can be determined from experimental measurements of the relative growth rates of low-index crystallographic facets. Growth on various complex mask geometries was simulated on both c-plane and a-plane template layers. Agreement was observed between simulations and experiments, including for the complex phenomena occurring at the intersections between facets. The sources of the discrepancies between simulated and experimental morphologies were also investigated. We found that the model provides a route to optimize masks and processing conditions during materials synthesis for solar cells, light-emitting diodes, and other electronic and opto-electronic applications.
Menolascina, Filippo; Bellomo, Domenico; Maiwald, Thomas; Bevilacqua, Vitoantonio; Ciminelli, Caterina; Paradiso, Angelo; Tommasi, Stefania
2009-10-15
Mechanistic models are becoming more and more popular in Systems Biology; identification and control of models underlying biochemical pathways of interest in oncology is a primary goal in this field. Unfortunately the scarce availability of data still limits our understanding of the intrinsic characteristics of complex pathologies like cancer: acquiring information for a system-level understanding of complex reaction networks is time consuming and expensive. Stimulus response experiments (SRE) have been used to gain a deeper insight into the details of biochemical mechanisms underlying cell life and functioning. Optimisation of the input time-profile, however, still remains a major area of research due to the complexity of the problem and its relevance for the task of information retrieval in systems biology-related experiments. We have addressed the problem of quantifying the information associated with an experiment using the Fisher Information Matrix, and we have proposed an optimal experimental design strategy based on an evolutionary algorithm to cope with the problem of information gathering in Systems Biology. On the basis of theoretical results from control systems theory, we have studied the dynamical properties of the signals to be used in cell stimulation. The results of this study have been used to develop a microfluidic device for the automation of the process of cell stimulation for system identification. We have applied the proposed approach to the Epidermal Growth Factor Receptor pathway and observed that it minimises the amount of parametric uncertainty associated with the identified model. A statistical framework based on Monte-Carlo estimations of the uncertainty ellipsoid confirmed the superiority of optimally designed experiments over canonical inputs. The proposed approach can be easily extended to multiobjective formulations that can also take advantage of identifiability analysis. Moreover, the availability of fully automated microfluidic platforms explicitly developed for the task of biochemical model identification will hopefully reduce the effects of the 'data rich--data poor' paradox in Systems Biology.
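To make the Fisher-information step above concrete, here is a minimal sketch, not the authors' implementation, of how the information an experiment carries about a model parameter can be quantified and used to rank candidate stimulus designs. The exponential observation model, parameter values, and noise level are illustrative assumptions.

```python
import numpy as np

# Toy observation model: y(t) = exp(-k * u * t), with unknown rate k
# and a design variable u (stimulus amplitude). Sensitivities are
# computed analytically here; for real pathway models they would come
# from solving sensitivity ODEs alongside the state equations.
def sensitivity(t, k, u):
    # dy/dk for y = exp(-k*u*t)
    return -u * t * np.exp(-k * u * t)

def fisher_information(t, k, u, sigma=0.05):
    # Scalar FIM for one parameter: sum of squared sensitivities / sigma^2
    s = sensitivity(t, k, u)
    return (s @ s) / sigma**2

t = np.linspace(0.0, 10.0, 50)    # sampling times
k_nominal = 0.3                   # nominal parameter value

# Compare two candidate designs: the larger the information, the smaller
# the Cramer-Rao bound 1/FIM on the achievable variance of the estimate.
for u in (0.5, 2.0):
    fim = fisher_information(t, k_nominal, u)
    print(f"u = {u}: FIM = {fim:.1f}, CRB on var(k) = {1/fim:.2e}")
```

An evolutionary design strategy like the one described would search over input time-profiles to maximise a criterion of this kind.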
Brain functional BOLD perturbation modelling for forward fMRI and inverse mapping
Robinson, Jennifer; Calhoun, Vince
2018-01-01
Purpose To computationally separate dynamic brain functional BOLD responses from static background in brain functional activity, for forward fMRI signal analysis and inverse mapping. Methods A brain functional activity is represented in terms of magnetic source by a perturbation model: χ = χ0 + δχ, with δχ for BOLD magnetic perturbations and χ0 for background. A brain fMRI experiment produces a timeseries of complex-valued images (T2* images), from which we extract the BOLD phase signals (denoted by δP) by a complex division. By solving an inverse problem, we reconstruct the BOLD δχ dataset from the δP dataset, and the brain χ distribution from an (unwrapped) T2* phase image. Given a 4D dataset of task BOLD fMRI, we implement brain functional mapping by temporal correlation analysis. Results Through a high-field (7T) and high-resolution (0.5 mm in plane) task fMRI experiment, we demonstrated in detail the BOLD perturbation model for fMRI phase signal separation (P + δP) and for reconstructing the intrinsic brain magnetic source (χ and δχ). We also applied the approach to a low-field (3T) and low-resolution (2 mm) task fMRI experiment in support of single-subject fMRI studies. Our experiments show that the δχ-depicted functional map reveals bidirectional BOLD χ perturbations during task performance. Conclusions The BOLD perturbation model allows us to separate the fMRI phase signal (by complex division) and to perform inverse mapping for pure BOLD δχ reconstruction for intrinsic functional χ mapping. The full brain χ reconstruction (from unwrapped fMRI phase) provides a new brain tissue image that allows one to scrutinize brain tissue idiosyncrasy for the pure BOLD δχ response through automatic function/structure co-localization. PMID:29351339
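The complex-division step described above is simple to demonstrate. A minimal sketch with synthetic data (array size and perturbation amplitude invented for the demonstration): dividing a task T2* volume by a baseline volume cancels the shared background phase, leaving the BOLD phase perturbation δP.

```python
import numpy as np

# Synthetic complex-valued T2* image: a static background phase plus a
# small BOLD-like phase perturbation in an "active" region.
rng = np.random.default_rng(0)
shape = (64, 64)
background = np.exp(1j * rng.uniform(-np.pi, np.pi, shape))  # static phase P
delta_phase = np.zeros(shape)
delta_phase[20:30, 20:30] = 0.05           # BOLD perturbation deltaP (rad)
task_volume = background * np.exp(1j * delta_phase)

# Complex division removes the shared background phase, so the angle of
# the quotient is the BOLD phase signal deltaP directly -- no unwrapping
# needed as long as |deltaP| < pi.
quotient = task_volume / background
recovered = np.angle(quotient)
print(np.allclose(recovered, delta_phase, atol=1e-12))  # True
```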
Cardiovascular system simulation in biomedical engineering education.
NASA Technical Reports Server (NTRS)
Rideout, V. C.
1972-01-01
The use of complex cardiovascular system models, in conjunction with a large hybrid computer, in biomedical engineering courses is described. A cardiovascular blood pressure-flow model, driving a compartment model for the study of dye transport, was set up on the computer for use as a laboratory exercise by students who did not have the computer experience or skill to easily set up such a simulation, involving some 27 differential equations running at 'real time' rate. The students were given detailed instructions regarding the model, and were then able to study effects such as those due to septal and valve defects upon the pressure, flow, and dye dilution curves. The success of this experiment in the use of involved models in engineering courses suggests that this type of laboratory exercise might also be considered for use in physiology courses as an adjunct to animal experiments.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
Bai, Shuming; Song, Kai; Shi, Qiang
2015-05-21
Observations of oscillatory features in the 2D spectra of several photosynthetic complexes have led to divergent opinions on their origins, including electronic coherence, vibrational coherence, and vibronic coherence. In this work, the effects of these different types of quantum coherence on ultrafast pump-probe polarization anisotropy are investigated and distinguished. We first simulate the isotropic pump-probe signal and anisotropy decay of the Fenna-Matthews-Olson (FMO) complex using a model with only electronic coherence at low temperature and obtain the same coherence time as in the previous experiment. Then, three model dimer systems with different prespecified quantum coherence are simulated, and the results show that their different spectral characteristics can be used to determine the type of coherence during the spectral process. Finally, we simulate model systems with different electronic-vibrational couplings and reveal the conditions under which long-time vibronic coherence can be observed in systems like the FMO complex.
NASA Astrophysics Data System (ADS)
Parikh, H. M.; Carlton, A. G.; Zhang, H.; Kamens, R.; Vizuete, W.
2011-12-01
Secondary organic aerosol (SOA) is simulated for 6 outdoor smog chamber experiments using a SOA model based on a kinetic chemical mechanism in conjunction with a volatility basis set (VBS) approach. The experiments include toluene, a non-SOA-forming hydrocarbon mixture, diesel exhaust or meat cooking emissions, and NOx, and are performed under varying conditions of relative humidity. SOA formation from toluene is modeled using a condensed kinetic aromatic mechanism that includes partitioning of lumped semi-volatile products in the particle organic phase and incorporates particle aqueous-phase chemistry to describe uptake of glyoxal and methylglyoxal. Modeling with the kinetic mechanism alone, along with primary organic aerosol (POA) from diesel exhaust (DE)/meat cooking (MC), fails to simulate the rapid SOA formation during the beginning hours of the experiments. Including a VBS approach with the kinetic mechanism to characterize the emissions and chemistry of the complex mixture of intermediate-volatility organic compounds (IVOCs) from DE/MC substantially improves SOA predictions when compared with observed data. The VBS model includes photochemical aging of IVOCs and evaporation of POA after dilution. The relative contribution of SOA mass from DE/MC is as high as 95% in the morning, but decreases substantially after mid-afternoon. For high-humidity experiments, the aqueous-phase SOA fraction dominates the total SOA mass at the end of the day (approximately 50%). In summary, the combined kinetic and VBS approach provides a new and improved framework to semi-explicitly model SOA from VOC precursors while handling complex emission mixtures comprising hundreds of individual chemical species.
Human systems dynamics: Toward a computational model
NASA Astrophysics Data System (ADS)
Eoyang, Glenda H.
2012-09-01
A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high-dimensional, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, the CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and a foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high-dimensional, and nonlinear conceptual model of the complex dynamics of human systems.
Proceedings of the Ninth Annual Software Engineering Workshop
NASA Technical Reports Server (NTRS)
1984-01-01
Experiences in measurement, utilization, and evaluation of software methodologies, models, and tools are discussed. NASA's involvement in ever larger and more complex systems, like the space station project, provides a motive for the support of software engineering research and the exchange of ideas in such forums. The topics of current SEL research are software error studies, experiments with software development, and software tools.
ERIC Educational Resources Information Center
Heddy, Benjamin C.; Sinatra, Gale M.
2013-01-01
Teaching and learning about complex scientific content, such as biological evolution, is challenging in part because students have a difficult time seeing the relevance of evolution in their everyday lives. The purpose of this study was to explore the effectiveness of the Teaching for Transformative Experiences in Science (TTES) model (Pugh, 2002)…
A Colorful Mixing Experiment in a Stirred Tank Using Non-Newtonian Blue Maize Flour Suspensions
ERIC Educational Resources Information Center
Trujillo-de Santiago, Grissel; Rojas-de Gante, Cecilia; García-Lara, Silverio; Ballescá-Estrada, Adriana; Alvarez, Mario Moisés
2014-01-01
A simple experiment designed to study mixing of a material of complex rheology in a stirred tank is described. Non-Newtonian suspensions of blue maize flour that naturally contain anthocyanins have been chosen as a model fluid. These anthocyanins act as a native, wide spectrum pH indicator exhibiting greenish colors in alkaline environments, blue…
Oberauer, Klaus; Lewandowsky, Stephan
2016-11-01
The article reports four experiments with complex-span tasks in which encoding of memory items alternates with processing of distractors. The experiments test two assumptions of a computational model of complex span, SOB-CS: (1) distractor processing impairs memory because distractors are encoded into working memory, thereby interfering with memoranda; and (2) free time following distractors is used to remove them from working memory by unbinding their representations from list context. Experiment 1 shows that distractors are erroneously chosen for recall more often than not-presented stimuli, demonstrating that distractors are encoded into memory. Distractor intrusions declined with longer free time, as predicted by distractor removal. Experiment 2 shows these effects even when distractors precede the memory list, ruling out an account based on selective rehearsal of memoranda during free time. Experiments 3 and 4 test the notion that distractors decay over time. Both experiments show that, contrary to the notion of distractor decay, the chance of a distractor intruding at test does not decline with increasing time since encoding of that distractor. Experiment 4 provides additional evidence against the prediction from distractor decay that distractor intrusions decline over an unfilled retention interval. Taken together, the results support SOB-CS and rule out alternative explanations. Data and simulation code are available on Open Science Framework: osf.io/3ewh7. Copyright © 2016 Elsevier B.V. All rights reserved.
A Computational Approach for Modeling Neutron Scattering Data from Lipid Bilayers
Carrillo, Jan-Michael Y.; Katsaras, John; Sumpter, Bobby G.; ...
2017-01-12
Biological cell membranes are responsible for a range of structural and dynamical phenomena crucial to a cell's well-being and its associated functions. Due to the complexity of cell membranes, lipid bilayer systems are often used as biomimetic models. These systems have led to significant insights into vital membrane phenomena such as domain formation, passive permeation and protein insertion. Experimental observations of membrane structure and dynamics are, however, limited in resolution, both spatially and temporally. Importantly, computer simulations are starting to play a more prominent role in interpreting experimental results, enabling a molecular understanding of lipid membranes. In particular, the synergy between scattering experiments and simulations offers opportunities for new discoveries in membrane physics, as the length and time scales probed by molecular dynamics (MD) simulations parallel those of experiments. Here, we describe a coarse-grained MD simulation approach that mimics neutron scattering data from large unilamellar lipid vesicles over a range of bilayer rigidity. Specifically, we simulate vesicle form factors and membrane thickness fluctuations determined from small angle neutron scattering (SANS) and neutron spin echo (NSE) experiments, respectively. Our simulations accurately reproduce trends from experiments and lay the groundwork for investigations of more complex membrane systems.
Off-design Performance Analysis of Multi-Stage Transonic Axial Compressors
NASA Astrophysics Data System (ADS)
Du, W. H.; Wu, H.; Zhang, L.
Because of the complex flow fields and component interactions in modern gas turbine engines, extensive experiments are required to validate their performance and stability. The experimental process can become expensive and complex. Modeling and simulation of gas turbine engines are a way to reduce experimental costs, provide fidelity and enhance the quality of essential experiments. The flow field of a transonic compressor contains all the flow aspects that are difficult to predict: boundary layer transition and separation, shock/boundary-layer interactions, and large flow unsteadiness. Accurate off-design performance prediction for transonic axial compressors is especially difficult, due in large part to three-dimensional blade design and the resulting flow field. Although recent advancements in computer capacity have brought computational fluid dynamics to the forefront of turbomachinery design and analysis, the grid and turbulence model still limit Reynolds-averaged Navier-Stokes (RANS) approximations in the multi-stage transonic axial compressor flow field. Streamline curvature methods therefore remain the dominant numerical approach and an important tool for turbomachinery analysis and design, and it is generally accepted that streamline curvature solution techniques will provide satisfactory flow prediction as long as the losses, deviation and blockage are accurately predicted.
A simple analytical infiltration model for short-duration rainfall
NASA Astrophysics Data System (ADS)
Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming
2017-12-01
Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process model (SHIP model). The infiltration simulated by 5 models (i.e., the SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange models) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions. The absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated within a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
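For reference, the Philip model used in the comparison above has a closed form; the SHIP model itself is not reproduced here. A minimal sketch, with sorptivity and steady-term values assumed for illustration rather than taken from the paper:

```python
import numpy as np

def philip_cumulative(t, S, A):
    """Philip two-term cumulative infiltration I(t) = S*sqrt(t) + A*t.
    S: sorptivity (cm/h^0.5), A: steady-state term (cm/h)."""
    return S * np.sqrt(t) + A * t

def philip_rate(t, S, A):
    """Infiltration rate f(t) = dI/dt = 0.5*S/sqrt(t) + A."""
    return 0.5 * S / np.sqrt(t) + A

# Illustrative parameters for a loam-like soil (assumed, not fitted)
S, A = 2.0, 0.6
t = np.linspace(0.05, 1.0, 20)          # short-duration rainfall: up to 1 h
print(philip_cumulative(t[-1], S, A))   # cumulative depth after 1 h (cm)
print(philip_rate(t[0], S, A))          # high early-time rate (cm/h)
```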
Action Centered Contextual Bandits.
Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan
2017-12-01
Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
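A minimal sketch of the two-part model idea, assuming a randomized binary treatment: centering the regression feature by the treatment probability removes an arbitrarily complex baseline from the estimate of the linear treatment effect. This is a generic action-centering construction, not the authors' algorithm, and all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3                                      # context dimension
theta_true = np.array([0.5, -0.2, 0.3])    # treatment-effect parameters

def baseline(x):
    # Arbitrary complex, nonlinear baseline reward (left unmodeled)
    return np.sin(x).sum() + 0.5 * x[0] * x[1]

# Ridge-regression estimate of the treatment effect only: regress reward
# on z = (a - pi) * x, where a is the action (0/1) and pi the probability
# of a = 1. Centering by pi makes E[z * baseline] = 0, so the baseline
# drops out regardless of its form.
A, b = np.eye(d), np.zeros(d)
for _ in range(5000):
    x = rng.normal(size=d)
    pi = 0.5                                   # randomization probability
    a = rng.random() < pi                      # binary action
    reward = baseline(x) + a * (theta_true @ x) + 0.1 * rng.normal()
    z = (a - pi) * x                           # action-centered feature
    A += np.outer(z, z)
    b += z * reward

theta_hat = np.linalg.solve(A, b)
print(theta_hat)  # close to theta_true despite the nonlinear baseline
```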
Heterogeneity effects in visual search predicted from the group scanning model.
Macquistan, A D
1994-12-01
The group scanning model of feature integration theory (Treisman & Gormican, 1988) suggests that subjects search visual displays serially by groups, but process items within each group in parallel. The size of these groups is determined by the discriminability of the targets in the background of distractors. When the target is poorly discriminable, the size of the scanned group will be small, and search will be slow. The model predicts that group size will be smallest when targets of an intermediate value on a perceptual dimension are presented in a heterogeneous background of distractors that have higher and lower values on the same dimension. Experiment 1 demonstrates this effect. Experiment 2 controls for a possible confound of decision complexity in Experiment 1. For simple feature targets, the group scanning model provides a good account of the visual search process.
Recent "Ground Testing" Experiences in the National Full-Scale Aerodynamics Complex
NASA Technical Reports Server (NTRS)
Zell, Peter; Stich, Phil; Sverdrup, Jacobs; George, M. W. (Technical Monitor)
2002-01-01
The large test sections of the National Full-scale Aerodynamics Complex (NFAC) wind tunnels provide ideal controlled wind environments to test ground-based objects and vehicles. Though this facility was designed and provisioned primarily for aeronautical testing requirements, several experiments have been designed to utilize existing model mount structures to support "non-flying" systems. This presentation will discuss some of the ground-based testing capabilities of the facility and provide examples of ground-based tests conducted in the facility to date. It will also address some future work envisioned and solicit input from the SATA membership on ways to improve the service that NASA makes available to customers.
NASA Technical Reports Server (NTRS)
Bremmer, D. A.
1986-01-01
The feasibility of some off-the-shelf microprocessors and state-of-the-art software is assessed (1) as a development system for the principal investigator (PI) in the design of the experiment model, (2) as an example of available technology applicable to future PIs' experiments, (3) as a system capable of being interactive in the PCTC's simulation of the dedicated experiment processor (DEP), preferably by bringing the PI's DEP software directly into the simulation model, (4) as a system having bus compatibility with host VAX simulation computers, (5) as a system readily interfaced with mock-up panels and information displays, and (6) as a functional system for post-mission data analysis.
An experimental approach to the fundamental principles of hemodynamics.
Pontiga, Francisco; Gaytán, Susana P
2005-09-01
An experimental model has been developed to give students hands-on experience with the fundamental laws of hemodynamics. The proposed experimental setup is of simple construction but permits precise measurement of the physical variables involved. The model consists of a series of experiments in which different basic phenomena are quantitatively investigated, such as the pressure drop in a long straight vessel and in an obstructed vessel, the transition from laminar to turbulent flow, the association of vessels in vascular networks, or the generation of a critical stenosis. Through these experiments, students acquire a direct appreciation of the importance of the parameters involved in the relationship between pressure and flow rate, thus facilitating the comprehension of more complex problems in hemodynamics.
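Two of the phenomena listed, the pressure drop in a long straight vessel and the laminar-turbulent transition, reduce to textbook formulas students can check against such an apparatus. A sketch with assumed, roughly blood-like values (not measurements from the described setup):

```python
import numpy as np

def poiseuille_dp(Q, L, r, mu):
    """Pressure drop (Pa) for laminar flow: dP = 8*mu*L*Q / (pi*r^4)."""
    return 8.0 * mu * L * Q / (np.pi * r**4)

def reynolds(Q, r, rho, mu):
    """Re = rho*v*D/mu with mean velocity v = Q/(pi*r^2) and D = 2r."""
    v = Q / (np.pi * r**2)
    return rho * v * (2 * r) / mu

# Illustrative values: a blood-like fluid in a narrow tube
mu, rho = 3.5e-3, 1060.0        # viscosity (Pa*s), density (kg/m^3)
Q, L, r = 1e-6, 0.1, 1.5e-3     # flow rate (m^3/s), length (m), radius (m)

print(f"dP = {poiseuille_dp(Q, L, r, mu):.1f} Pa")
print(f"Re = {reynolds(Q, r, rho, mu):.0f}  (laminar if well below ~2300)")
```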
The edge complex: implicit memory for figure assignment in shape perception.
Peterson, Mary A; Enns, James T
2005-05-01
Viewing a stepped edge is likely to prompt the perceptual assignment of one side of the edge as figure. This study demonstrates that even a single brief glance at a novel edge gives rise to an implicit memory regarding which side was seen as figure; this edge complex enters into the figure assignment process the next time the edge is encountered, both speeding same-different judgments when the figural side is repeated and slowing these judgments when the new figural side is identical to the former ground side (Experiments 1A and 1B). These results were obtained even when the facing direction of the repeated edge was mirror reversed (Experiment 2). This study shows that implicit measures can reveal the effects of past experience on figure assignment, following a single prior exposure to a novel shape, and supports a competitive model of figure assignment in which past experience serves as one of many figural cues.
The development of episodic memory: items, contexts, and relations.
Yim, Hyungwook; Dennis, Simon J; Sloutsky, Vladimir M
2013-11-01
Episodic memory involves the formation of relational structures that bind information about the stimuli people experience to the contexts in which they experience them. The ability to form and retain such structures may be at the core of the development of episodic memory. In the first experiment reported here, 4- and 7-year-olds were presented with paired-associate learning tasks requiring memory structures of different complexity. A multinomial-processing tree model was applied to estimate the use of different structures in the two age groups. The use of two-way list-context-to-target structures and three-way structures was found to increase between the ages of 4 and 7. Experiment 2 demonstrated that the ability to form increasingly complex relational memory structures develops between the ages of 4 and 7 years and that this development extends well into adulthood. These results have important implications for theories of memory development.
Service, Elisabet; Maury, Sini
2015-01-01
Working memory (WM) has been described as an interface between cognition and action, or a system for access to a limited amount of information needed in complex cognition. Access to morphological information is needed for comprehending and producing sentences. The present study probed WM for morphologically complex word forms in Finnish, a morphologically rich language. We studied monomorphemic (boy), inflected (boy+’s), and derived (boy+hood) words in three tasks. Simple span, immediate serial recall of words (Experiment 1), is assumed to rely mainly on information in the focus of attention. Sentence span, a dual task combining sentence reading with recall of the last word (Experiment 2) or of a word not included in the sentence (Experiment 3), is assumed to involve establishment of a search set in long-term memory for fast activation into the focus of attention. Recall was best for monomorphemic and worst for inflected word forms, with performance on derived words in between. However, there was an interaction between word type and experiment, suggesting that complex span is more sensitive to morphological complexity in derivations than simple span. This was explored in a within-subjects Experiment 4 combining all three tasks. The interaction between morphological complexity and task was replicated. Both inflected and derived forms increased load in WM. In simple span, recall of inflectional forms resulted in form errors. Complex span tasks were more sensitive to morphological load in derived words, possibly resulting from interference from morphological neighbors in the mental lexicon. The results are best understood as involving competition among inflectional forms when binding words from input into an output structure, and competition from morphological neighbors in secondary memory during cumulative retrieval-encoding cycles. Models of verbal recall need to be able to represent morphological as well as phonological and semantic information. PMID:25642181
NASA Astrophysics Data System (ADS)
Mishra, Bhoopesh
Recent studies have shown that diverse groups of bacteria adsorb metals to similar extents and that uptake can be modeled using a universal adsorption model. In this study, XAFS has been used to resolve whether the binding sites determined for single-species systems are responsible for adsorption in more complex natural bacterial assemblages. Results obtained from a series of XAFS experiments on pure Gram-positive and Gram-negative bacterial strains and on consortia of bacteria as a function of pH and Cd loading suggest that every bacterial strain has a complex physiology and that they are all slightly different from each other. Nevertheless, from the point of view of metal adsorption chemistry, the main difference between them lies in the site ratio of only three fundamental sites: carboxyl, phosphoryl and sulfide. Two completely different consortia of bacteria (obtained from natural river water, and from a soil system with severe organic contamination) were successfully modeled in the pH range 3.4-7.8 using the EXAFS models developed for single-species systems. These results can potentially have a very high impact on the modeling of complex bacterial systems in realistic geological settings, leading to further refinement and development of robust remediation strategies for metal contamination at the macroscopic level. In another study, the solution speciation of Pb and Cd with DFO-B was examined using a combination of techniques (ICP, TOC, thermodynamic modeling and XAFS). Results indicate that Pb does not complex with DFO-B at all until about pH 3.5, but forms a totally caged structure at pH 7.5. At intermediate pH conditions, a mixture of species (with one and two hydroxamate groups complexed) is formed. Cd, on the other hand, does not complex until pH 5, forms intermediate complexes at pH 8 and is totally chelated at pH 9. Further studies were conducted on Pb sorption to the mineral surface kaolinite with and without DFO-B. In the absence of DFO-B, results suggest outer-sphere and inner-sphere sorption of Pb on the kaolinite surface at acidic and circumneutral pH conditions, respectively. In the presence of DFO-B, bulk sorption studies indicated that Pb sorption is enhanced around pH 6 and inhibited above pH 6.5. This was confirmed by X-ray fluorescence measurements. An extended XAFS study clearly indicated unwrapping of the DFO-B molecule at the surface. Our study has unambiguously recognized it as a "Type A" ternary complex (a "Type A" complex means a surface-metal-ligand type of interaction). Taken together, bulk adsorption measurements and XAFS experiments represent a powerful approach for determining and modeling metal speciation and adsorption.
Pore-scale and Continuum Simulations of Solute Transport Micromodel Benchmark Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oostrom, Martinus; Mehmani, Yashar; Romero Gomez, Pedro DJ
Four sets of micromodel nonreactive solute transport experiments were conducted with flow velocity, grain diameter, pore-aspect ratio, and flow-focusing heterogeneity as the variables. The data sets were offered to pore-scale modeling groups to test their simulators. Each set consisted of two learning experiments, for which all results were made available, and a challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others are based on a lattice-Boltzmann (LB) approach, and one employed a computational fluid dynamics (CFD) technique. The learning experiments were used by the PN models to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used these experiments to appropriately discretize the grid representations. The continuum model used published non-linear relations between transverse dispersion coefficients and Peclet numbers to compute the required dispersivity input values. Comparisons between experimental and numerical results for the four challenge experiments show that all pore-scale models were able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in less dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models needed up to several days on supercomputers to resolve the more complex problems.
Perception of Simultaneous Auditive Contents
NASA Astrophysics Data System (ADS)
Tschinkel, Christian
Based on a model of pluralistic music, we may approach an aesthetic concept of music, which employs dichotic listening situations. The concept of dichotic listening stems from neuropsychological test conditions in lateralization experiments on brain hemispheres, in which each ear is exposed to a different auditory content. In the framework of such sound experiments, the question which primarily arises concerns a new kind of hearing, which is also conceivable without earphones as a spatial composition, and which may superficially be linked to its degree of complexity. From a psychological perspective, the degree of complexity is correlated with the degree of attention given, with the listener's musical or listening experience and the level of his appreciation. Therefore, we may possibly also expect a measurable increase in physical activity. Furthermore, a dialectic interpretation of such "dualistic" music presents itself.
NASA Astrophysics Data System (ADS)
Ermakov, Ilya; Crucifix, Michel; Munhoven, Guy
2013-04-01
Complex climate models carry a high computational burden; however, computational limitations may be avoided by using emulators. In this work we present several approaches to dynamical emulation (also called metamodelling) of the Multi-Box Model (MBM) coupled to the Model of Early Diagenesis in the Upper Sediment A (MEDUSA), which simulates the carbon cycle of the ocean and atmosphere [1]. We consider two experiments performed with the MBM-MEDUSA that explore Basin-to-Shelf Transfer (BST) dynamics. In both experiments the sea level is varied according to a paleo sea-level reconstruction. Such experiments are interesting because BST is an important cause of CO2 variation and the dynamics are potentially nonlinear. The output of interest is the variation of the carbon dioxide partial pressure in the atmosphere over the Pleistocene. The first experiment holds the BST constant during the simulation. In the second experiment the BST is interactively adjusted according to the sea level, since the sea level is the primary control of the growth and decay of coral reefs and other shelf carbon reservoirs. The main aim of the present contribution is to create a metamodel of the MBM-MEDUSA using the Dynamic Emulation Modelling methodology [2] and to compare the results obtained using linear and nonlinear methods. The first step in the emulation methodology used in this work is to identify the structure of the metamodel. In order to select an optimal approach for emulation we compare the identification results obtained with simple linear and more complex nonlinear models. For the first experiment, simple linear regression with the least-squares method is sufficient to obtain a 99.9% fit between the temporal outputs of the model and the metamodel. For the second experiment the MBM's output is highly nonlinear; in this case we apply nonlinear models such as NARX, the Hammerstein model, and an 'ad hoc' switching model. After the identification we perform the parameter mapping using spline interpolation and validate the emulator on a new set of parameters. References: [1] G. Munhoven, "Glacial-interglacial rain ratio changes: Implications for atmospheric CO2 and ocean-sediment interaction," Deep-Sea Res Pt II, vol. 54, pp. 722-746, 2007. [2] A. Castelletti et al., "A general framework for Dynamic Emulation Modelling in environmental problems," Environ Modell Softw, vol. 34, pp. 5-18, 2012.
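The linear identification step described for the first experiment can be sketched generically: fit an ARX-style metamodel to simulator input-output series by least squares. The "simulator" below is a stand-in linear system and the model orders are assumptions, not the actual MBM-MEDUSA configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "simulator": a stable linear response to a forcing u(t)
# (playing the role of sea level); the emulator must recover its dynamics.
n = 500
u = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.7 * y[t-1] - 0.1 * y[t-2] + 0.5 * u[t-1]

# ARX(2,1) regression: y[t] ~ y[t-1], y[t-2], u[t-1], fit by least squares
X = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
target = y[2:]
coef, *_ = np.linalg.lstsq(X, target, rcond=None)
print(coef)  # ~ [0.7, -0.1, 0.5]: the metamodel matches the simulator
```

For the nonlinear second experiment, the same regression skeleton would be extended with nonlinear regressors (as in NARX) or a static nonlinearity in series with the linear block (as in a Hammerstein model).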
Casey, F P; Baird, D; Feng, Q; Gutenkunst, R N; Waterfall, J J; Myers, C R; Brown, K S; Cerione, R A; Sethna, J P
2007-05-01
We apply the methods of optimal experimental design to a differential equation model for epidermal growth factor receptor signalling, trafficking and down-regulation. The model incorporates the role of a recently discovered protein complex made up of the E3 ubiquitin ligase Cbl, the guanine exchange factor (GEF) Cool-1 (β-Pix) and the Rho family G protein Cdc42. The complex has been suggested to be important in disrupting receptor down-regulation. We demonstrate that the model interactions can accurately reproduce the experimental observations, that they can be used to make predictions with accompanying uncertainties, and that we can apply ideas of optimal experimental design to suggest new experiments that reduce the uncertainty on unmeasurable components of the system.
A Novel Prediction Method about Single Components of Analog Circuits Based on Complex Field Modeling
Tian, Shulin; Yang, Chenglin
2014-01-01
Little research has addressed prognostics for analog circuits. The few existing methods lack a connection to circuit analysis when extracting and calculating features, so the FI (fault indicator) calculation often lacks a rational basis, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex-field modeling. Because faults of single components are the most numerous in analog circuits, the method starts with the circuit structure, analyzes the transfer function of the circuit, and implements complex-field modeling. Then, using an established parameter-scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in the model in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degeneration trend of analog circuits' single components. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of analog circuits' single components. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853
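The particle-filter update mentioned above can be illustrated with a generic bootstrap filter that tracks a drifting degradation parameter from noisy fault-indicator readings. The observation function, noise levels, and drift rate are invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_particles = 1000

# State: degradation parameter theta drifting slowly; observation: noisy
# fault indicator FI = h(theta) + noise. h is purely illustrative.
def h(theta):
    return np.exp(0.5 * theta)

particles = rng.normal(0.0, 0.1, n_particles)
true_theta = 0.0
for step in range(50):
    true_theta += 0.02                              # actual degradation drift
    fi_obs = h(true_theta) + 0.05 * rng.normal()    # measured fault indicator

    # Bootstrap PF: propagate, weight by likelihood, resample
    particles += 0.02 + 0.02 * rng.normal(size=n_particles)
    w = np.exp(-0.5 * ((fi_obs - h(particles)) / 0.05) ** 2)
    w /= w.sum()
    particles = rng.choice(particles, size=n_particles, p=w)

print(f"true theta = {true_theta:.2f}, PF estimate = {particles.mean():.2f}")
```

Extrapolating the filtered degradation trend to a failure threshold is what yields a remaining-useful-performance estimate.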
Nguyen, Hai; Pérez, Alberto; Bermeo, Sherry; Simmerling, Carlos
2016-01-01
The Generalized Born (GB) implicit solvent model has undergone significant improvements in accuracy for modeling of proteins and small molecules. However, GB still remains a less widely explored option for nucleic acid simulations, in part because fast GB models are often unable to maintain stable nucleic acid structures, or they introduce structural bias in proteins, making it difficult to apply GB models to simulations of protein-nucleic acid complexes. Recently, GB-neck2 was developed to improve the behavior of protein simulations. In an effort to create a more accurate model for nucleic acids, a procedure similar to the development of GB-neck2 is described here for nucleic acids. The resulting parameter set significantly reduces absolute and relative energy error relative to Poisson-Boltzmann for both nucleic acids and nucleic acid-protein complexes, when compared to its predecessor, the GB-neck model. This improvement in solvation energy calculation translates to increased structural stability for simulations of DNA and RNA duplexes, quadruplexes, and protein-nucleic acid complexes. The GB-neck2 model also enables successful folding of small DNA and RNA hairpins to near-native structures, as determined from comparison with experiment. The functional form and all required parameters are provided here and are also implemented in the AMBER software. PMID:26574454
NASA Astrophysics Data System (ADS)
Sessoms, D. A.; Amon, A.; Courbin, L.; Panizza, P.
2010-10-01
The binary path selection of droplets reaching a T junction is regulated by time-delayed feedback and nonlinear couplings. Such mechanisms result in complex dynamics of droplet partitioning: numerous discrete bifurcations between periodic regimes are observed. We introduce a model based on an approximation that makes this problem tractable. This allows us to derive analytical formulae that predict the occurrence of the bifurcations between consecutive regimes, establish selection rules for the period of a regime, and describe how the period and complexity of the droplet pattern in a cycle evolve with the key parameters of the system. We discuss the validity and limitations of our model, which describes semiquantitatively both numerical simulations and microfluidic experiments.
Simple systems that exhibit self-directed replication
NASA Technical Reports Server (NTRS)
Reggia, James A.; Armentrout, Steven L.; Chou, Hui-Hsien; Peng, Yun
1993-01-01
Biological experience and intuition suggest that self-replication is an inherently complex phenomenon, and early cellular automata models support that conception. More recently, simpler computational models of self-directed replication called sheathed loops have been developed. It is shown here that 'unsheathing' these structures and altering certain assumptions about the symmetry of their components leads to a family of nontrivial self-replicating structures, some substantially smaller and simpler than those previously reported. The dependence of replication time and transition function complexity on initial structure size, cell state symmetry, and neighborhood is examined. These results support the view that self-replication is not an inherently complex phenomenon but rather an emergent property arising from local interactions in systems that can be much simpler than is generally believed.
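For orientation, a minimal synchronous cellular-automaton engine with a von Neumann neighborhood is sketched below, together with a toy signal-propagation rule of the kind that circulates instructions around a replicating loop. The rule is purely illustrative; actual self-replicating structures require far larger transition tables.

```python
import numpy as np

def step(grid, rule):
    """One synchronous CA update with a von Neumann neighborhood.
    rule(center, north, east, south, west) -> new center state."""
    n = np.roll(grid, 1, axis=0); s = np.roll(grid, -1, axis=0)
    w = np.roll(grid, 1, axis=1); e = np.roll(grid, -1, axis=1)
    new = np.zeros_like(grid)
    for (i, j), c in np.ndenumerate(grid):
        new[i, j] = rule(c, n[i, j], e[i, j], s[i, j], w[i, j])
    return new

def rule(c, n, e, s, w):
    if c == 1 and w == 2:   # wire cell with a signal to its west lights up
        return 2
    if c == 2:              # signal cell reverts to wire behind it
        return 1
    return c

# A horizontal "wire" of 1s carrying a signal state 2, the way signals
# circulate around the arm of a self-replicating loop.
grid = np.zeros((5, 8), dtype=int)
grid[2, 1:7] = 1
grid[2, 1] = 2
for _ in range(3):
    grid = step(grid, rule)
print(grid[2])  # the signal has moved three cells to the east
```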
Organization and Dynamics of Receptor Proteins in a Plasma Membrane.
Koldsø, Heidi; Sansom, Mark S P
2015-11-25
The interactions of membrane proteins are influenced by their lipid environment, with key lipid species able to regulate membrane protein function. Advances in high-resolution microscopy can reveal the organization and dynamics of proteins and lipids within living cells at resolutions <200 nm. Parallel advances in molecular simulations provide near-atomic-resolution models of the dynamics of the organization of membranes of in vivo-like complexity. We explore the dynamics of proteins and lipids in crowded and complex plasma membrane models, thereby closing the gap in length and complexity between computations and experiments. Our simulations provide insights into the mutual interplay between lipids and proteins in determining mesoscale (20-100 nm) fluctuations of the bilayer, and in enabling oligomerization and clustering of membrane proteins.
Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro
2016-01-01
The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on a correct comparison between the different results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is important to use a statistical approach to indicate whether the differences are statistically significant when using this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for comparing the results obtained in Computational Intelligence problems, as well as in other fields, such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.
NASA Technical Reports Server (NTRS)
Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John
2011-01-01
A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes of relevance to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase the likelihood of success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block approach to validation, where cold, non-reacting test data were used first, followed by more complex reacting base flow validation.
The Importance of Protons in Reactive Transport Modeling
NASA Astrophysics Data System (ADS)
McNeece, C. J.; Hesse, M. A.
2014-12-01
The importance of pH in aqueous chemistry is evident; yet its role in reactive transport is complex. Consider a column flow experiment through silica glass beads. Take the column to be saturated and flowing with a solution of a distinct pH. An instantaneous change in the influent solution pH can yield a breakthrough curve with both a rarefaction and a shock component (a composite wave). This behavior is unique among aqueous ions in transport and is more complex than intuition would suggest. Analysis of the hyperbolic limit of this physical system can explain these first-order transport phenomena. This analysis shows that transport behavior depends heavily on the shape of the adsorption isotherm; hence accurate surface chemistry models are important in reactive transport. The proton adsorption isotherm has nonconstant concavity due to the proton's ability to partition into hydroxide. An eigenvalue analysis shows that an inflection point in the adsorption isotherm allows the development of composite waves. We use electrostatic surface complexation models to calculate realistic proton adsorption isotherms. Surface characteristics such as specific surface area and surface site density were determined experimentally. We validate the model by comparison against silica glass bead flow-through experiments. When coupled to surface complexation models, the transport equation captures the timing and behavior of breakthrough curves markedly better than with commonly used Langmuir assumptions. Furthermore, we use the adsorption isotherm to predict, a priori, the transport behavior of protons across pH composition space. Expansion of the model to multicomponent systems shows that proton adsorption can force composite waves to develop in the breakthrough curves of ions that would not otherwise exhibit such behavior. Given the abundance of reactive surfaces in nature and the nonlinearity of chemical systems, we conclude that building a greater understanding of proton adsorption is of utmost importance to reactive transport modeling.
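A minimal sketch of why the proton isotherm has an inflection: even a 2-pK surface-speciation model without the electrostatic correction the authors use yields a sigmoidal net-adsorption curve. The pK values below are illustrative, not the paper's fitted constants.

```python
import numpy as np

# 2-pK surface speciation for silica (>SiOH2+ / >SiOH / >SiO-), with no
# electrostatic correction. pK values are illustrative only.
pKa1, pKa2 = -1.0, 7.2

def net_proton_charge(pH):
    """Net sorbed protons per site: ([>SiOH2+] - [>SiO-]) / total sites."""
    h = 10.0 ** (-pH)
    k1, k2 = 10.0 ** (-pKa1), 10.0 ** (-pKa2)
    denom = 1.0 + h / k1 + k2 / h
    return (h / k1 - k2 / h) / denom

pH = np.linspace(2, 10, 9)
for p, q in zip(pH, net_proton_charge(pH)):
    print(f"pH {p:4.1f}: net charge per site {q:+.3f}")
# The sigmoidal shape, and in particular its inflection, is what permits
# composite waves (rarefaction plus shock) in the hyperbolic transport limit.
```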
Modelling of Tc migration in an un-oxidized fractured drill core from Äspö, Sweden
NASA Astrophysics Data System (ADS)
Huber, F. M.; Totskiy, Y.; Montoya Garcia, V.; Enzmann, F.; Trumm, M.; Wenka, A.; Geckeis, H.; Schaefer, T.
2015-12-01
The retention of redox-sensitive radionuclides (e.g. Pu, Np, U, Tc) in crystalline host rock depends greatly on the rock matrix and the rock redox capacity. Preserving drill cores from oxidation is therefore of paramount importance for reliably predicting near-natural radionuclide retention properties. Here, experimental results of HTO and Tc laboratory migration experiments in a naturally single-fractured, un-oxidized Äspö drill core are modelled using two different 2D models. Both models employ geometrical information obtained by μ-computed tomography (μCT) scanning of the drill core. The models differ in geometrical complexity: the first model (PPM-MD) consists of a simple parallel plate with a porous matrix adjacent to the fracture, whereas the second model (MPM) uses only the mid-plane of the 3D fracture (no porous matrix). Simulation results show that for higher flow rates (Peclet number > 1) the MPM satisfactorily describes the HTO breakthrough curves (BTC), whereas the PPM-MD model nicely reproduces the HTO BTC for small Pe numbers (< 1). These findings clearly highlight the influence of fracture geometry/flow field complexity on solute transport for Pe numbers > 1 and the dominating effect of matrix diffusion for Pe numbers < 1. Retention of Tc is modelled using a simple Kd approach in the case of the PPM-MD, and including first-order sorptive reduction/desorption kinetics in the case of the MPM. Batch-determined sorptive reduction/desorption kinetic rates and Kd values for Tc on non-oxidized Äspö diorite are used in the model and compared to best-fit values. By this approach, the transferability of kinetic data on sorptive reduction determined in static batch experiments to dynamic transport experiments is examined.
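For orientation, a linear Kd enters 1D advective transport as a retardation factor R = 1 + ρb·Kd/θ, slowing the sorbing Tc front relative to the conservative HTO tracer. The values below are assumptions for illustration, not the Äspö parameters:

```python
# Minimal sketch: how a linear Kd enters 1D advective transport as a
# retardation factor. All values are illustrative, not from the core.
rho_b = 2650.0 * (1 - 0.02)   # bulk density, kg/m^3 (2% matrix porosity)
theta = 0.02                  # porosity
Kd = 1.0e-3                   # sorption coefficient, m^3/kg

R = 1.0 + rho_b * Kd / theta  # retardation factor
v_water = 1.0e-5              # pore-water velocity, m/s
v_tc = v_water / R            # effective velocity of the sorbing Tc front

print(f"R = {R:.1f}; Tc front moves {R:.0f}x slower than HTO")
```

The kinetic MPM variant replaces this equilibrium partitioning with first-order attachment/detachment rate terms, which matters when residence times in the fracture are short compared to the reduction kinetics.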
IR spectroscopic study of the chemical composition of epiphytic lichens
NASA Astrophysics Data System (ADS)
Meysurova, A. F.; Khizhnyak, S. D.; Pakhomov, P. M.
2011-11-01
Changes in the chemical composition of lichens exposed to pollutants are investigated by means of FTIR spectroscopy. According to model experiments, alkyl nitrates, ammonium salts, amines, and sulfones develop in the lichen thallus through the action of ammonia and nitric and sulfuric acids. Spectroscopic data of modeling experiments enabled nitrogen- and sulfur-containing substances to be identified as the main air pollutants in the vicinity of a pig-breeding complex and information to be obtained on the content of the pollutants and their impact on the lichens.
Data mining and statistical inference in selective laser melting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamath, Chandrika
Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
Data mining and statistical inference in selective laser melting
Kamath, Chandrika
2016-01-11
Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations andmore » experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.« less
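A hedged sketch of the surrogate-plus-feature-selection idea in Python with scikit-learn; the process parameters and the toy "simulator" below are invented for illustration and are not the paper's models or data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical SLM process parameters: laser power (W), scan speed (m/s),
# hatch spacing (mm), layer thickness (mm) -- stand-ins for design variables.
X = rng.uniform([100, 0.2, 0.05, 0.02], [400, 2.0, 0.20, 0.06], (200, 4))

# Toy "simulator": relative density depends mainly on power and speed.
y = 0.99 - 0.02 * (X[:, 1] / X[:, 0] * 100) + 0.001 * rng.standard_normal(200)

# A cheap data-driven surrogate replaces expensive physics runs; its
# feature importances act as a simple feature-selection diagnostic.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["power", "speed", "hatch", "layer"],
                     surrogate.feature_importances_):
    print(f"{name:6s} importance: {imp:.2f}")
```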
Application of Biologically-Based Lumping To Investigate the ...
People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. However, investigators have often considered complex mixtures as one lumped entity. Valuable information can be obtained from these experiments, though this simplification provides little insight into the impact of a mixture's chemical composition on toxicologically-relevant metabolic interactions that may occur among its constituents. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically-based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate performance of our PBPK model. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course kinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for the 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 non-target chemicals. Application of this biologic
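The lumping step can be pictured as clustering constituents by biologically relevant parameters; a minimal sketch (assuming scikit-learn, with synthetic partition-coefficient and clearance values rather than the study's 109 measured chemicals):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical PK descriptors for 99 non-target constituents:
# log blood:air partition coefficient and log intrinsic clearance.
params = np.column_stack([rng.normal(1.0, 0.5, 99),
                          rng.normal(0.0, 1.0, 99)])

# Lump chemicals with similar kinetics; each lump is then treated as one
# pseudo-chemical in the PBPK model (cf. iterating the number of lumps).
for n_lumps in (1, 5, 20):
    km = KMeans(n_clusters=n_lumps, n_init=10, random_state=0).fit(params)
    print(n_lumps, "lumps -> within-cluster SSE:", round(km.inertia_, 2))
```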
Rao, Harita; Damian, Mariana S; Alshiekh, Alak; Elmroth, Sofi K C; Diederichsen, Ulf
2015-12-28
Conjugation of metal complexes with peptide scaffolds possessing high DNA binding affinity has been shown to modulate their biological activities and to enhance their interaction with DNA. In this work, a platinum complex/peptide chimera was synthesized based on a model of the Integration Host Factor (IHF), an architectural protein possessing sequence-specific DNA binding and bending abilities through its interaction with the minor groove. The model peptide consists of a cyclic unit resembling the minor groove binding subdomain of IHF, a positively charged lysine dendrimer for electrostatic interactions with the DNA phosphate backbone, and a flexible glycine linker tethering the two units. A norvaline-derived artificial amino acid was designed to contain a dimethylethylenediamine as a bidentate platinum chelating unit, and introduced into the IHF-mimicking peptides. The interaction of the chimeric peptides with various DNA sequences was studied using the following experiments: thermal melting studies, agarose gel electrophoresis for plasmid DNA unwinding experiments, and native and denaturing gel electrophoresis to visualize non-covalent and covalent peptide-DNA adducts, respectively. By incorporating the platinum metal center within the IHF-mimicking model peptide, we have attempted to improve its specificity and DNA targeting ability, particularly towards sequences containing adjacent guanine residues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Justin; Hund, Lauren
2017-02-01
Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
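The effective-sample-size scaling of the likelihood can be sketched as follows; the Gaussian error model, the velocity trace, and the ESS value are illustrative assumptions, not the paper's calibrated quantities:

```python
import numpy as np

def scaled_loglik(obs, sim, sigma, ess):
    """Gaussian log-likelihood for a time-resolved velocity trace,
    down-weighted by an effective sample size so that autocorrelated
    functional output does not overstate the information content."""
    n = obs.size
    resid = obs - sim
    loglik = (-0.5 * np.sum((resid / sigma) ** 2)
              - n * np.log(sigma * np.sqrt(2.0 * np.pi)))
    return (ess / n) * loglik

# Hypothetical interface-velocity trace and simulator output.
t = np.linspace(0.0, 1.0, 500)
obs = 3.0 * t**2 + 0.01 * np.random.default_rng(2).standard_normal(500)
sim = 3.0 * t**2
print(scaled_loglik(obs, sim, sigma=0.01, ess=25.0))
```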
Experiences in integrating auto-translated state-chart designs for model checking
NASA Technical Reports Server (NTRS)
Pingree, P. J.; Benowitz, E. G.
2003-01-01
In the complex environment of JPL's flight missions with increasing dependency on advanced software designs, traditional software validation methods of simulation and testing are being stretched to adequately cover the needs of software development.
ERIC Educational Resources Information Center
Andreae, John H.; Cleary, John G.
1976-01-01
The new mechanism, PUSS, enables experience of any complex environment to be accumulated in a predictive model. PURR-PUSS is a teachable robot system based on the new mechanism. Cumulative learning is demonstrated by a detailed example. (Author)
Analysis and Design of Complex Network Environments
2012-03-01
and J. Lowe, “The myths and facts behind cyber security risks for industrial control systems,” in the Proceedings of the VDE Kongress, VDE Congress... questions about 1) how to model them, 2) the design of experiments necessary to discover their structure (and thus adapt system inputs to optimize the... theoretical work that clarifies fundamental limitations of complex networks with network engineering and systems biology to implement specific designs and
Evolution of surface structure in laser-preheated, perturbed materials
Di Stefano, Carlos; Merritt, Elizabeth Catherine; Doss, Forrest William; ...
2017-02-03
Here, we report an experimental and computational study investigating the effects of laser preheat on the hydrodynamic behavior of a material layer. In particular, we find that perturbation of the surface of the layer results in a complex interaction, in which the bulk of the layer develops density, pressure, and temperature structure and in which the surface experiences instability-like behavior, including mode coupling. A uniform one-temperature preheat model is used to reproduce the experimentally observed behavior, and we find that this model can be used to capture the evolution of the layer, while also providing evidence of complexities in the preheat behavior. Lastly, this result has important consequences for inertially confined fusion plasmas, which can be difficult to diagnose in detail, as well as for laser hydrodynamics experiments, which generally depend on assumptions about initial conditions in order to interpret their results.
Vertical-probe-induced asymmetric dust oscillation in complex plasma.
Harris, B J; Matthews, L S; Hyde, T W
2013-05-01
An experiment on vertical dust oscillations in a complex plasma, in which the probe modifies the bulk plasma, is presented. Spherical, micron-sized particles within a Coulomb crystal levitated in the sheath above the powered lower electrode in a GEC reference cell are perturbed using a probe attached to a Zyvex S100 Nanomanipulator. By oscillating the probe potential sinusoidally, particle motion is found to be asymmetric, exhibiting a superharmonic response in one case. Using a simple electric field model for the plasma sheath, including a nonzero electric field at the sheath edge, dust particle charges are found by employing a balance of relevant forces and emission analysis. Adjusting the parameters of the electric field model allowed the predicted change in levitation height to be compared with experiment. A discrete oscillator Green's function is applied using the derived force, which accurately predicts the particle's motion and allows the determination of the electric field at the sheath edge.
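A minimal sketch of the discrete-oscillator Green's function idea for a damped, sinusoidally driven dust grain; the resonance, damping, and drive values below are hypothetical, not the experiment's derived quantities:

```python
import numpy as np

# Response of a damped oscillator x'' + gamma*x' + w0^2*x = f(t) (unit
# mass) to a sinusoidal probe force, via discrete Green's function
# convolution. All parameter values are illustrative.
w0, gamma = 2 * np.pi * 12.0, 4.0   # resonance (rad/s), damping (1/s)
dt = 1e-3
t = np.arange(0.0, 2.0, dt)
wd = np.sqrt(w0**2 - (gamma / 2)**2)  # damped natural frequency

G = np.exp(-gamma * t / 2) * np.sin(wd * t) / wd  # Green's function
f = 1e-2 * np.sin(2 * np.pi * 10.0 * t)           # probe drive at 10 Hz
x = np.convolve(f, G)[:t.size] * dt               # displacement response
print("steady-state amplitude ~", np.ptp(x[t.size // 2:]) / 2)
```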
Ferro, Stefania; De Luca, Laura; Barreca, Maria Letizia; Iraci, Nunzio; De Grazia, Sara; Christ, Frauke; Witvrouw, Myriam; Debyser, Zeger; Chimirri, Alba
2009-01-22
A new model of HIV-1 integrase-Mg-DNA complex that is useful for docking experiments has been built. It was used to study the binding mode of integrase strand transfer inhibitor 1 (CHI-1043) and other fluorine analogues. Molecular modeling results prompted us to synthesize the designed derivatives which showed potent enzymatic inhibition at nanomolar concentration, high antiviral activity, and low toxicity. Microwave assisted organic synthesis (MAOS) was employed in several steps of the synthetic pathway, thus reducing reaction times and improving yields.
Assessment of Surrogate Fractured Rock Networks for Evidence of Complex Behavior
NASA Astrophysics Data System (ADS)
Wood, T. R.; McJunkin, T. R.; Podgorney, R. K.; Glass, R. J.; Starr, R. C.; Stoner, D. L.; Noah, K. S.; LaViolette, R. A.; Fairley, J.
2001-12-01
A complex system or complex process is "one whose properties are not fully explained by an understanding of its component parts." Results from field experiments conducted at the Hell's Half-Acre field site (Arco, Idaho) suggest that the flow of water in an unsaturated, fractured medium exhibits characteristics of a complex process. A series of laboratory studies is underway with sufficient rigor to determine whether the complex behavior observed in the field is in fact a fundamental characteristic of water flow in unsaturated, fractured media. As an initial step, a series of four duplicate experiments has been performed using an array of bricks to simulate fractured, unsaturated media. The array consisted of 12 limestone blocks cut to uniform size (5 cm x 7 cm x 30 cm), stacked on end 4 blocks wide and 3 blocks high, with the interfaces between adjacent blocks representing 3 vertical fractures intersecting 2 horizontal fractures. Water was introduced at three point sources on the upper boundary of the model at the top of the vertical fractures. Water was applied under constant flux at a rate below the infiltration capacity of the system, thus maintaining unsaturated flow conditions. Water was collected from the lower boundary via fiberglass wicks at the bottom of each fracture. An automated system acquired and processed water inflow and outflow data and time-lapse photographic data during each of the 72-hour tests. From these experiments, a few general statements can be made about the overall advance of the wetting front in the surrogate fracture networks. For instance, flow generally converged with depth to the center fracture in the bottom row of bricks. Another observation is that fracture intersections integrate the steady flow in overlying vertical fractures and allow or cause short-duration, high-discharge pulses or "avalanches" of flow to quickly traverse the fracture network below. Smaller scale tests of single fractures and fracture intersections are underway to evaluate a wide array of unit processes that are believed to contribute to complex behavior. Examples of these smaller scale experiments include the role of fracture intersections in integrating a steady inflow to generate giant fluctuations in network discharge; the influence of microbe growth on flow; and the role of geochemistry in alterations of flow paths. Experiments are planned at the meso and field scale to document and understand the controls on self-organized behavior. Modeling is being conducted in parallel with the experiments to understand how simulations can be improved to capture the complexity of fluid flow in fractured rock vadose zones and to make better predictions of contaminant transport.
Schätzlein, Martina Palomino; Becker, Johanna; Schulze-Sünninghausen, David; Pineda-Lucena, Antonio; Herance, José Raul; Luy, Burkhard
2018-04-01
Isotope labeling enables the use of 13C-based metabolomics techniques with strongly improved resolution for a better identification of relevant metabolites and tracing of metabolic fluxes in cell and animal models, as required in fluxomics studies. However, even at high NMR-active isotope abundance, the acquisition of one-dimensional 13C and classical two-dimensional 1H,13C-HSQC experiments remains time consuming. With the aim to provide a shorter, more efficient alternative, herein we explored the ALSOFAST-HSQC experiment with its rapid acquisition scheme for the analysis of 13C-labeled metabolites in complex biological mixtures. As an initial step, the parameters of the pulse sequence were optimized to take into account the specific characteristics of the complex samples. We then applied the fast two-dimensional experiment to study the effect of different kinds of antioxidant gold nanoparticles on a HeLa cancer cell model grown on 13C glucose-enriched medium. As a result, 1H,13C 2D correlations could be obtained in a couple of seconds to a few minutes, allowing a simple and reliable identification of various 13C-enriched metabolites and the determination of specific variations between the different sample groups. Thus, it was possible to monitor glucose metabolism in the cell model and study the antioxidant effect of the coated gold nanoparticles in detail. Finally, with an experiment time of only half an hour, highly resolved 1H,13C-HSQC spectra using the ALSOFAST-HSQC pulse sequence were acquired, revealing the isotope-position patterns of the corresponding 13C nuclei from carbon multiplets. Graphical abstract: Fast NMR applied to metabolomics and fluxomics studies with gold nanoparticles.
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard
2016-11-07
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
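The regularized-inversion step can be illustrated with a linear toy problem; the forward matrix below merely stands in for a physics model and is not SIMPHONIES' actual interface:

```python
import numpy as np

# Minimal sketch of Tikhonov-regularized inversion of data d against a
# linear forward model A @ m = d; lam controls regularization strength.
rng = np.random.default_rng(3)
A = rng.standard_normal((100, 20))        # hypothetical forward model
m_true = np.zeros(20)
m_true[::5] = 1.0                         # sparse "true" parameters
d = A @ m_true + 0.05 * rng.standard_normal(100)  # noisy data

lam = 0.1
# Closed-form Tikhonov solution: m = (A^T A + lam I)^-1 A^T d
m_hat = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ d)
print("recovered coefficients:", np.round(m_hat, 2))
```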
Optical observables in stars with non-stationary atmospheres. [fireballs and cepheid models
NASA Technical Reports Server (NTRS)
Hillendahl, R. W.
1980-01-01
Experience gained by use of Cepheid modeling codes to predict the dimensional and photometric behavior of nuclear fireballs is used as a means of validating various computational techniques used in the Cepheid codes. Predicted results from Cepheid models are compared with observations of the continuum and lines in an effort to demonstrate that the atmospheric phenomena in Cepheids are quite complex but that they can be quantitatively modeled.
NASA Astrophysics Data System (ADS)
Pu, Z.; Zhang, H.
2013-12-01
Near-surface atmospheric observations are the main conventional observations for weather forecasts. However, in modern numerical weather prediction, the use of surface observations, especially data over complex terrain, remains a unique challenge. There are fundamental difficulties in assimilating surface observations with three-dimensional variational data assimilation (3DVAR). In our earlier study [1] (Pu et al. 2013), a series of observing system simulation experiments was performed with the ensemble Kalman filter (EnKF) and compared with 3DVAR for the ability to assimilate surface observations. Using the advanced research version of the Weather Research and Forecasting (WRF) model, results demonstrate that the EnKF can overcome some fundamental limitations that 3DVAR has in assimilating surface observations over complex terrain. Specifically, through its flow-dependent background error term, the EnKF produces more realistic analysis increments over complex terrain in general. Over complex terrain, the EnKF clearly performs better than 3DVAR because it is more capable of handling surface data in the presence of terrain misrepresentation. With this presentation, we further examine the impact of EnKF data assimilation on the predictability of atmospheric conditions over complex terrain with the WRF model and the observations obtained from the most recent field experiments of the Mountain Terrain Atmospheric Modeling and Observations (MATERHORN) Program. The MATERHORN program provides comprehensive observations over mountainous regions, allowing the opportunity to study the predictability of atmospheric conditions over complex terrain in great detail. Specifically, during fall 2012 and spring 2013, comprehensive observations were collected of soil states, surface energy budgets, near-surface atmospheric conditions, and profiling measurements from multiple platforms (e.g., balloon, lidar, radiosondes, etc.) over Dugway Proving Ground (DPG), Utah. With the near-surface observations and sounding data obtained during the MATERHORN fall 2012 field experiment, a month-long cycled EnKF analysis and forecast was produced with the WRF model and an advanced EnKF data assimilation system. Results are compared with the WRF near-real-time forecasting during the same month and a set of analyses with 3DVAR data assimilation. The overall evaluation suggests useful insights on the impacts of different data assimilation methods, surface and soil states, and terrain representation on the predictability of atmospheric conditions over mountainous terrain. Details will be presented. References: [1] Pu, Z., H. Zhang, and J. A. Anderson, 'Ensemble Kalman filter assimilation of near-surface observations over complex terrain: Comparison with 3DVAR for short-range forecasts.' Tellus A, vol. 65, 19620, 2013. http://dx.doi.org/10.3402/tellusa.v65i0.19620
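A minimal sketch of the stochastic EnKF analysis step that supplies the flow-dependent background error term discussed above; the three-variable state and single surface observation are hypothetical, not WRF or MATERHORN data:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err, H):
    """Stochastic EnKF analysis step. The background covariance is
    estimated from the ensemble itself, which is what lets the filter
    spread surface information over terrain in a flow-dependent way."""
    rng = np.random.default_rng(4)
    Pb = np.cov(ensemble)                       # ensemble covariance
    R = np.diag(obs_err**2)                     # observation error cov.
    K = Pb @ H.T @ np.linalg.inv(H @ Pb @ H.T + R)  # Kalman gain
    perturbed = obs[:, None] + obs_err[:, None] * rng.standard_normal(
        (obs.size, ensemble.shape[1]))          # perturbed observations
    return ensemble + K @ (perturbed - H @ ensemble)

# Hypothetical 3-variable state, 20 members, one surface observation.
ens = np.random.default_rng(5).standard_normal((3, 20))
H = np.array([[1.0, 0.0, 0.0]])                 # observe first variable
print(enkf_update(ens, np.array([0.5]), np.array([0.1]), H).mean(axis=1))
```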
Marin, Manuela M.; Leder, Helmut
2013-01-01
Subjective complexity has been found to be related to hedonic measures of preference, pleasantness and beauty, but there is no consensus about the nature of this relationship in the visual and musical domains. Moreover, the affective content of stimuli has been largely neglected so far in the study of complexity but is crucial in many everyday contexts and in aesthetic experiences. We thus propose a cross-domain approach that acknowledges the multidimensional nature of complexity and that uses a wide range of objective complexity measures combined with subjective ratings. In four experiments, we employed pictures of affective environmental scenes, representational paintings, and Romantic solo and chamber music excerpts. Stimuli were pre-selected to vary in emotional content (pleasantness and arousal) and complexity (low versus high number of elements). For each set of stimuli, in a between-subjects design, ratings of familiarity, complexity, pleasantness and arousal were obtained for a presentation time of 25 s from 152 participants. In line with Berlyne’s collative-motivation model, statistical analyses controlling for familiarity revealed a positive relationship between subjective complexity and arousal, and the highest correlations were observed for musical stimuli. Evidence for a mediating role of arousal in the complexity-pleasantness relationship was demonstrated in all experiments, but was only significant for females with regard to music. The direction and strength of the linear relationship between complexity and pleasantness depended on the stimulus type and gender. For environmental scenes, the root mean square contrast measures and measures of compressed file size correlated best with subjective complexity, whereas only edge detection based on phase congruency yielded equivalent results for representational paintings. Measures of compressed file size and event density also showed positive correlations with complexity and arousal in music, which is relevant for the discussion on which aspects of complexity are domain-specific and which are domain-general. PMID:23977295
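Two of the objective complexity measures named above, RMS contrast and compressed file size, are simple to compute; a minimal sketch on synthetic test images (not the study's stimuli):

```python
import zlib
import numpy as np

def rms_contrast(img):
    """Root-mean-square contrast of a grayscale image in [0, 1]."""
    return float(img.std())

def compressed_size(img):
    """Length of the zlib-compressed pixel stream: a crude,
    domain-general proxy for objective complexity."""
    return len(zlib.compress((img * 255).astype(np.uint8).tobytes()))

rng = np.random.default_rng(6)
smooth = np.tile(np.linspace(0, 1, 128), (128, 1))  # low complexity
noisy = rng.uniform(size=(128, 128))                # high complexity
for name, im in [("smooth", smooth), ("noisy", noisy)]:
    print(name, round(rms_contrast(im), 3), compressed_size(im))
```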
Colle, Livia; Pellecchia, Giovanni; Moroni, Fabio; Carcione, Antonino; Nicolò, Giuseppe; Semerari, Antonio; Procacci, Michele
2017-01-01
Social sharing capacities have attracted attention from a number of fields of social cognition and have been variously defined and analyzed in numerous studies. Social sharing consists in the subjective awareness that aspects of the self's experience are held in common with other individuals. The definition of social sharing must take a variety of elements into consideration: the motivational element, the contents of the social sharing experience, the emotional responses it evokes, the behavioral outcomes, and finally, the circumstances and the skills which enable social sharing. The primary objective of this study is to explore some of the diverse forms of human social sharing and to classify them according to levels of complexity. We identify four different types of social sharing, categorized according to the nature of the content being shared and the complexity of the mindreading skills required. The second objective of this study is to consider possible applications of this graded model of social sharing experience in clinical settings. Specifically, this model may support the development of graded, focused clinical interventions for patients with personality disorders characterized by severe social withdrawal.
NASA Astrophysics Data System (ADS)
Clark, Martyn; Essery, Richard
2017-04-01
When faced with the complex and interdisciplinary challenge of building process-based land models, different modelers make different decisions at different points in the model development process. These modeling decisions are generally based on several considerations, including fidelity (e.g., what approaches faithfully simulate observed processes), complexity (e.g., which processes should be represented explicitly), practicality (e.g., what is the computational cost of the model simulations; are there sufficient resources to implement the desired modeling concepts), and data availability (e.g., is there sufficient data to force and evaluate models). Consequently the research community, comprising modelers of diverse background, experience, and modeling philosophy, has amassed a wide range of models, which differ in almost every aspect of their conceptualization and implementation. Model comparison studies have been undertaken to explore model differences, but have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the different models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than on a systematic analysis of model shortcomings. This presentation will summarize the use of "multiple-hypothesis" modeling frameworks to understand differences in process-based snow models. Multiple-hypothesis frameworks define a master modeling template, and include a wide variety of process parameterizations and spatial configurations that are used in existing models. Such frameworks provide the capability to decompose complex models into the individual decisions that are made as part of model development, and evaluate each decision in isolation. It is hence possible to attribute differences in system-scale model predictions to individual modeling decisions, providing scope to mimic the behavior of existing models, understand why models differ, characterize model uncertainty, and identify productive pathways to model improvement. Results will be presented applying multiple hypothesis frameworks to snow model comparison projects, including PILPS, SnowMIP, and the upcoming ESM-SnowMIP project.
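The master-template idea can be sketched as an enumeration of process-parameterization decisions; the four decisions and their options below are illustrative (Sturm and Jordan are named only as examples of snow conductivity schemes) and are not the framework's actual API:

```python
from itertools import product

# Minimal sketch: a master template whose leaves are alternative process
# parameterizations. Every combination is one model instance, so
# inter-model differences can be traced to single decisions.
choices = {
    "albedo":       ["constant", "age-based"],
    "compaction":   ["linear", "viscous"],
    "conductivity": ["Sturm", "Jordan"],
    "layering":     ["1-layer", "multi-layer"],
}
configs = [dict(zip(choices, combo)) for combo in product(*choices.values())]
print(len(configs), "model variants, e.g.:", configs[0])
```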
Fatigue Damage of Collagenous Tissues: Experiment, Modeling and Simulation Studies
Martin, Caitlin; Sun, Wei
2017-01-01
Mechanical fatigue damage is a critical issue for soft tissues and tissue-derived materials, particularly for musculoskeletal and cardiovascular applications; yet, our understanding of the fatigue damage process is incomplete. Soft tissue fatigue experiments are often difficult and time-consuming to perform, which has hindered progress in this area. However, the recent development of soft-tissue fatigue-damage constitutive models has enabled simulation-based fatigue analyses of tissues under various conditions. Computational simulations facilitate highly controlled and quantitative analyses to study the distinct effects of various loading conditions and design features on tissue durability; thus, they are advantageous over complex fatigue experiments. Although significant work to calibrate the constitutive models from fatigue experiments and to validate predictability remains, further development in these areas will add to our knowledge of soft-tissue fatigue damage and will facilitate the design of durable treatments and devices. In this review, the experimental, modeling, and simulation efforts to study collagenous tissue fatigue damage are summarized and critically assessed. PMID:25955007
Finding Furfural Hydrogenation Catalysts via Predictive Modelling
Strassberger, Zea; Mooijman, Maurice; Ruijter, Eelco; Alberts, Albert H; Maldonado, Ana G; Orru, Romano V A; Rothenberg, Gadi
2010-01-01
We combine multicomponent reactions, catalytic performance studies, and predictive modelling to find transfer hydrogenation catalysts. An initial set of 18 ruthenium-carbene complexes was synthesized and screened in the transfer hydrogenation of furfural to furfurol with isopropyl alcohol. The complexes gave varied yields, from 62% up to >99.9%, with no obvious structure/activity correlations. Control experiments proved that the carbene ligand remains coordinated to the ruthenium centre throughout the reaction. Deuterium-labelling studies showed a secondary isotope effect (kH:kD = 1.5). Further mechanistic studies showed that this transfer hydrogenation follows the so-called monohydride pathway. Using these data, we built a predictive model for 13 of the catalysts, based on 2D and 3D molecular descriptors. We tested and validated the model using the remaining five catalysts (cross-validation, R2 = 0.913). Then, with this model, the conversion and selectivity were predicted for four completely new ruthenium-carbene complexes. These four catalysts were then synthesized and tested. The results were within 3% of the model's predictions, demonstrating the validity and value of predictive modelling in catalyst optimization. PMID:23193388
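A hedged sketch of the descriptor-based train/validate workflow described above, using a linear model on synthetic descriptors in place of the paper's 2D/3D descriptor set:

```python
import numpy as np

# Fit on 13 catalysts, validate on 5 held out (mirroring the paper's
# split). Descriptors and yields here are synthetic, for illustration.
rng = np.random.default_rng(7)
X = rng.standard_normal((18, 3))          # 3 hypothetical descriptors
y = 80 + 8 * X[:, 0] - 5 * X[:, 1] + rng.standard_normal(18)  # yield (%)

train, test = np.arange(13), np.arange(13, 18)
coef, *_ = np.linalg.lstsq(np.c_[np.ones(13), X[train]], y[train],
                           rcond=None)    # ordinary least squares
pred = np.c_[np.ones(5), X[test]] @ coef
ss_res = np.sum((y[test] - pred) ** 2)
ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
print("held-out R^2:", round(1 - ss_res / ss_tot, 3))
```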
Managing complexity in simulations of land surface and near-surface processes
Coon, Ethan T.; Moulton, J. David; Painter, Scott L.
2016-01-12
Increasing computing power and the growing role of simulation in Earth systems science have led to an increase in the number and complexity of processes in modern simulators. We present a multiphysics framework that specifies interfaces for coupled processes and automates weak and strong coupling strategies to manage this complexity. Process management is enabled by viewing the system of equations as a tree, where individual equations are associated with leaf nodes and coupling strategies with internal nodes. A dynamically generated dependency graph connects a variable to its dependencies, streamlining and automating model evaluation, easing model development, and ensuring models are modular and flexible. Additionally, the dependency graph is used to ensure that data requirements are consistent between all processes in a given simulation. Here we discuss the design and implementation of these concepts within the Arcos framework, and demonstrate their use for verification testing and hypothesis evaluation in numerical experiments.
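The dependency-graph evaluation can be sketched with Python's standard-library topological sort; the variables, evaluator closures, and constitutive forms below are invented for illustration and are not Arcos code:

```python
from graphlib import TopologicalSorter

# Each variable lists its dependencies; a topological sort then yields a
# valid evaluation order automatically, as in the design described above.
deps = {
    "saturation":   {"pressure"},
    "conductivity": {"saturation", "temperature"},
    "water_flux":   {"conductivity", "pressure"},
}
evaluators = {
    "pressure":     lambda s: 101325.0,
    "temperature":  lambda s: 283.0,
    "saturation":   lambda s: min(1.0, s["pressure"] / 2e5),
    "conductivity": lambda s: 1e-12 * s["saturation"] * (s["temperature"] / 273),
    "water_flux":   lambda s: -s["conductivity"] * s["pressure"],
}
state = {}
for var in TopologicalSorter(deps).static_order():
    state[var] = evaluators[var](state)   # dependencies already computed
print(state["water_flux"])
```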
Overlapping community detection in weighted networks via a Bayesian approach
NASA Astrophysics Data System (ADS)
Chen, Yi; Wang, Xiaolong; Xiang, Xin; Tang, Buzhou; Chen, Qingcai; Fan, Shixi; Bu, Junzhao
2017-02-01
Complex networks as a powerful way to represent complex systems have been widely studied during the past several years. One of the most important tasks of complex network analysis is to detect communities embedded in networks. In the real world, weighted networks are very common and may contain overlapping communities where a node is allowed to belong to multiple communities. In this paper, we propose a novel Bayesian approach, called the Bayesian mixture network (BMN) model, to detect overlapping communities in weighted networks. The advantages of our method are (i) providing soft-partition solutions in weighted networks; (ii) providing soft memberships, which quantify 'how strongly' a node belongs to a community. Experiments on a large number of real and synthetic networks show that our model has the ability in detecting overlapping communities in weighted networks and is competitive with other state-of-the-art models at shedding light on community partition.
1982-02-01
1968, 1969 and 1972 Conferences. Certain items on the list delineate problems needing research (reattachment zones, inviscid/boundary layer interactions)... viscous energy equation--each in unaveraged form. As Peter Bradshaw has put it, God gave us one good model. Why should there be another model that is
NASA Astrophysics Data System (ADS)
Jang, J. H.; Nemer, M.
2015-12-01
The U.S. DOE Waste Isolation Pilot Plant (WIPP) is a deep underground repository for the permanent disposal of transuranic (TRU) radioactive waste. The WIPP is located in the Permian Delaware Basin near Carlsbad, New Mexico, U.S.A. The TRU waste includes, but is not limited to, iron-based alloys and the complexing agent citric acid. Iron is also present from the steel used in the waste containers. The objective of this analysis is to derive the Pitzer activity coefficients for the Na+ and FeCit- pair to expand the current WIPP thermodynamic database. An aqueous model for the dissolution of Fe(OH)2(s) in a Na3Cit solution was fitted to the experimentally measured solubility data. The aqueous model consists of several chemical reactions and related Pitzer interaction parameters. Specifically, the Pitzer interaction parameters for the Na+ and FeCit- pair (β(0), β(1), and Cφ) plus the stability constant for the FeCit- species were fitted to the experimental data. Anoxic gloveboxes were used to keep the oxygen level low (<1 ppm) throughout the experiments due to redox sensitivity. EQ3NR, a computer program for geochemical aqueous speciation-solubility calculations, packaged in EQ3/6 v.8.0a, calculates the aqueous speciation and saturation index using an aqueous model addressed in EQ3/6's database. The saturation index indicates how far the system is from equilibrium with respect to the solid of interest. Thus, the smaller the sum of squared saturation indices that the aqueous model calculates for the given set of experiments, the more closely the model attributes equilibrium to each individual experiment with respect to the solid of interest. The calculation of aqueous speciation and saturation indices was repeated by adjusting the stability constant of FeCit-, β(0), β(1), and Cφ in the database until values were found that make the sum of squared saturation indices smallest for the given set of experiments. Results will be presented at the time of the conference.
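The fitting loop, driving saturation indices toward zero by adjusting log K and Pitzer-like parameters, can be sketched as a least-squares problem; the si() function below is a toy stand-in for a full EQ3NR speciation calculation, and all data are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical solubility experiments: dissolved Fe and ionic strength.
obs_conc = np.array([1e-4, 3e-4, 1e-3, 3e-3])   # mol/L (made up)
ionic = np.array([0.1, 0.3, 1.0, 3.0])          # molal (made up)

def si(params):
    """Toy saturation index, log(Q/K), with a crude sqrt(I) activity
    term standing in for the Pitzer formalism; driven to zero by the fit."""
    logk, b0 = params
    return np.log10(obs_conc) + b0 * np.sqrt(ionic) - logk

fit = least_squares(si, x0=[-3.0, 0.1])
print("fitted log K, beta0:", np.round(fit.x, 3),
      "| sum SI^2:", round(np.sum(fit.fun**2), 4))
```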
A Three-Dimensional DOSY HMQC Experiment for the High-Resolution Analysis of Complex Mixtures
NASA Astrophysics Data System (ADS)
Barjat, Hervé; Morris, Gareth A.; Swanson, Alistair G.
1998-03-01
A three-dimensional experiment is described in which NMR signals are separated according to their proton chemical shift, 13C chemical shift, and diffusion coefficient. The sequence is built up from a stimulated echo sequence with bipolar field gradient pulses and a conventional decoupled HMQC sequence. Results are presented for a model mixture of quinine, camphene, and geraniol in deuteriomethanol.
Vibrational relaxation of I2 in complexing solvents: The role of solvent-solute attractive forces
NASA Astrophysics Data System (ADS)
Shiang, Joseph J.; Liu, Hongjun; Sension, Roseanne J.
1998-12-01
Femtosecond transient absorption studies of I2-arene complexes, with arene=hexamethylbenzene (HMB), mesitylene (MST), or m-xylene (mX), are used to investigate the effect of solvent-solute attractive forces upon the rate of vibrational relaxation in solution. Comparison of measurements on I2-MST complexes in neat mesitylene and I2-MST complexes diluted in carbon tetrachloride demonstrates that binary solvent-solute attractive forces control the rate of vibrational relaxation in this prototypical model of diatomic vibrational relaxation. The data obtained for different arenes demonstrate that the rate of I2 relaxation increases with the magnitude of the I2-arene attractive interaction. I2-HMB relaxes much faster than I2 in MST or mX. The results of these experiments are discussed in terms of both isolated binary collision and instantaneous normal mode models for vibrational relaxation.
McMahon, Michelle A; Christopher, Kimberly A
2011-08-19
As the complexity of health care delivery continues to increase, educators are challenged to determine educational best practices to prepare BSN students for the ambiguous clinical practice setting. Integrative, active, and student-centered curricular methods are encouraged to foster student ability to use clinical judgment for problem solving and informed clinical decision making. The proposed pedagogical model of progressive complexity in nursing education suggests gradually introducing students to complex and multi-contextual clinical scenarios through the utilization of case studies and problem-based learning activities, with the intention to transition nursing students into autonomous learners and well-prepared practitioners at the culmination of a nursing program. Exemplar curricular activities are suggested to potentiate student development of a transferable problem solving skill set and a flexible knowledge base to better prepare students for practice in future novel clinical experiences, which is a mutual goal for both educators and students.
Fragment-based modelling of single stranded RNA bound to RNA recognition motif containing proteins
de Beauchene, Isaure Chauvot; de Vries, Sjoerd J.; Zacharias, Martin
2016-01-01
Protein-RNA complexes are important for many biological processes. However, structural modeling of such complexes is hampered by the high flexibility of RNA. Particularly challenging is the docking of single-stranded RNA (ssRNA). We have developed a fragment-based approach to model the structure of ssRNA bound to a protein, based on only the protein structure, the RNA sequence and conserved contacts. The conformational diversity of each RNA fragment is sampled by an exhaustive library of trinucleotides extracted from all known experimental protein–RNA complexes. The method was applied to ssRNA with up to 12 nucleotides which bind to dimers of the RNA recognition motifs (RRMs), a highly abundant eukaryotic RNA-binding domain. The fragment based docking allows a precise de novo atomic modeling of protein-bound ssRNA chains. On a benchmark of seven experimental ssRNA–RRM complexes, near-native models (with a mean heavy-atom deviation of <3 Å from experiment) were generated for six out of seven bound RNA chains, and even more precise models (deviation < 2 Å) were obtained for five out of seven cases, a significant improvement compared to the state of the art. The method is not restricted to RRMs but was also successfully applied to Pumilio RNA binding proteins. PMID:27131381
Design of Low Complexity Model Reference Adaptive Controllers
NASA Technical Reports Server (NTRS)
Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan
2012-01-01
Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, but only make testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented, along with details of the controllers' implementations.
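A minimal sketch of the kind of low-complexity adaptive augmentation described above: a scalar model reference adaptive control (MRAC) loop with a gradient (MIT-rule-style) parameter update law. The gains, plant, and reference model are illustrative, not the flight controllers' actual designs:

```python
# Scalar plant with unknown control effectiveness tracks a first-order
# reference model; the adaptive gain follows a gradient update law.
dt, gamma = 0.001, 5.0           # time step (s), adaptation rate
a_m, b_m = -2.0, 2.0             # reference model: xm' = a_m*xm + b_m*r
b_plant = 0.5                    # unknown control effectiveness
x = xm = 0.0
theta = 0.0                      # adaptive feedforward gain

for _ in range(20000):           # 20 s of simulated flight
    r = 1.0                      # step command
    u = theta * r                # adaptive control law
    x += dt * (-2.0 * x + b_plant * u)   # plant dynamics
    xm += dt * (a_m * xm + b_m * r)      # reference model dynamics
    e = x - xm                           # tracking error
    theta += dt * (-gamma * e * r)       # gradient (MIT-rule) update

print("final tracking error:", round(x - xm, 4), "| theta:", round(theta, 3))
```

For this plant the ideal gain is theta = 4 (so that b_plant*theta matches b_m), and the update law drives theta toward that value as the tracking error decays.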
Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M
2017-01-01
Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
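The binomial significance test can be sketched directly; the counts and per-pair probability below are invented, and in GOTHiC itself the probability is derived from coverage-based bias terms rather than assumed:

```python
from scipy.stats import binom

# Given N total informative read pairs and a probability p that a random
# ligation event hits this fragment pair, the p-value for observing k or
# more contacts is a binomial tail probability.
N = 10_000_000     # total informative read pairs (hypothetical)
p = 2.0e-7         # random-ligation probability for this pair (hypothetical)
k = 12             # observed contact count (hypothetical)

pval = binom.sf(k - 1, N, p)   # P(X >= k)
print(f"expected {N * p:.2f}, observed {k}, p-value {pval:.2e}")
```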
Tinkering With AGCMs To Investigate Atmospheric Behavior
NASA Astrophysics Data System (ADS)
Bitz, C. M.
2014-12-01
My experience teaching a course in global climate modeling has shown that students (and instructors) with wide-ranging backgrounds in earth science learn effectively about the complexity of climate by tinkering with model components. As an example, I will present a series of experiments in an AGCM with highly simplified geometries for ocean and land to test the response of the atmosphere to variations in basic parameters. The figure below shows an example of how the zonal wind changes with surface roughness and orography. The pinnacle of the experiments explored in my course was the outcome of a homework assignment in which students reduced the cloud droplet radius by 40% over the ocean, and the results surprised students and instructor alike.
Solubility enhancement of seven metal contaminants using carboxymethyl-β-cyclodextrin (CMCD)
NASA Astrophysics Data System (ADS)
Skold, Magnus E.; Thyne, Geoffrey D.; Drexler, John W.; McCray, John E.
2009-07-01
Carboxymethyl-β-cyclodextrin (CMCD) has been suggested as a complexing agent for remediation of sites co-contaminated with metals and organic pollutants. As part of an attempt to construct a geochemical complexation model for metal-CMCD interactions, conditional formation constants for the complexes between CMCD and 7 metal ions (Ba, Ca, Cd, Ni, Pb, Sr, and Zn) are estimated from experimental data. Stable metal concentrations were reached after approximately 1 day, and estimated logarithmic conditional formation constants range from -3.2 to -5.1 with confidence intervals within ±0.08 log units. Experiments performed at 10 °C and 25 °C show that temperature affects the solubility of the metal salts but the strength of CMCD-metal complexes is not affected by this temperature variation. The conditional stability constants and complexation model presented in this work can be used to screen CMCD as a potential remediation agent for clean-up of contaminated soil and groundwater.
Trace Metal-Humic Complexes in Natural Waters: Insights From Speciation Experiments
NASA Astrophysics Data System (ADS)
Stern, J. C.; Salters, V.; Sonke, J.
2006-12-01
The DOM cycle is intimately linked to the cycling and bioavailability of trace metals in aqueous environments. The presence or absence of DOM in the water column can determine whether trace elements will be present in limited quantities as a nutrient, or in surplus quantities as a toxicant. Humic substances (HS), which represent the refractory products of DOM degradation, strongly affect the speciation of trace metals in natural waters. To simulate metal-HS interactions in nature, experiments must be carried out using trace metal concentrations. Sensitive detection systems such as ICP-MS make working with small (nanomolar) concentrations possible. Capillary electrophoresis coupled with ICP-MS (CE-ICP-MS) has recently been identified as a rapid and accurate method to separate metal species and calculate conditional binding constants (log K_c) of metal-humic complexes. CE-ICP-MS was used to measure partitioning of metals between humic substances and a competing ligand (EDTA) and to calculate binding constants of rare earth element (REE) and Th, Hf, and Zr-humic complexes at pH 3.5-8 and ionic strength of 0.1. Equilibrium dialysis ligand exchange (EDLE) experiments to validate the CE-ICP-MS method were performed to separate the metal-HS and metal-EDTA species by partitioning due to size exclusion via diffusion through a 1000 Da membrane. CE-ICP-MS experiments were also conducted to compare binding constants of REE with humic substances of various origin, including soil, peat, and aquatic DOM. Results of our experiments show an increase in log K_c with decreasing ionic radius for REE-humic complexes (the lanthanide contraction effect). Conditional binding constants of tetravalent metal-humic complexes were found to be several orders of magnitude higher than those of REE-humic complexes, indicating that tetravalent metals have a very strong affinity for humic substances. Because thorium is often used as a proxy for the tetravalent actinides, Th-HS binding constants can allow us to assess the importance of tetravalent actinide-humic complexes in groundwater transport from nuclear repositories. Our results suggest that tetravalent actinide-humic complexes could be more important to account for in predictive speciation models than previously thought.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romander, C M; Cagliostro, D J
Five experiments were performed to help evaluate the structural integrity of the reactor vessel and head design and to verify code predictions. In the first experiment (SM 1), a detailed model of the head was loaded statically to determine its stiffness. In the remaining four experiments (SM 2 to SM 5), models of the vessel and head were loaded dynamically under a simulated 661 MW-s hypothetical core disruptive accident (HCDA). Models SM 2 to SM 4, each of increasing complexity, systematically showed the effects of upper internals structures, a thermal liner, core support platform, and torospherical bottom on vessel response. Model SM 5, identical to SM 4 but more heavily instrumented, demonstrated experimental reproducibility and provided more comprehensive data. The models consisted of a Ni 200 vessel and core barrel, a head with shielding and simulated component masses, and an upper internals structure (UIS).
Orlowska, Ewelina; Roller, Alexander; Pignitter, Marc; Jirsa, Franz; Krachler, Regina; Kandioller, Wolfgang; Keppler, Bernhard K
2017-01-15
A series of monomeric and dimeric Fe(III) complexes with O,O-, O,N-, and O,S-coordination motifs has been prepared and characterized by standard analytical methods in order to elucidate their potential to act as model compounds for aquatic humic acids. Due to the postulated reduction of iron in humic acids and its subsequent uptake by microorganisms, the redox behavior of the models was investigated with cyclic voltammetry. Most of the investigated compounds showed iron reduction potentials accessible to biological reducing agents. Additionally, the observed reduction processes were predominantly irreversible, suggesting that subsequent reactions can take place after reduction of the iron center. The stability of the synthesized complexes in pure water and artificial seawater was also monitored from 24 h up to 21 days by means of UV-Vis spectrometry. Several complexes remained stable even after 21 days, showing only partial precipitation, but some showed changes in their UV-Vis spectra after as little as 24 h, connected to protonation/deprotonation processes as well as redox processes and degradation of the complexes. The ability to act as an iron source for primary producers was tested in algal growth experiments with two marine algae species, Chlorella salina and Prymnesium parvum. Some of the compounds showed effects on the algal cultures comparable with those of natural humic acids and better than those of samples kept under ideal conditions. These findings help to clarify which functional groups of humic acids could be responsible for reversible iron binding and transport in aquatic humic substances.
Analysis of Modeling Processes
ERIC Educational Resources Information Center
Bandura, Albert
1975-01-01
Traditional learning theories stress that people are either conditioned through reward and punishment or by close association with neutral or evocative stimuli. These direct experience theories do not account for people's learning complex behavior through observation. Attentional, retention, motoric reproduction, reinforcement, and motivational…
The CAESAR models for developmental toxicity
The new REACH legislation requires assessment of a high number of chemicals in the European market for several endpoints. Developmental Toxicity results amongst the most difficult endpoint to assess, due to the complexity, length and costs of experiments. Following the encouragem...
NASA Technical Reports Server (NTRS)
McElroy, Mark; Jackson, Wade; Pankow, Mark
2016-01-01
It is not easy to isolate the damage mechanisms associated with low-velocity impact in composites using traditional experiments. In this work, a new experiment is presented with the goal of generating data representative of progressive damage processes caused by low-velocity impact in composite materials. Carbon fiber reinforced polymer test specimens were indented quasi-statically such that a biaxial-bending state of deformation was achieved. As a result, a three-dimensional damage process, involving delamination and delamination-migration, was observed and documented using ultrasonic and x-ray computed tomography. Results from two different layups are presented in this paper. Delaminations occurred at up to three different interfaces and interacted with one another via transverse matrix cracks. Although this damage pattern is much less complex than that of low-velocity impact on a plate, it is more complex than that of a standard delamination coupon test and provides a way to generate delamination, matrix cracking, and delamination-migration in a controlled manner. By limiting the damage process in the experiment to three delaminations, the same damage mechanisms seen during impact could be observed but in a simplified manner. This type of data is useful in stages of model development and validation when the model is capable of simulating simple tests, but not yet capable of simulating more complex and realistic damage scenarios.
NASA Astrophysics Data System (ADS)
Ehlmann, Bryon K.
Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.
Ion exchange of several radionuclides on the hydrous crystalline silicotitanate, UOP IONSIV IE-911
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huckman, M.E.; Latheef, I.M.; Anthony, R.G.
1999-04-01
The crystalline silicotitanate, UOP IONSIV IE-911, is a proven material for removing radionuclides from a wide variety of waste streams. It is superior for removing several radionuclides from the highly alkaline solutions typical of DOE wastes. This laboratory previously developed an equilibrium model applicable to complex solutions for IE-910 (the powder form of the granular IE-911), and more recently, the authors have developed several single-component ion-exchange kinetic models for predicting column breakthrough curves and batch reactor concentration histories. In this paper, the authors model ion-exchange column performance using effective diffusivities determined from batch kinetic experiments. This technique is preferable because the batch experiments are easier, faster, and cheaper to perform than column experiments. They also extend these ideas to multicomponent systems. Finally, they evaluate the ability of the equilibrium model to predict data for IE-911.
Bayesian component separation: The Planck experience
NASA Astrophysics Data System (ADS)
Wehus, Ingunn Kathrine; Eriksen, Hans Kristian
2018-05-01
Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.
NASA Astrophysics Data System (ADS)
Tourret, D.; Karma, A.; Clarke, A. J.; Gibbs, P. J.; Imhoff, S. D.
2015-06-01
We present a three-dimensional (3D) extension of a previously proposed multi-scale Dendritic Needle Network (DNN) approach for the growth of complex dendritic microstructures. Using a new formulation of the DNN dynamics equations for dendritic paraboloid-branches of a given thickness, one can directly extend the DNN approach to 3D modeling. We validate this new formulation against known scaling laws and analytical solutions that describe the early transient and steady-state growth regimes, respectively. Finally, we compare the predictions of the model to in situ X-ray imaging of Al-Cu alloy solidification experiments. The comparison shows a very good quantitative agreement between 3D simulations and thin sample experiments. It also highlights the importance of full 3D modeling to accurately predict the primary dendrite arm spacing that is significantly over-estimated by 2D simulations.
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations are reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantify dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.
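The normalization argument behind the two-color ratio can be shown with a few lines of arithmetic: if the detected signal factorizes into an unknown concentration and collection geometry times a wavelength-dependent cross-section, the ratio of signals at two wavelengths cancels the unknowns. The power-law cross-section below is a single-scattering assumption for illustration only; in dense media this factorization breaks down, which is precisely why the Monte Carlo modeling above is needed.

    # Why a two-wavelength ratio normalizes out common factors: if
    # S(lam) = C * G * sigma(lam), with C the concentration and G an unknown
    # geometry/collection factor, then S(l1)/S(l2) = sigma(l1)/sigma(l2).
    def signal(lam_nm, conc, geom, b=-4.0):
        return conc * geom * (lam_nm / 500.0) ** b   # illustrative power law

    for conc, geom in [(1.0, 1.0), (5.0, 0.3), (20.0, 0.01)]:
        r = signal(450, conc, geom) / signal(650, conc, geom)
        print(f"conc={conc:5.1f} geom={geom:5.2f} -> two-color ratio {r:.3f}")

The printed ratio is identical across rows, independent of concentration and geometry, which is the basis of the proposed data normalization.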
Surface Complexation Modeling of Eu(III) and U(VI) Interactions with Graphene Oxide.
Xie, Yu; Helvenston, Edward M; Shuller-Nickles, Lindsay C; Powell, Brian A
2016-02-16
Graphene oxide (GO) has great potential for actinide removal due to its extremely high sorption capacity, but the mechanism of sorption remains unclear. In this study, the carboxylic functional group and an unexpected sulfonate functional group on GO were characterized as the reactive surface sites and quantified via diffuse layer modeling of the GO acid/base titrations. The presence of the sulfonate functional group on GO was confirmed using elemental analysis and X-ray photoelectron spectroscopy. Batch experiments of Eu(III) and U(VI) sorption to GO as a function of pH (1-8) and of analyte concentration (10-100,000 ppb) at a constant pH ≈ 5 were conducted; the batch sorption results were modeled simultaneously using surface complexation modeling (SCM). The SCM indicated that Eu(III) and U(VI) complexation to the carboxylate functional group is the main mechanism for their sorption to GO; their complexation to the sulfonate site occurred in the lower pH range, and the complexation of Eu(III) to the sulfonate site is more significant than that of U(VI). Eu(III)- and U(VI)-facilitated GO aggregation was observed at high Eu(III) and U(VI) concentrations and may be caused by surface charge neutralization of GO after sorption.
Modelling short pulse, high intensity laser plasma interactions
NASA Astrophysics Data System (ADS)
Evans, R. G.
2006-06-01
Modelling the interaction of ultra-intense laser pulses with solid targets is made difficult by the large range of length and time scales involved in the transport of relativistic electrons. An implicit hybrid PIC-fluid model using the commercial code LSP (marketed by MRC, Albuquerque, New Mexico, USA) reveals a variety of complex phenomena which seem to be borne out in experiments and some existing theories.
Public-private partnerships for hospitals.
McKee, Martin; Edwards, Nigel; Atun, Rifat
2006-11-01
While some forms of public-private partnerships are a feature of hospital construction and operation in all countries with mixed economies, there is increasing interest in a model in which a public authority contracts with a private company to design, build and operate an entire hospital. Drawing on the experience of countries such as Australia, Spain, and the United Kingdom, this paper reviews the experience with variants of this model. Although experience is still very limited and rigorous evaluations lacking, four issues have emerged: cost, quality, flexibility and complexity. New facilities have, in general, been more expensive than they would have been if procured using traditional methods. Compared with the traditional system, new facilities are more likely to be built on time and within budget, but this seems often to be at the expense of compromises on quality. The need to minimize the risk to the parties means that it is very difficult to "future-proof" facilities in a rapidly changing world. Finally, such projects are extremely, and in some cases prohibitively, complex. While it is premature to say whether the problems experienced relate to the underlying model or to their implementation, it does seem that a public-private partnership further complicates the already difficult task of building and operating a hospital.
Surface complexation model of uranyl sorption on Georgia kaolinite
Payne, T.E.; Davis, J.A.; Lumpkin, G.R.; Chisari, R.; Waite, T.D.
2004-01-01
The adsorption of uranyl on standard Georgia kaolinites (KGa-1 and KGa-1B) was studied as a function of pH (3-10), total U (1 and 10 ??mol/l), and mass loading of clay (4 and 40 g/l). The uptake of uranyl in air-equilibrated systems increased with pH and reached a maximum in the near-neutral pH range. At higher pH values, the sorption decreased due to the presence of aqueous uranyl carbonate complexes. One kaolinite sample was examined after the uranyl uptake experiments by transmission electron microscopy (TEM), using energy dispersive X-ray spectroscopy (EDS) to determine the U content. It was found that uranium was preferentially adsorbed by Ti-rich impurity phases (predominantly anatase), which are present in the kaolinite samples. Uranyl sorption on the Georgia kaolinites was simulated with U sorption reactions on both titanol and aluminol sites, using a simple non-electrostatic surface complexation model (SCM). The relative amounts of U-binding >TiOH and >AlOH sites were estimated from the TEM/EDS results. A ternary uranyl carbonate complex on the titanol site improved the fit to the experimental data in the higher pH range. The final model contained only three optimised log K values, and was able to simulate adsorption data across a wide range of experimental conditions. The >TiOH (anatase) sites appear to play an important role in retaining U at low uranyl concentrations. As kaolinite often contains trace TiO2, its presence may need to be taken into account when modelling the results of sorption experiments with radionuclides or trace metals on kaolinite. ?? 2004 Elsevier B.V. All rights reserved.
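A minimal sketch of a one-site, non-electrostatic surface complexation calculation of the kind described is given below, assuming the single reaction >SOH + UO2(2+) <=> >SOUO2(+) + H(+) with sites in excess over uranium. The log K and site concentration are illustrative placeholders, not the optimised values from this study, and the ternary carbonate complex that suppresses sorption at high pH is deliberately omitted.

    import numpy as np

    # One-site, non-electrostatic sketch of uranyl sorption:
    #   >SOH + UO2(2+)  <=>  >SOUO2(+) + H(+)     (mass-action constant K)
    # logK and the site concentration are illustrative placeholders.
    logK = 2.0
    sites = 1e-4                      # mol/L of >SOH, assumed in excess
    pH = np.linspace(3.0, 10.0, 15)
    H = 10.0 ** (-pH)

    # With sites in excess, sorbed/dissolved ratio = K * [>SOH] / [H+].
    ratio = (10.0 ** logK) * sites / H
    fraction_sorbed = ratio / (1.0 + ratio)

    for p, f in zip(pH, fraction_sorbed):
        print(f"pH {p:4.1f}  sorbed {100 * f:5.1f} %")

This reproduces only the rising limb of the sorption edge; capturing the decrease at high pH requires adding the aqueous and ternary carbonate complexes discussed above.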
Three-Dimensional High Fidelity Progressive Failure Damage Modeling of NCF Composites
NASA Technical Reports Server (NTRS)
Aitharaju, Venkat; Aashat, Satvir; Kia, Hamid G.; Satyanarayana, Arunkumar; Bogert, Philip B.
2017-01-01
Performance prediction of off-axis laminates is of significant interest in designing composite structures for energy absorption. Phenomenological models available in most of the commercial programs, where the fiber and resin properties are smeared, are very efficient for large scale structural analysis, but lack the ability to model the complex nonlinear behavior of the resin and fail to capture the complex load transfer mechanisms between the fiber and the resin matrix. On the other hand, high fidelity mesoscale models, where the fiber tows and matrix regions are explicitly modeled, have the ability to account for the complex behavior in each of the constituents of the composite. However, creating a finite element model of a larger scale composite component could be very time consuming and computationally very expensive. In the present study, a three-dimensional mesoscale model of non-crimp composite laminates was developed for various laminate schemes. The resin material was modeled as an elastic-plastic material with nonlinear hardening. The fiber tows were modeled with an orthotropic material model with brittle failure. In parallel, new stress based failure criteria combined with several damage evolution laws for matrix stresses were proposed for a phenomenological model. The results from both the mesoscale and phenomenological models were compared with the experiments for a variety of off-axis laminates.
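As a hedged illustration of what a stress-based matrix failure criterion looks like, the snippet below evaluates Hashin-type tension and compression failure indices; the paper proposes its own criteria and damage evolution laws, and the strength values here are placeholders.

    # Hashin-type matrix failure indices as a stand-in for the stress-based
    # criteria mentioned above; strengths (MPa) are illustrative placeholders.
    Yt, Yc, S12, S23 = 50.0, 150.0, 70.0, 40.0

    def matrix_failure_index(s22, s12):
        # Failure is predicted when the returned index reaches 1.0.
        if s22 >= 0.0:                          # matrix tension mode
            return (s22 / Yt) ** 2 + (s12 / S12) ** 2
        # matrix compression mode (Hashin 1980 form)
        return ((s22 / (2.0 * S23)) ** 2
                + ((Yc / (2.0 * S23)) ** 2 - 1.0) * (s22 / Yc)
                + (s12 / S12) ** 2)

    for s22, s12 in [(30.0, 20.0), (-120.0, 25.0), (10.0, 68.0)]:
        fi = matrix_failure_index(s22, s12)
        print(f"s22={s22:7.1f}  s12={s12:5.1f}  FI={fi:.2f}")

In a progressive failure analysis, such an index would trigger a damage evolution law that degrades the matrix-dominated stiffnesses.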
Droplet combustion experiment drop tower tests using models of the space flight apparatus
NASA Technical Reports Server (NTRS)
Haggard, J. B.; Brace, M. H.; Kropp, J. L.; Dryer, F. L.
1989-01-01
The Droplet Combustion Experiment (DCE) is an experiment that is being developed to ultimately operate in the shuttle environment (middeck or Spacelab). The current experiment implementation is for use in the 2.2 or 5 sec drop towers at NASA Lewis Research Center. Initial results were reported in the 1986 symposium of this meeting. Since then significant progress was made in drop tower instrumentation. The 2.2 sec drop tower apparatus, a conceptual level model, was improved to give more reproducible performance as well as operate over a wider range of test conditions. Some very low velocity deployments of ignited droplets were observed. An engineering model was built at TRW. This model will be used in the 5 sec drop tower operation to obtain science data. In addition, it was built using the flight design except for changes to accommodate the drop tower requirements. The mechanical and electrical assemblies have the same level of complexity as they will have in flight. The model was tested for functional operation and then delivered to NASA Lewis. The model was then integrated into the 5 sec drop tower. The model is currently undergoing initial operational tests prior to starting the science tests.
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.
Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar
2015-09-04
The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
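A skeleton of what such a file looks like, written out from Python, is sketched below; the ids, the SBML model source, and the KiSAO algorithm identifier are placeholders, and the repeatedTask element illustrates the repeated/chained procedures added in Level 1 Version 2. Consult the SED-ML specification for the authoritative schema.

    # Skeleton of a SED-ML Level 1 Version 2 document, written with the
    # standard library only. All ids and the model source are placeholders.
    sedml = """<?xml version="1.0" encoding="UTF-8"?>
    <sedML xmlns="http://sed-ml.org/sed-ml/level1/version2" level="1" version="2">
      <listOfModels>
        <model id="model1" language="urn:sedml:language:sbml" source="model.xml"/>
      </listOfModels>
      <listOfSimulations>
        <uniformTimeCourse id="sim1" initialTime="0" outputStartTime="0"
                           outputEndTime="100" numberOfPoints="1000">
          <algorithm kisaoID="KISAO:0000019"/>  <!-- CVODE -->
        </uniformTimeCourse>
      </listOfSimulations>
      <listOfTasks>
        <task id="task1" modelReference="model1" simulationReference="sim1"/>
        <!-- repeatedTask, new in Level 1 Version 2 -->
        <repeatedTask id="repeat1" range="r0" resetModel="true">
          <listOfRanges>
            <uniformRange id="r0" start="0" end="10" numberOfPoints="10" type="linear"/>
          </listOfRanges>
          <listOfSubTasks>
            <subTask order="1" task="task1"/>
          </listOfSubTasks>
        </repeatedTask>
      </listOfTasks>
    </sedML>"""

    with open("experiment.sedml", "w") as fh:
        fh.write(sedml)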
The diminishing criterion model for metacognitive regulation of time investment.
Ackerman, Rakefet
2014-06-01
According to the Discrepancy Reduction Model for metacognitive regulation, people invest time in cognitive tasks in a goal-driven manner until their metacognitive judgment, either judgment of learning (JOL) or confidence, meets their preset goal. This stopping rule should lead to judgments above the goal, regardless of invested time. However, in many tasks, time is negatively correlated with JOL and confidence, with low judgments after effortful processing. This pattern has often been explained as stemming from bottom-up fluency effects on the judgments. While accepting this explanation for simple tasks, like memorizing pairs of familiar words, the proposed Diminishing Criterion Model (DCM) challenges this explanation for complex tasks, like problem solving. Under the DCM, people indeed invest effort in a goal-driven manner. However, investing more time leads to increasing compromise on the goal, resulting in negative time-judgment correlations. Experiment 1 showed that with word-pair memorization, negative correlations are found only with minimal fluency and difficulty variability, whereas in problem solving, they are found consistently. As predicted, manipulations of low incentives (Experiment 2) and time pressure (Experiment 3) in problem solving revealed greater compromise as more time was invested in a problem. Although intermediate confidence ratings rose during the solving process, the result was negative time-confidence correlations (Experiments 3, 4, and 5), and this was not eliminated by the opportunity to respond "don't know" (Experiments 4 and 5). The results suggest that negative time-judgment correlations in complex tasks stem from top-down regulatory processes with a criterion that diminishes with invested time. PsycINFO Database Record (c) 2014 APA, all rights reserved.
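The core mechanism of the DCM can be illustrated with a toy simulation: confidence accumulates noisily while the stopping criterion relaxes with time, and answering when confidence first exceeds the criterion produces the negative time-confidence correlation described above. All rates in the sketch are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy instantiation of the Diminishing Criterion Model: confidence builds
    # noisily; the stopping criterion starts high and diminishes with time.
    def solve_one(max_steps=200):
        confidence = 0.0
        for t in range(1, max_steps + 1):
            confidence = min(1.0, confidence + rng.uniform(0.0, 0.02))
            criterion = 0.95 - 0.003 * t       # criterion relaxes over time
            if confidence >= criterion:        # respond at first crossing
                return t, confidence
        return max_steps, confidence

    times, confs = zip(*(solve_one() for _ in range(500)))
    print("time-confidence correlation:", np.corrcoef(times, confs)[0, 1])

Fast accumulators cross the still-high criterion early (high confidence); slow ones cross a lowered criterion late (low confidence), so the printed correlation is negative.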
Maximum entropy perception-action space: a Bayesian model of eye movement selection
NASA Astrophysics Data System (ADS)
Colas, Francis; Bessière, Pierre; Girard, Benoît
2011-03-01
In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking into account uncertainty helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has a functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.
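A small sketch of the complex logarithmic mapping itself is given below: positions z in the right hemifield of the visual field are mapped to w = log(z + a), which magnifies the fovea and tends to even out the density of eccentricity-distributed objects. The parameter a and the object layout are illustrative assumptions.

    import numpy as np

    # Complex logarithmic retinotopic mapping w = log(z + a); a is assumed.
    a = 0.5
    rng = np.random.default_rng(2)

    # Random object positions, uniform over a right-hemifield half-disc of
    # 10 degrees eccentricity (Re(z) >= 0 keeps z + a away from zero).
    r = 10.0 * np.sqrt(rng.uniform(size=1000))
    theta = rng.uniform(-np.pi / 2.0, np.pi / 2.0, size=1000)
    z = r * np.exp(1j * theta)

    w = np.log(z + a)   # complex log mapping to a cortical-like space

    # Compare spread (a crude density proxy) before and after the mapping.
    print("visual-field std :", np.std(z.real), np.std(z.imag))
    print("mapped-space std :", np.std(w.real), np.std(w.imag))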
Tailored Codes for Small Quantum Memories
NASA Astrophysics Data System (ADS)
Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.
2017-12-01
We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison using the observed error model from a recent seven-qubit trapped ion experiment.
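A back-of-envelope analogue of noise tailoring (not the code construction used in the paper) is sketched below: under strongly biased Pauli noise, a 3-qubit phase-flip repetition code already beats an unencoded qubit, because it corrects the dominant Z errors while paying only a small penalty on the rare X errors.

    from math import comb

    # 3-qubit phase-flip repetition code under biased Pauli noise with
    # independent bit-flip (px) and phase-flip (pz) rates per qubit.
    def logical_error(px, pz, n=3):
        # Z errors are corrected unless a majority of the qubits flip.
        p_logical_z = sum(comb(n, k) * pz ** k * (1 - pz) ** (n - k)
                          for k in range(n // 2 + 1, n + 1))
        # The code gives no X protection: any single X is a logical X.
        p_logical_x = 1.0 - (1.0 - px) ** n
        return p_logical_z + p_logical_x       # small-p union bound

    px, pz = 1e-4, 1e-2                        # strongly biased noise
    print("physical error :", px + pz)
    print("encoded  error :", logical_error(px, pz))   # ~6e-4 < ~1e-2

The paper's tailored stabilizer codes exploit the same asymmetry far more efficiently, but this arithmetic already shows why matching the code to the bias pays off.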
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shimizu, Kuniyasu, E-mail: kuniyasu.shimizu@it-chiba.ac.jp; Sekikawa, Munehisa; Inaba, Naohiko
2015-02-15
Bifurcations of complex mixed-mode oscillations denoted as mixed-mode oscillation-incrementing bifurcations (MMOIBs) have frequently been observed in chemical experiments. In a previous study [K. Shimizu et al., Physica D 241, 1518 (2012)], we discovered an extremely simple dynamical circuit that exhibits MMOIBs. Our model was represented by a slow/fast Bonhoeffer-van der Pol circuit under weak periodic perturbation near a subcritical Andronov-Hopf bifurcation point. In this study, we experimentally and numerically verify that our dynamical circuit captures the essence of the underlying mechanism causing MMOIBs, and we observe MMOIBs and chaos with distinctive waveforms in real circuit experiments.
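For readers who want to experiment numerically, the sketch below integrates a slow/fast Bonhoeffer-van der Pol system with a bias placing it near the oscillation threshold and a weak periodic perturbation, then counts large-amplitude excursions as a crude mixed-mode diagnostic. The parameter values are illustrative, not those of the circuit studied in the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Slow/fast Bonhoeffer-van der Pol system under weak periodic forcing;
    # all parameter values below are illustrative assumptions.
    eps, a, b = 0.05, 0.7, 0.8
    I0 = 0.34                  # bias near the onset of oscillation
    A, omega = 0.05, 0.5       # weak periodic perturbation

    def rhs(t, s):
        x, y = s
        return [x - x ** 3 / 3.0 - y + I0 + A * np.sin(omega * t),
                eps * (x + a - b * y)]

    sol = solve_ivp(rhs, (0.0, 600.0), [0.0, 0.0], max_step=0.05)
    x = sol.y[0]
    spikes = int(np.sum((x[1:] >= 1.0) & (x[:-1] < 1.0)))
    print("large-amplitude excursions:", spikes)

Scanning the forcing amplitude or frequency and tallying small oscillations between successive large excursions is the usual way to expose the incrementing sequences described above.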
Wang, Danny J J; Jann, Kay; Fan, Chang; Qiao, Yang; Zang, Yu-Feng; Lu, Hanbing; Yang, Yihong
2018-01-01
Recently, non-linear statistical measures such as multi-scale entropy (MSE) have been introduced as indices of the complexity of electrophysiology and fMRI time-series across multiple time scales. In this work, we investigated the neurophysiological underpinnings of complexity (MSE) of electrophysiology and fMRI signals and their relations to functional connectivity (FC). MSE and FC analyses were performed on simulated data using a neural-mass-model-based brain network model with the Brain Dynamics Toolbox, on animal models with concurrent recording of fMRI and electrophysiology in conjunction with pharmacological manipulations, and on resting-state fMRI data from the Human Connectome Project. Our results show that the complexity of regional electrophysiology and fMRI signals is positively correlated with network FC. The associations between MSE and FC are dependent on the temporal scales or frequencies, with higher associations between MSE and FC at lower temporal frequencies. Our results from theoretical modeling, animal experiments and human fMRI indicate that (1) regional neural complexity and network FC may be two related aspects of the brain's information processing: the more complex the regional neural activity, the higher the FC this region has with other brain regions; and (2) MSE at high and low frequencies may represent local and distributed information processing across brain regions. Based on the literature and our data, we propose that the complexity of regional neural signals may serve as an index of the brain's capacity for information processing: increased complexity may indicate greater transition or exploration between different states of brain networks, and thereby a greater propensity for information processing.
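The MSE computation itself is compact: coarse-grain the series at each scale by averaging non-overlapping windows, then compute sample entropy of the coarse-grained series. The sketch below assumes the common choices m = 2 and r = 0.15 times the standard deviation of the original series.

    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        # SampEn(m, r) of a 1-D series; r is an absolute tolerance.
        def count(mm):
            t = np.lib.stride_tricks.sliding_window_view(x, mm)
            c = 0
            for i in range(len(t) - 1):
                d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)  # Chebyshev
                c += np.sum(d <= r)
            return c
        b, a = count(m), count(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def multiscale_entropy(x, scales=range(1, 6), m=2):
        r = 0.15 * np.std(x)            # tolerance fixed at scale 1
        out = []
        for tau in scales:
            n = (len(x) // tau) * tau   # coarse-grain: window averages
            cg = x[:n].reshape(-1, tau).mean(axis=1)
            out.append(sample_entropy(cg, m=m, r=r))
        return out

    rng = np.random.default_rng(3)
    white = rng.normal(size=3000)
    print(multiscale_entropy(white))    # falls with scale for white noise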
Disulfide Trapping for Modeling and Structure Determination of Receptor: Chemokine Complexes.
Kufareva, Irina; Gustavsson, Martin; Holden, Lauren G; Qin, Ling; Zheng, Yi; Handel, Tracy M
2016-01-01
Despite the recent breakthrough advances in GPCR crystallography, structure determination of protein-protein complexes involving chemokine receptors and their endogenous chemokine ligands remains challenging. Here, we describe disulfide trapping, a methodology for generating irreversible covalent binary protein complexes from unbound protein partners by introducing two cysteine residues, one per interaction partner, at selected positions within their interaction interface. Disulfide trapping can serve at least two distinct purposes: (i) stabilization of the complex to assist structural studies and/or (ii) determination of pairwise residue proximities to guide molecular modeling. Methods for characterization of disulfide-trapped complexes are described and evaluated in terms of throughput, sensitivity, and specificity toward the most energetically favorable crosslinks. Due to the abundance of native disulfide bonds at receptor:chemokine interfaces, disulfide trapping of their complexes can be associated with intramolecular disulfide shuffling and result in misfolding of the component proteins; because of this, evidence from several experiments is typically needed to firmly establish a positive disulfide crosslink. An optimal pipeline that maximizes throughput and minimizes time and costs by early triage of unsuccessful candidate constructs is proposed. © 2016 Elsevier Inc. All rights reserved.
Sampling from complex networks using distributed learning automata
NASA Astrophysics Data System (ADS)
Rezvanian, Alireza; Rahmati, Mohammad; Meybodi, Mohammad Reza
2014-02-01
A complex network provides a framework for modeling many real-world phenomena in the form of a network. In general, a complex network is considered as a graph of real-world phenomena such as biological networks, ecological networks, technological networks, information networks and particularly social networks. Recently, major studies have been reported on the characterization of social networks, owing to a growing trend in the analysis of online social networks as dynamic complex large-scale graphs. Because real networks are large and access to them is limited, the network model is characterized from an appropriate part of the network obtained by sampling. In this paper, a new sampling algorithm based on distributed learning automata is proposed for sampling from complex networks. In the proposed algorithm, a set of distributed learning automata cooperate with each other in order to take appropriate samples from the given network. To investigate the performance of the proposed algorithm, several simulation experiments are conducted on well-known complex networks. Experimental results are compared with several sampling methods in terms of different measures. The experimental results demonstrate the superiority of the proposed algorithm over the others.
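A deliberately simplified single-walker version of the idea (the paper uses a distributed set of cooperating automata) is sketched below: each node keeps action probabilities over its neighbours, and choices that discover unvisited nodes are reinforced with a linear reward-inaction update. The graph type, step budget, and reward step size are assumptions.

    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(4)
    G = nx.barabasi_albert_graph(500, 3, seed=4)   # illustrative graph
    alpha = 0.2                                    # reward step size

    neighbors = {u: list(G.neighbors(u)) for u in G}
    prob = {u: np.ones(len(neighbors[u])) / len(neighbors[u]) for u in G}

    current, sample = 0, {0}
    for _ in range(1500):
        ns, p = neighbors[current], prob[current]
        k = rng.choice(len(ns), p=p)
        nxt = ns[k]
        if nxt not in sample:        # reward: action found an unvisited node
            p *= (1.0 - alpha)       # linear reward-inaction update
            p[k] += alpha
            sample.add(nxt)
        current = nxt

    print(f"sampled {len(sample)} of {G.number_of_nodes()} nodes")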
Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2015-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn can provide an informative starting point for data assimilation.
Koithan, Mary; Bell, Iris R; Niemeyer, Kathryn; Pincus, David
2012-01-01
Whole systems complementary and alternative medicine (WS-CAM) approaches share a basic worldview that embraces interconnectedness; emergent, non-linear outcomes to treatment that include both local and global changes in the human condition; a contextual view of human beings that are inseparable from and responsive to their environments; and interventions that are complex, synergistic, and interdependent. These fundamental beliefs and principles run counter to the assumptions of reductionism and conventional biomedical research methods that presuppose unidimensional simple causes and thus dismantle and individually test various interventions that comprise only single aspects of the WS-CAM system. This paper will demonstrate the superior fit and practical advantages of using complex adaptive systems (CAS) and related modeling approaches to develop the scientific basis for WS-CAM. Furthermore, the details of these CAS models will be used to provide working hypotheses to explain clinical phenomena such as (a) persistence of changes for weeks to months between treatments and/or after cessation of treatment, (b) nonlocal and whole systems changes resulting from therapy, (c) Hering's law, and (d) healing crises. Finally, complex systems science will be used to offer an alternative perspective on cause, beyond the simple reductionism of mainstream mechanistic ontology and more parsimonious than the historical vitalism of WS-CAM. Rather, complex systems science provides a scientifically rigorous, yet essentially holistic ontological perspective with which to conceptualize and empirically explore the development of disease and illness experiences, as well as experiences of healing and wellness. Copyright © 2012 S. Karger AG, Basel.
A visual model for object detection based on active contours and level-set method.
Satoh, Shunji
2006-09-01
A visual model for object detection is proposed. In order to make the detection ability comparable with existing technical methods for object detection, an evolution equation of neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. One drawback involved with initial values of active contours is alleviated by introducing and formulating convexity, which is a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is introduced into the proposed model. Other simulations show that the visual properties of the model are consistent with the results of psychological experiments that disclose the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, which is a characteristic observed in human visual perception.
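The computational principle the model is derived from can be sketched in a few lines: evolve a level-set function with an inward normal speed that is slowed by an edge-stopping function computed from the image. The synthetic image and constants below are assumptions, and a full geodesic active contour would add curvature and edge-advection terms to lock onto boundaries.

    import numpy as np

    # Minimal level-set evolution d(phi)/dt = g * nu * |grad phi|:
    # a contour shrinks toward bright objects, slowed near edges.
    n = 100
    img = np.zeros((n, n))
    img[30:70, 20:50] = 1.0
    img[40:80, 60:85] = 1.0                        # two synthetic objects

    Y, X = np.mgrid[0:n, 0:n]
    phi = np.sqrt((X - 50.0) ** 2 + (Y - 50.0) ** 2) - 45.0  # initial circle

    gy, gx = np.gradient(img)
    g = 1.0 / (1.0 + 100.0 * (gx ** 2 + gy ** 2))  # small at object edges

    dt, nu = 0.4, 1.0                              # nu > 0 shrinks {phi < 0}
    for _ in range(300):
        py, px = np.gradient(phi)
        phi += dt * g * nu * np.sqrt(px ** 2 + py ** 2)

    print("pixels inside the final contour:", int((phi < 0).sum()))

The tolerance to initialisation that the paper attributes to its convexity formulation addresses exactly the initial-value sensitivity visible in such bare-bones evolutions.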
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
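A minimal sketch of the approach, with simulated z-statistics in place of real data: fit a two-component mixture directly to the statistics by EM (null density fixed at N(0,1)), read off posterior null probabilities, and reject the largest set whose running mean posterior null probability stays below the target Bayesian FDR. The mixture form and all constants are illustrative assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Simulated z-statistics: 90% null N(0,1), 10% alternative N(3,1).
    z = np.concatenate([rng.normal(0, 1, 9000), rng.normal(3, 1, 1000)])

    # EM for the mixing weight and the alternative mean.
    pi0, mu1 = 0.8, 2.0
    for _ in range(200):
        f0 = pi0 * stats.norm.pdf(z, 0.0, 1.0)
        f1 = (1.0 - pi0) * stats.norm.pdf(z, mu1, 1.0)
        w = f1 / (f0 + f1)                 # posterior alternative probability
        pi0 = 1.0 - w.mean()
        mu1 = np.sum(w * z) / np.sum(w)

    post_null = 1.0 - w
    # Bayesian FDR: keep the rejection set's mean posterior null prob <= 0.05.
    order = np.argsort(post_null)
    fdr = np.cumsum(post_null[order]) / np.arange(1, z.size + 1)
    n_reject = int(np.sum(fdr <= 0.05))
    print(f"pi0 ~ {pi0:.3f}, mu1 ~ {mu1:.2f}, rejections at FDR 0.05: {n_reject}")

Plotting the fitted mixture density over a histogram of the statistics is the simple graphical model-assessment step the abstract alludes to.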
Pore-scale and continuum simulations of solute transport micromodel benchmark experiments
Oostrom, M.; Mehmani, Y.; Romero-Gomez, P.; ...
2014-06-18
Four sets of nonreactive solute transport experiments were conducted with micromodels. Each set contained three experiments and varied a single parameter: flow velocity, grain diameter, pore-aspect ratio, or flow-focusing heterogeneity. The data sets were offered to pore-scale modeling groups to test their numerical simulators. Each set consisted of two learning experiments, for which our results were made available, and one challenge experiment, for which only the experimental description and base input parameters were provided. The experimental results showed a nonlinear dependence of the transverse dispersion coefficient on the Peclet number, a negligible effect of the pore-aspect ratio on transverse mixing, and considerably enhanced mixing due to flow focusing. Five pore-scale models and one continuum-scale model were used to simulate the experiments. Of the pore-scale models, two used a pore-network (PN) method, two others are based on a lattice Boltzmann (LB) approach, and one used a computational fluid dynamics (CFD) technique. Furthermore, the PN models used the learning experiments to modify the standard perfect-mixing approach in pore bodies into approaches that simulate the observed incomplete mixing. The LB and CFD models used the learning experiments to appropriately discretize the spatial grid representations. For the continuum modeling, the required dispersivity input values were estimated based on published nonlinear relations between transverse dispersion coefficients and Peclet number. Comparisons between experimental and numerical results for the four challenge experiments show that the pore-scale models were all able to satisfactorily simulate the experiments. The continuum model underestimated the required dispersivity values, resulting in reduced dispersion. The PN models were able to complete the simulations in a few minutes, whereas the direct models, which account for the micromodel geometry and underlying flow and transport physics, needed up to several days on supercomputers to resolve the more complex problems.
NASA Astrophysics Data System (ADS)
Wyche, K. P.; Monks, P. S.; Smallbone, K. L.; Hamilton, J. F.; Alfarra, M. R.; Rickard, A. R.; McFiggans, G. B.; Jenkin, M. E.; Bloss, W. J.; Ryan, A. C.; Hewitt, C. N.; MacKenzie, A. R.
2015-07-01
Highly non-linear dynamical systems, such as those found in atmospheric chemistry, necessitate hierarchical approaches to both experiment and modelling in order to ultimately identify and achieve fundamental process-understanding in the full open system. Atmospheric simulation chambers comprise an intermediate in complexity, between a classical laboratory experiment and the full, ambient system. As such, they can generate large volumes of difficult-to-interpret data. Here we describe and implement a chemometric dimension reduction methodology for the deconvolution and interpretation of complex gas- and particle-phase composition spectra. The methodology comprises principal component analysis (PCA), hierarchical cluster analysis (HCA) and partial least-squares discriminant analysis (PLS-DA). These methods are, for the first time, applied to simultaneous gas- and particle-phase composition data obtained from a comprehensive series of environmental simulation chamber experiments focused on biogenic volatile organic compound (BVOC) photooxidation and associated secondary organic aerosol (SOA) formation. We primarily investigated the biogenic SOA precursors isoprene, α-pinene, limonene, myrcene, linalool and β-caryophyllene. The chemometric analysis is used to classify the oxidation systems and resultant SOA according to the controlling chemistry and the products formed. Results show that "model" biogenic oxidative systems can be successfully separated and classified according to their oxidation products. Furthermore, a holistic view of results obtained across both the gas- and particle-phases shows the different SOA formation chemistry, initiating in the gas-phase, proceeding to govern the differences between the various BVOC SOA compositions. The results obtained are used to describe the particle composition in the context of the oxidised gas-phase matrix. An extension of the technique, which incorporates into the statistical models data from anthropogenic (i.e. toluene) oxidation and "more realistic" plant mesocosm systems, demonstrates that such an ensemble of chemometric mapping has the potential to be used for the classification of more complex spectra of unknown origin. More specifically, the addition of mesocosm data from fig and birch tree experiments shows that isoprene and monoterpene emitting sources, respectively, can be mapped onto the statistical model structure and their positional vectors can provide insight into their biological sources and controlling oxidative chemistry. The potential to extend the methodology to the analysis of ambient air is discussed using results obtained from a zero-dimensional box model incorporating mechanistic data obtained from the Master Chemical Mechanism (MCMv3.2). Such an extension to analysing ambient air would prove a powerful asset in assisting with the identification of SOA sources and the elucidation of the underlying chemical mechanisms involved.
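The PCA, clustering, and PLS-DA chain maps directly onto standard library calls; the sketch below runs it on a synthetic stand-in for a chamber composition matrix (experiments by m/z channels) with three planted classes. Everything about the data is assumed, and PLS-DA is implemented, as is common, as PLS regression against one-hot class membership.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import AgglomerativeClustering
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(6)

    # Synthetic stand-in: 60 experiments x 200 m/z channels, three classes
    # distinguished by characteristic peak regions.
    X = rng.normal(0.0, 0.1, (60, 200))
    labels = np.repeat([0, 1, 2], 20)
    for cls, peak in zip([0, 1, 2], [30, 90, 150]):
        X[labels == cls, peak:peak + 10] += 1.0

    scores = PCA(n_components=5).fit_transform(X)             # 1) PCA
    clusters = AgglomerativeClustering(n_clusters=3).fit_predict(scores)  # 2) HCA

    Y = np.eye(3)[labels]                                     # 3) PLS-DA
    pls = PLSRegression(n_components=2).fit(X, Y)
    pred = pls.predict(X).argmax(axis=1)
    print("PLS-DA training accuracy:", np.mean(pred == labels))

Mapping a new spectrum onto the fitted scores, as done for the mesocosm data above, amounts to applying the same trained transforms to the new row.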
Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication
NASA Astrophysics Data System (ADS)
Thompson, Kimberly M.
Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present the insights of efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.
ERIC Educational Resources Information Center
School Science Review, 1984
1984-01-01
Presents an experiment which links mass spectrometry to gas chromatography. Also presents a simulation of iron extraction using a ZX81 computer and discussions of Fehling versus Benedict's solutions, transition metal ammine complexes, electrochemical and other chemical series, and a simple model of dynamic equilibria. (JN)
Host-guest complex formation in cyclotrikis-(1-->6)-[alpha-D-glucopyranosyl-(1-->4)-beta-D-glucopyranosyl].
Cescutti, P; Utille, J P; Rizzo, R
2000-11-17
The possibility that cyclotrikis-(1-->6)-[alpha-D-glucopyranosyl-(1-->4)-beta-D-glucopyranosyl] (CGM6) forms inclusion complexes, like cycloamyloses (cyclodextrins), was investigated by means of electrospray mass spectrometry and fluorescence spectroscopy. The complexing ability of both 1-anilinonaphthalene-8-sulfonate (ANS) and 2-p-toluidinylnaphthalene-6-sulfonate (TNS), which were already used with cyclodextrins, was investigated. The former showed very little or no tendency to be complexed by CGM6, while the latter produced detectable adducts with CGM6. Fixed 90 degree angle light scattering experiments supported the findings obtained by molecular modelling calculations, which indicated a polar character for the CGM6 internal cavity. CGM6-TNS complexes were probably formed through interaction of the polar regions of the two molecules.
Complex Geometric Models of Diffusion and Relaxation in Healthy and Damaged White Matter
Farrell, Jonathan A.D.; Smith, Seth A.; Reich, Daniel S.; Calabresi, Peter A.; van Zijl, Peter C.M.
2010-01-01
Which aspects of tissue microstructure affect diffusion weighted MRI signals? Prior models, many of which use Monte-Carlo simulations, have focused on relatively simple models of the cellular microenvironment and have not considered important anatomic details. With the advent of higher-order analysis models for diffusion imaging, such as high-angular-resolution diffusion imaging (HARDI), more realistic models are necessary. This paper presents and evaluates the reproducibility of simulations of diffusion in complex geometries. Our framework is quantitative, does not require specialized hardware, is easily implemented with little programming experience, and is freely available as open-source software. Models may include compartments with different diffusivities, permeabilities, and T2 time constants using both parametric (e.g., spheres and cylinders) and arbitrary (e.g., mesh-based) geometries. Three-dimensional diffusion displacement-probability functions are mapped with high reproducibility, and thus can be readily used to assess reproducibility of diffusion-derived contrasts. PMID:19739233
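The basic ingredient of such simulations is a Monte Carlo random walk with geometric constraints. The sketch below confines walkers to an impermeable sphere by rejecting steps that cross the wall and compares the restricted root-mean-square displacement to the free-diffusion value; the radius, diffusivity, and step counts are illustrative, and membrane permeability and T2 decay are omitted.

    import numpy as np

    rng = np.random.default_rng(8)

    # Monte Carlo random walk of spins inside an impermeable sphere.
    R, D, dt = 5e-6, 2e-9, 1e-5          # m, m^2/s, s (illustrative SI values)
    nsteps, nspins = 1000, 5000
    sigma = np.sqrt(2.0 * D * dt)        # per-axis Gaussian step size

    pos = np.zeros((nspins, 3))
    for _ in range(nsteps):
        trial = pos + rng.normal(0.0, sigma, pos.shape)
        outside = np.linalg.norm(trial, axis=1) > R
        trial[outside] = pos[outside]    # reject steps crossing the wall
        pos = trial

    rms = np.sqrt((pos ** 2).sum(axis=1).mean())
    print("restricted rms displacement  :", rms)
    print("free-diffusion rms would be  :", np.sqrt(6 * D * dt * nsteps))

Histogramming one displacement component gives the displacement-probability profile whose reproducibility the paper evaluates.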
Adapting APSIM to model the physiology and genetics of complex adaptive traits in field crops.
Hammer, Graeme L; van Oosterom, Erik; McLean, Greg; Chapman, Scott C; Broad, Ian; Harland, Peter; Muchow, Russell C
2010-05-01
Progress in molecular plant breeding is limited by the ability to predict plant phenotype based on its genotype, especially for complex adaptive traits. Suitably constructed crop growth and development models have the potential to bridge this predictability gap. A generic cereal crop growth and development model is outlined here. It is designed to exhibit reliable predictive skill at the crop level while also introducing sufficient physiological rigour for complex phenotypic responses to become emergent properties of the model dynamics. The approach quantifies capture and use of radiation, water, and nitrogen within a framework that predicts the realized growth of major organs based on their potential and whether the supply of carbohydrate and nitrogen can satisfy that potential. The model builds on existing approaches within the APSIM software platform. Experiments on diverse genotypes of sorghum that underpin the development and testing of the adapted crop model are detailed. Genotypes differing in height were found to differ in biomass partitioning among organs and a tall hybrid had significantly increased radiation use efficiency: a novel finding in sorghum. Introducing these genetic effects associated with plant height into the model generated emergent simulated phenotypic differences in green leaf area retention during grain filling via effects associated with nitrogen dynamics. The relevance to plant breeding of this capability in complex trait dissection and simulation is discussed.
Inverse estimation of parameters for an estuarine eutrophication model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, J.; Kuo, A.Y.
1996-11-01
An inverse model of an estuarine eutrophication model with eight state variables is developed. It provides a framework to estimate parameter values of the eutrophication model by assimilation of concentration data of these state variables. The inverse model, which uses the variational technique in conjunction with a vertical two-dimensional eutrophication model, is general enough to be applicable to aid model calibration. The formulation is illustrated by conducting a series of numerical experiments for the tidal Rappahannock River, a western shore tributary of the Chesapeake Bay. The numerical experiments of short-period model simulations with different hypothetical data sets and long-period model simulations with limited hypothetical data sets demonstrated that the inverse model can be satisfactorily used to estimate parameter values of the eutrophication model. The experiments also showed that the inverse model is useful to address some important questions, such as uniqueness of the parameter estimation and data requirements for model calibration. Because of the complexity of the eutrophication system, degradation of the speed of convergence may occur. Two major factors causing this degradation are cross effects among parameters and the multiple scales involved in the parameter system.
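A toy analogue of the assimilation idea, with the vertical two-dimensional, eight-variable model replaced by a one-box decay ODE and the variational (adjoint) machinery replaced by a derivative-free minimizer, is sketched below. All parameter values and the noise level are assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import minimize

    # Fit the decay (k) and source (s) rates of a one-box ODE to noisy
    # "observations" by minimizing the least-squares misfit.
    t_obs = np.linspace(0.0, 10.0, 21)
    k_true, s_true = 0.3, 1.2

    def run(k, s):
        sol = solve_ivp(lambda t, c: s - k * c, (0.0, 10.0), [5.0],
                        t_eval=t_obs, rtol=1e-8)
        return sol.y[0]

    rng = np.random.default_rng(7)
    obs = run(k_true, s_true) + rng.normal(0.0, 0.05, t_obs.size)

    cost = lambda p: np.sum((run(p[0], p[1]) - obs) ** 2)
    fit = minimize(cost, x0=[0.1, 0.5], method="Nelder-Mead")
    print("estimated k, s:", fit.x)      # should be close to (0.3, 1.2)

Repeating the fit from different starting points and with sparser t_obs mimics, in miniature, the uniqueness and data-requirement questions examined in the paper.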
Rill erosion in natural and disturbed forests: 2. Modeling approaches
J. W. Wagenbrenner; P. R. Robichaud; W. J. Elliot
2010-01-01
As forest management scenarios become more complex, the ability to more accurately predict erosion from those scenarios becomes more important. In this second part of a two-part study we report model parameters based on 66 simulated runoff experiments in two disturbed forests in the northwestern U.S. The 5 disturbance classes were natural, 10-month old and 2-week old...
1999-09-30
... of a linear acoustic field from a bubble, collection of bubbles, or other targets embedded in a fluid-saturated sediment are not well known. ... The transition matrix scattering formalism was used to develop the scattered acoustic field(s) such that appropriate ... sediment increases from a fluid model (simplest) to a fluid-saturated poroelastic model (most complex). Laboratory experiments in carefully quantified ... saturated poroelastic medium.
Experimental econophysics: Complexity, self-organization, and emergent properties
NASA Astrophysics Data System (ADS)
Huang, J. P.
2015-03-01
Experimental econophysics is concerned with statistical physics of humans in the laboratory, and it is based on controlled human experiments developed by physicists to study some problems related to economics or finance. It relies on controlled human experiments in the laboratory together with agent-based modeling (for computer simulations and/or analytical theory), with an attempt to reveal the general cause-effect relationship between specific conditions and emergent properties of real economic/financial markets (a kind of complex adaptive systems). Here I review the latest progress in the field, namely, stylized facts, herd behavior, contrarian behavior, spontaneous cooperation, partial information, and risk management. Also, I highlight the connections between such progress and other topics of traditional statistical physics. The main theme of the review is to show diverse emergent properties of the laboratory markets, originating from self-organization due to the nonlinear interactions among heterogeneous humans or agents (complexity).
A calcium-driven mechanochemical model for prediction of force generation in smooth muscle.
Murtada, Sae-Il; Kroon, Martin; Holzapfel, Gerhard A
2010-12-01
A new model for the mechanochemical response of smooth muscle is presented. The focus is on the response of the actin-myosin complex and on the related generation of force (or stress). The chemical (kinetic) model describes the cross-bridge interactions with the thin filament in which the calcium-dependent myosin phosphorylation is the only regulatory mechanism. The new mechanical model is based on Hill's three-component model and it includes one internal state variable that describes the contraction/relaxation of the contractile units. It is characterized by a strain-energy function and an evolution law incorporating only a few material parameters with clear physical meaning. The proposed model satisfies the second law of thermodynamics. The results of the combined coupled model are broadly consistent with isometric and isotonic experiments on smooth muscle tissue. The simulations suggest that the matrix in which the actin-myosin complex is embedded does have a viscous property. It is straightforward for implementation into a finite element program in order to solve more complex boundary-value problems such as the control of short-term changes in lumen diameter of arteries due to mechanochemical signals.
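The calcium-regulated kinetic layer described here is commonly formulated as a four-state cross-bridge scheme of the Hai-Murphy type, which the sketch below integrates for rising calcium levels (entering through the phosphorylation rate k1). The rate constants are illustrative, not the values used in the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Four-state Hai-Murphy-type scheme: M <-> Mp <-> AMp <-> AM -> M,
    # with the phosphorylation rate k1 set by [Ca2+]. Rates (1/s) are
    # illustrative placeholders.
    k2, k3, k4, k5, k7 = 0.5, 0.4, 0.1, 0.5, 0.01

    def rhs(t, y, k1):
        M, Mp, AMp, AM = y
        k6 = k1                              # usual Hai-Murphy assumption
        return [-k1 * M + k2 * Mp + k7 * AM,
                k1 * M - (k2 + k3) * Mp + k4 * AMp,
                k3 * Mp - (k4 + k5) * AMp + k6 * AM,
                k5 * AMp - (k6 + k7) * AM]

    for k1 in (0.1, 0.5, 1.0):               # rising calcium levels
        sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.0, 0.0, 0.0], args=(k1,))
        M, Mp, AMp, AM = sol.y[:, -1]
        print(f"k1={k1:3.1f}: force-bearing fraction (AMp+AM) = {AMp + AM:.2f}")

The attached fractions AMp + AM are what a mechanical layer such as the Hill-type model described above converts into stress.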
Complex molecular assemblies at hand via interactive simulations.
Delalande, Olivier; Férey, Nicolas; Grasseau, Gilles; Baaden, Marc
2009-11-30
Studying complex molecular assemblies interactively is becoming an increasingly appealing approach to molecular modeling. Here we focus on interactive molecular dynamics (IMD) as a textbook example for interactive simulation methods. Such simulations can be useful in exploring and generating hypotheses about the structural and mechanical aspects of biomolecular interactions. For the first time, we carry out low-resolution coarse-grain IMD simulations. Such simplified modeling methods currently appear to be more suitable for interactive experiments and represent a well-balanced compromise between an important gain in computational speed versus a moderate loss in modeling accuracy compared to higher resolution all-atom simulations. This is particularly useful for initial exploration and hypothesis development for rare molecular interaction events. We evaluate which applications are currently feasible using molecular assemblies from 1900 to over 300,000 particles. Three biochemical systems are discussed: the guanylate kinase (GK) enzyme, the outer membrane protease T and the soluble N-ethylmaleimide-sensitive factor attachment protein receptors complex involved in membrane fusion. We induce large conformational changes, carry out interactive docking experiments, probe lipid-protein interactions and are able to sense the mechanical properties of a molecular model. Furthermore, such interactive simulations facilitate exploration of modeling parameters for method improvement. For the purpose of these simulations, we have developed a freely available software library called MDDriver. It uses the IMD protocol from NAMD and facilitates the implementation and application of interactive simulations. With MDDriver it becomes very easy to render any particle-based molecular simulation engine interactive. Here we use its implementation in the Gromacs software as an example. Copyright 2009 Wiley Periodicals, Inc.
London, Sarah E
2017-11-20
Songbirds famously learn their vocalizations. Some species can learn continuously, others seasonally, and still others just once. The zebra finch (Taeniopygia guttata) learns to sing during a single developmental "Critical Period," a restricted phase during which a specific experience has profound and permanent effects on brain function and behavioral patterns. The zebra finch can therefore provide fundamental insight into features that promote and limit the ability to acquire complex learned behaviors. For example, what properties permit the brain to come "on-line" for learning? How does experience become encoded to prevent future learning? What features define the brain in receptive compared to closed learning states? This piece will focus on epigenomic, genomic, and molecular levels of analysis that operate on the timescales of development and complex behavioral learning. Existing data will be discussed as they relate to Critical Period learning, and strategies for future studies to more directly address these questions will be considered. Birdsong learning is a powerful model for advancing knowledge of the biological intersections of maturation and experience. Lessons from its study not only have implications for understanding developmental song learning, but also broader questions of learning potential and the enduring effects of early life experience on neural systems and behavior. Copyright © 2017. Published by Elsevier B.V.
Lux, Slawomir A.; Wnuk, Andrzej; Vogt, Heidrun; Belien, Tim; Spornberger, Andreas; Studnicki, Marcin
2016-01-01
The paper reports application of a Markov-like stochastic process agent-based model and a “virtual farm” concept for enhancement of site-specific Integrated Pest Management. Conceptually, the model represents a “bottom-up ethological” approach and emulates behavior of the “primary IPM actors” (large cohorts of individual insects) within seasonally changing mosaics of a spatiotemporally complex farming landscape, under the challenge of the local IPM actions. Algorithms of the proprietary PESTonFARM model were adjusted to reflect behavior and ecology of R. cerasi. Model parametrization was based on compiled published information about R. cerasi and the results of auxiliary on-farm experiments. The experiments were conducted on sweet cherry farms located in Austria, Germany, and Belgium. For each farm, a customized model-module was prepared, reflecting its spatiotemporal features. Historical data about pest monitoring, IPM treatments and fruit infestation were used to specify the model assumptions and calibrate it further. Finally, for each of the farms, virtual IPM experiments were simulated and the model-generated results were compared with the results of the real experiments conducted on the same farms. Implications of the findings for broader applicability of the model and the “virtual farm” approach are discussed. PMID:27602000
3-D Modeling of a Nearshore Dye Release
NASA Astrophysics Data System (ADS)
Maxwell, A. R.; Hibler, L. F.; Miller, L. M.
2006-12-01
The usage of computer modeling software in predicting the behavior of a plume discharged into deep water is well established. Nearfield plume spreading in coastal areas with complex bathymetry is less commonly studied; in addition to geometry, some of the difficulties of this environment include: tidal exchange, temperature, and salinity gradients. Although some researchers have applied complex hydrodynamic models to this problem, nearfield regions are typically modeled by calibration of an empirical or expert system model. In the present study, the 3D hydrodynamic model Delft3D-FLOW was used to predict the advective transport from a point release in Sequim Bay, Washington. A nested model approach was used, wherein a coarse model using a mesh extending to nearby tide gages (cell sizes up to 1 km) was run over several tidal cycles in order to provide boundary conditions to a smaller area. The nested mesh (cell sizes up to 30 m) was forced on two open boundaries using the water surface elevation derived from the coarse model. Initial experiments with the uncalibrated model were conducted in order to predict plume propagation based on the best available field data. Field experiments were subsequently carried out by releasing rhodamine dye into the bay at near-peak flood tidal current and near high slack tidal conditions. Surface and submerged releases were carried out from an anchored vessel. Concurrently collected data from the experiment include temperature, salinity, dye concentration, and hyperspectral imagery, collected from boats and aircraft. A REMUS autonomous underwater vehicle was used to measure current velocity and dye concentration at varying depths, as well as to acquire additional bathymetric information. Preliminary results indicate that the 3D hydrodynamic model offers a reasonable prediction of plume propagation speed and shape. A sensitivity analysis is underway to determine the significant factors in effectively using the model as a predictive tool for plume tracking in data-limited environments. The Delft-PART stochastic particle transport model is also being examined to determine its utility for the present study.
Local Climate Changes Forced by Changes in Land Use and topography in the Aburrá Valley, Colombia.
NASA Astrophysics Data System (ADS)
Zapata Henao, M. Z.; Hoyos Ortiz, C. D.
2017-12-01
One of the challenges in numerical weather modeling is the adequate representation of soil-vegetation-atmosphere interactions at different spatial scales, including scenarios with heterogeneous land cover and complex mountainous terrain. This interaction determines the energy, mass and momentum exchange at the surface and can affect different variables including precipitation, temperature and wind. In order to quantify the long-term climate impact of changes in local land use and to assess the role of topography, two numerical experiments were examined. The first experiment assesses the continuous growth of urban areas within the Aburrá Valley, a complex-terrain region located in the Colombian Andes. The Weather Research and Forecasting (WRF) model is used as the basis of the experiment. The basic setup involves two nested domains, one representing the continental scale (18 km) and the other the regional scale (2 km). The second experiment applies a drastic topography modification, changing the valley configuration to a plateau. The control run for both experiments corresponds to a climatological scenario, and in both experiments the boundary conditions correspond to the climatological continental-domain output. Surface temperature, surface winds and precipitation are used as the main variables to compare both experiments relative to the control run. The results of the first experiment show a strong relationship between land cover and these variables, especially surface temperature and wind speed, due to the strong forcing land cover imposes on albedo, heat capacity and surface roughness. The second experiment removes the spatial variability of the winds related to hill slopes; wind direction and magnitude are then modulated only by the trade winds and the roughness of the land cover.
Envisioning migration: Mathematics in both experimental analysis and modeling of cell behavior
Zhang, Elizabeth R.; Wu, Lani F.; Altschuler, Steven J.
2013-01-01
The complex nature of cell migration highlights the power and challenges of applying mathematics to biological studies. Mathematics may be used to create model equations that recapitulate migration, which can predict phenomena not easily uncovered by experiments or intuition alone. Alternatively, mathematics may be applied to interpreting complex data sets with better resolution—potentially empowering scientists to discern subtle patterns amid the noise and heterogeneity typical of migrating cells. Iteration between these two methods is necessary in order to reveal connections within the cell migration signaling network, as well as to understand the behavior that arises from those connections. Here, we review recent quantitative analysis and mathematical modeling approaches to the cell migration problem. PMID:23660413
Reduction of a linear complex model for respiratory system during Airflow Interruption.
Jablonski, Ireneusz; Mroczka, Janusz
2010-01-01
The paper presents a methodology for reducing a complex model to a simpler version - an identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of the interrupter experiment. The final result - the reduced analog for the interrupter technique - is especially worthy of notice, as it fills a major gap in occlusional measurements, which typically use simple, one- or two-element physical representations. The proposed reduced electrical circuit, a structural combination of resistive, inertial and elastic properties, can be regarded as a candidate for reliable reconstruction and quantification (in the time and frequency domains) of the dynamical behavior of the respiratory system in response to a quasi-step excitation by valve closure.
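A minimal sketch of the kind of reduced analog described, assuming a series resistance-inertance-compliance (R-I-C) circuit driven by a smoothed pressure step at valve closure; the parameter values are textbook orders of magnitude, not the authors' fitted circuit.

```python
# Response of a series R-I-C analog of the respiratory system to a
# quasi-step pressure change, as at interrupter valve closure.
# Parameters are illustrative, not fitted values.
import numpy as np
from scipy.integrate import solve_ivp

R = 3.0      # airway resistance, cmH2O.s/L
I = 0.01     # inertance,        cmH2O.s^2/L
C = 0.1      # compliance,       L/cmH2O

def p_step(t, p0=5.0, tau=0.005):
    """Quasi-step pressure change at valve closure (smoothed)."""
    return p0 * (1.0 - np.exp(-t / tau))

def rhs(t, y):
    v, q = y                      # v: volume above rest, q: flow
    dq = (p_step(t) - R * q - v / C) / I
    return [q, dq]

sol = solve_ivp(rhs, (0.0, 0.5), [0.0, 0.0], max_step=1e-3)
# sol.y[1] is the flow transient whose damped oscillation encodes R, I
# and C - the quantities an identifiable inverse model would estimate.
```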
Envisioning migration: mathematics in both experimental analysis and modeling of cell behavior.
Zhang, Elizabeth R; Wu, Lani F; Altschuler, Steven J
2013-10-01
The complex nature of cell migration highlights the power and challenges of applying mathematics to biological studies. Mathematics may be used to create model equations that recapitulate migration, which can predict phenomena not easily uncovered by experiments or intuition alone. Alternatively, mathematics may be applied to interpreting complex data sets with better resolution--potentially empowering scientists to discern subtle patterns amid the noise and heterogeneity typical of migrating cells. Iteration between these two methods is necessary in order to reveal connections within the cell migration signaling network, as well as to understand the behavior that arises from those connections. Here, we review recent quantitative analysis and mathematical modeling approaches to the cell migration problem. Copyright © 2013 Elsevier Ltd. All rights reserved.
Design of virtual simulation experiment based on key events
NASA Astrophysics Data System (ADS)
Zhong, Zheng; Zhou, Dongbo; Song, Lingxiu
2018-06-01
Because virtual simulation experiments involve complex content and often lack guidance, key-event technology from VR narrative theory was introduced to enhance the fidelity and vividness of the experimental process. Based on VR narrative technology, an event transition structure was designed to meet the needs of the experimental operation process, and an interactive event processing model was used to generate key events in the interactive scene. The experiment "margin value of bees foraging", based on biological morphology, was taken as an example, and its many objects, behaviors and other contents were reorganized. The result shows that this method can enhance the user's experience and ensure that the experimental process is completed effectively.
NASA Astrophysics Data System (ADS)
Choi, Eunsong
Computer simulations are an integral part of research in modern condensed matter physics; they serve as a direct bridge between theory and experiment by systematically applying a microscopic model to a collection of particles that effectively imitate a macroscopic system. In this thesis, we study two very different condensed systems, namely complex fluids and frustrated magnets, primarily by simulating the classical dynamics of each system. In the first part of the thesis, we focus on ionic liquids (ILs) and polymers--two complementary classes of materials that can be combined to provide various unique properties. The properties of polymer/IL systems, such as conductivity, viscosity, and miscibility, can be fine-tuned by choosing an appropriate combination of cations, anions, and polymers. However, designing a system that meets a specific need requires a concrete understanding of the physics and chemistry that dictate the complex interplay between polymers and ionic liquids. In this regard, molecular dynamics (MD) simulation is an efficient tool that provides a molecular-level picture of such complex systems. We study the behavior of poly(ethylene oxide) (PEO) and imidazolium-based ionic liquids using MD simulations and statistical mechanics. We also discuss our efforts to develop reliable and efficient classical force fields for PEO and the ionic liquids. The second part is devoted to studies of geometrically frustrated magnets. In particular, a microscopic model that gives rise to an incommensurate spiral magnetic ordering observed in a pyrochlore antiferromagnet is investigated. The validation of the model is made via a comparison of the spin-wave spectra with neutron scattering data. Since the standard Holstein-Primakoff method is difficult to employ in such a complex ground-state structure with a large unit cell, we carry out classical spin dynamics simulations to compute spin-wave spectra directly from the Fourier transform of spin trajectories. We conclude the study by showing an excellent agreement between the simulation and the experiment.
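A sketch of the final spectral step only: given classical spin trajectories on a lattice, the spin-wave spectrum follows from a space-time Fourier transform. The synthetic trajectories below (one seeded magnon mode plus noise on a 1D chain) merely make the example self-contained.

```python
# Spin-wave spectrum S(k, w) from classical spin trajectories via a
# space-time FFT. Trajectories here are synthetic stand-ins.
import numpy as np

n_sites, n_steps, dt = 64, 4096, 0.05
rng = np.random.default_rng(2)

t = np.arange(n_steps) * dt
k0, omega0 = 2 * np.pi * 5 / n_sites, 1.3        # one seeded magnon mode
phase = k0 * np.arange(n_sites)[:, None] - omega0 * t[None, :]
sx = np.cos(phase) + 0.1 * rng.standard_normal((n_sites, n_steps))

# S(k, w) ~ |FFT over sites and time of S^x_i(t)|^2
skw = np.abs(np.fft.fft2(sx)) ** 2
k = 2 * np.pi * np.fft.fftfreq(n_sites)
omega = 2 * np.pi * np.fft.fftfreq(n_steps, d=dt)
# The ridge of skw over the (k, omega) plane is the dispersion relation
# that would be compared with neutron scattering data.
```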
Experience-based consulting: the value proposition.
Pliner, Nicole; Thrall, James; Boland, Giles; Palumbo, Denise
2004-11-01
Consulting is a profession universally accepted and well entrenched throughout the business world. Whether it is providing objective analysis, supplying a specific expertise, managing a project, or simply adding extra manpower, consultants can add value. However, what are the attributes of a good consultant? In health care, with the rapid pace of emerging technologies, economic intricacies, and the complexities of clinical care, hands-on experience is the key. Recognizing the power of consultants with hands-on experience, the Department of Radiology at Massachusetts General Hospital launched the Radiology Consulting Group, an "experience-based" model for consulting that may potentially shift the profession's paradigm.
Paramecium swimming in a capillary tube
NASA Astrophysics Data System (ADS)
Jana, Saikat; Jung, Sunghwan
2010-03-01
Micro-organisms exhibit different strategies for swimming in complex environments. Many micro-swimmers, such as paramecia, congregate and tend to live near walls. We investigate how a paramecium moves in a confined space as compared to its motion in an unbounded fluid. A new theoretical model based on Taylor's sheet is developed to study such boundary effects. In experiments, paramecia are placed inside capillary tubes and their swimming behavior is observed. The data obtained from experiments are used to test the validity of our theoretical model and to understand how the cilia influence the locomotion of paramecia in confined geometries.
Testing the Structure of Hydrological Models using Genetic Programming
NASA Astrophysics Data System (ADS)
Selle, B.; Muttil, N.
2009-04-01
Genetic Programming is able to systematically explore many alternative model structures of different complexity from available input and response data. We hypothesised that genetic programming can be used to test the structure of hydrological models and to identify dominant processes in hydrological systems. To test this, genetic programming was used to analyse a data set from a lysimeter experiment in southeastern Australia. The lysimeter experiment was conducted to quantify the deep percolation response under surface-irrigated pasture to different soil types, water table depths and water ponding times during surface irrigation. Using genetic programming, a simple model of deep percolation was consistently evolved in multiple model runs. This simple and interpretable model confirmed the dominant process contributing to deep percolation represented in a conceptual model that was published earlier. Thus, this study shows that genetic programming can be used to evaluate the structure of hydrological models and to gain insight into the dominant processes in hydrological systems.
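A toy version of the idea, not the authors' setup: evolve random expression trees mapping an input (e.g., ponding time) to a response (deep percolation) and keep the structures that fit best. Real genetic programming adds populations and crossover, which this hill-climbing sketch omits; the target function and data are stand-ins.

```python
# Toy structure search over random expression trees (GP-style mutation
# only). Target data are a synthetic stand-in for a lysimeter response.
import numpy as np, random, operator

rng = random.Random(3)
OPS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]

def grow(depth):
    """Random tree: leaves are the input 'x' or a constant."""
    if depth == 0 or rng.random() < 0.3:
        return ('x', None, None) if rng.random() < 0.7 \
            else (rng.uniform(-2, 2), None, None)
    return (rng.choice(OPS), grow(depth - 1), grow(depth - 1))

def evaluate(tree, x):
    node, left, right = tree
    if left is None:
        return x if node == 'x' else np.full_like(x, node)
    return node[0](evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=2):
    """Replace a random subtree with a freshly grown one."""
    if rng.random() < 0.2:
        return grow(depth)
    if tree[1] is None:
        return tree
    return (tree[0], mutate(tree[1], depth), mutate(tree[2], depth))

x = np.linspace(0, 2, 50)
y = 1.5 * x * x + 0.5 * x            # stand-in "observed" response

best, best_err = grow(3), np.inf
for _ in range(5000):
    cand = mutate(best)
    err = np.mean((evaluate(cand, x) - y) ** 2)
    if np.isfinite(err) and err < best_err:
        best, best_err = cand, err
print("best MSE:", best_err)
```

Re-running the search with different seeds and checking whether the same simple structure keeps emerging mirrors the paper's "consistently evolved in multiple model runs" test.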
Experiment evaluates ocean models and data assimilation in the Gulf Stream
NASA Astrophysics Data System (ADS)
Willems, Robert C.; Glenn, S. M.; Crowley, M. F.; Malanotte-Rizzoli, P.; Young, R. E.; Ezer, T.; Mellor, G. L.; Arango, H. G.; Robinson, A. R.; Lai, C.-C. A.
Using data sets of known quality as the basis for comparison, a recent experiment explored the Gulf Stream region at 27°-47°N and 80°-50°W to assess the nowcast/forecast capability of specific ocean models and the impact of data assimilation. Scientists from five universities and the Naval Research Laboratory/Stennis Space Center participated in the Data Assimilation and Model Evaluation Experiment (DAMEE-GSR). DAMEE-GSR was based on case studies, each successively more complex, and was divided into three phases using case studies (data) from 1987 and 1988. Phase I evaluated the models' forecast capability using common initial conditions and comparing model forecast fields with observational data at forecast time over a 2-week period. Phase II added data assimilation and assessed its impact on forecast capability, using the same case studies as in Phase I, and Phase III added a 2-month case study overlapping some periods in Phases I and II.
Molecular modeling and SPRi investigations of interleukin 6 (IL6) protein and DNA aptamers.
Rhinehardt, Kristen L; Vance, Stephen A; Mohan, Ram V; Sandros, Marinella; Srinivas, Goundla
2018-06-01
Interleukin 6 (IL6), an inflammatory response protein, has major implications in immune-related inflammatory diseases. Identification of aptamers for the IL6 protein aids in diagnostic, therapeutic, and theranostic applications. Three different DNA aptamers and their interactions with the IL6 protein were extensively investigated in a phosphate buffered saline (PBS) solution. Molecular-level modeling through molecular dynamics provided insights into the structural and conformational changes and specific binding domains of these protein-aptamer complexes. Multiple simulations reveal a consistent binding region for all protein-aptamer complexes. Conformational changes coupled with quantitative analysis of the center of mass (COM) distance, radius of gyration (Rg), and number of intermolecular hydrogen bonds in each IL6 protein-aptamer complex were used to determine binding strength and to obtain molecular configurations with strong binding. A similarity comparison of the strongly binding molecular configurations from molecular-level modeling concurred with Surface Plasmon Resonance imaging (SPRi) for these three aptamer complexes, thus corroborating the molecular modeling analysis findings. Insights from the natural progression of IL6 protein-aptamer binding modeled in this work have identified key features such as the orientation and location of the aptamer in the binding event. These key features are not readily accessible from wet lab experiments and impact the efficacy of the aptamers in diagnostic and theranostic applications.
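Two of the reported analysis quantities, the COM distance and the radius of gyration, reduce to short formulas over coordinate and mass arrays. A real workflow would pull those arrays from the MD trajectories (e.g., with MDAnalysis or MDTraj); the random arrays below are stand-ins to keep the sketch self-contained.

```python
# Center-of-mass distance and radius of gyration from bare
# coordinate/mass arrays; the arrays here are random stand-ins.
import numpy as np

def center_of_mass(coords, masses):
    """coords: (N, 3) array; masses: (N,) array."""
    return (coords * masses[:, None]).sum(axis=0) / masses.sum()

def radius_of_gyration(coords, masses):
    com = center_of_mass(coords, masses)
    sq = ((coords - com) ** 2).sum(axis=1)
    return np.sqrt((masses * sq).sum() / masses.sum())

rng = np.random.default_rng(4)
protein = rng.normal(size=(500, 3)); m_p = rng.uniform(12, 16, 500)
aptamer = rng.normal(loc=3.0, size=(80, 3)); m_a = rng.uniform(12, 16, 80)

com_distance = np.linalg.norm(center_of_mass(protein, m_p)
                              - center_of_mass(aptamer, m_a))
rg_complex = radius_of_gyration(np.vstack([protein, aptamer]),
                                np.concatenate([m_p, m_a]))
```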
NASA Technical Reports Server (NTRS)
Cotton, W. R.; Tripoli, G. J.
1980-01-01
Major research accomplishments which were achieved during the first year of the grant are summarized. The research concentrated in the following areas: (1) an examination of observational requirements for predicting convective storm development and intensity as suggested by recent numerical experiments; (2) interpretation of recent 3D numerical experiments with regard to the relationship between overshooting tops and surface wind gusts; (3) the development of software for emulating satellite-inferred cloud properties using 3D cloud model predicted data; and (4) the development of a conceptual/semi-quantitative model of eastward propagating, mesoscale convective complexes forming to the lee of the Rocky Mountains.
...vitamin B1, clear evidence of any influence upon Trichomonas growth could not be obtained. Choline favours the growth of Trichomonas vaginalis. In... 0.20 mg/ml onward, liponic acid had an inhibiting effect upon Trichomonas. Carnitine chloride favoured the growth of Trichomonas vaginalis. (Modified author abstract)... The present study was concerned with the relationship between the vitamins of the B-complex and Trichomonas. From the results obtained in studies on...
Establishing and Maintaining an Extensive Library of Patient-Derived Xenograft Models.
Mattar, Marissa; McCarthy, Craig R; Kulick, Amanda R; Qeriqi, Besnik; Guzman, Sean; de Stanchina, Elisa
2018-01-01
Patient-derived xenograft (PDX) models have recently emerged as a highly desirable platform in oncology and are expected to substantially broaden the way in vivo studies are designed and executed and to reshape drug discovery programs. However, acquisition of patient-derived samples, and propagation, annotation and distribution of PDXs are complex processes that require a high degree of coordination among clinic, surgery and laboratory personnel, and are fraught with challenges that are administrative, procedural and technical. Here, we examine in detail the major aspects of this complex process and relate our experience in establishing a PDX Core Laboratory within a large academic institution.
Noyes, Jane; Brenner, Maria; Fox, Patricia; Guerin, Ashleigh
2014-05-01
To report a novel review to develop a health systems model of successful transition of children with complex healthcare needs from hospital to home. Children with complex healthcare needs commonly experience an expensive, ineffectual and prolonged nurse-led discharge process. Children gain no benefit from prolonged hospitalization and are exposed to significant harm. Research to enable intervention development and process evaluation across the entire health system is lacking. Design: novel mixed-method integrative review informed by health systems theory. Data sources: CINAHL, PsychInfo, EMBASE, PubMed, citation searching, personal contact. Review methods: informed by consultation with experts, English-language studies, opinion/discussion papers reporting research, best practice and experiences of children, parents and healthcare professionals, and purposively selected policies/guidelines from 2002-December 2012 were abstracted using Framework synthesis, followed by iterative theory development. Seven critical factors derived from thirty-four sources across five health system levels explained successful discharge (new programme theory). All seven factors are required in an integrated care pathway, with a dynamic communication loop to facilitate effective discharge (new programme logic). Current health system responses were frequently static and critical success factors were commonly absent, thereby explaining ineffectual discharge. The novel evidence-based model, which reconceptualizes 'discharge' as a highly complex longitudinal health system intervention, makes a significant contribution to global knowledge to drive practice development. Research is required to develop process and outcome measures at different time points in the discharge process and future trials are needed to determine the effectiveness of integrated health system discharge models. © 2013 John Wiley & Sons Ltd.
Positional cloning in mice and its use for molecular dissection of inflammatory arthritis.
Abe, Koichiro; Yu, Philipp
2009-02-01
One of the next quests in the field of genetics might be the molecular dissection of the genetic and environmental components of human complex diseases. In humans, however, there are certain experimental limitations on identifying a single component of these complex interactions by genetic analyses. Experimental animals offer simplified models for the genetic and environmental interactions in human complex diseases. In particular, mice are the best mammalian models because of a long history and ample experience of genetic analyses. Forward genetics, which includes genetic screens and subsequent positional cloning of the causative genes, is a powerful strategy to dissect a complex phenomenon without prior molecular knowledge of the process. In this review, we first describe a general scheme of positional cloning in mice. Next, recent accomplishments on the patho-mechanisms of inflammatory arthritis by forward genetics approaches are introduced; positional cloning efforts for the skg, Ali5, Ali18, cmo, and lupo mutants are provided as examples of the application to human complex diseases. As seen in these examples, the identification of genetic factors by positional cloning in the mouse has the potential to resolve the molecular complexity of gene-environment interactions in human complex diseases.
Shin, So Young
2014-12-01
To evolve a management plan for rheumatoid arthritis, it is necessary to understand the patient's symptom experience and disablement process. This paper aims to introduce and critique two models as a conceptual foundation from which to construct a new model for arthritis care. A Disability Intervention Model for Older Adults with Arthritis includes three interrelated concepts of symptom experience, symptom management strategies, and symptom outcomes that correspond to the Theory of Symptom Management. These main concepts influence or are influenced by contextual factors that are situated within the domains of person, environment, and health/illness. It accepts the bidirectional, complex, dynamic interactions among all components within the model representing the comprehensive aspects of the disablement process and its interventions in older adults with rheumatoid arthritis. In spite of some limitations such as confusion or complexity within the model, the Disability Intervention Model for Older Adults with Arthritis has strengths in that it encompasses the majority of the concepts of the two models, attempts to compensate for the limitations of the two models, and aims to understand the impact of rheumatoid arthritis on a patient's physical, cognitive, and emotional health status, socioeconomic status, and well-being. Therefore, it can be utilized as a guiding theoretical framework for arthritis care and research to improve the functional status of older adults with rheumatoid arthritis. Copyright © 2014. Published by Elsevier B.V.
2016 International Land Model Benchmarking (ILAMB) Workshop Report
NASA Technical Reports Server (NTRS)
Hoffman, Forrest M.; Koven, Charles D.; Keppel-Aleks, Gretchen; Lawrence, David M.; Riley, William J.; Randerson, James T.; Ahlstrom, Anders; Abramowitz, Gabriel; Baldocchi, Dennis D.; Best, Martin J.;
2016-01-01
As earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of terrestrial biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry-climate feedbacks and ecosystem processes in these models are essential for reducing the acknowledged substantial uncertainties in 21st century climate change projections.
2016 International Land Model Benchmarking (ILAMB) Workshop Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forrest M.; Koven, Charles D.; Keppel-Aleks, Gretchen
As Earth system models become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing uncertainties associated with projections of climate change during the remainder of the 21st century.
NASA Astrophysics Data System (ADS)
Messina, Luca; Castin, Nicolas; Domain, Christophe; Olsson, Pär
2017-02-01
The quality of kinetic Monte Carlo (KMC) simulations of microstructure evolution in alloys relies on the parametrization of point-defect migration rates, which are complex functions of the local chemical composition and can be calculated accurately with ab initio methods. However, constructing reliable models that ensure the best possible transfer of physical information from ab initio to KMC is a challenging task. This work presents an innovative approach, where the transition rates are predicted by artificial neural networks trained on a database of 2000 migration barriers, obtained with density functional theory (DFT) in place of interatomic potentials. The method is tested on copper precipitation in thermally aged iron alloys, by means of a hybrid atomistic-object KMC model. For the object part of the model, the stability and mobility properties of copper-vacancy clusters are analyzed by means of independent atomistic KMC simulations, driven by the same neural networks. The cluster diffusion coefficients and mean free paths are found to increase with size, confirming the dominant role of coarsening of medium- and large-sized clusters in the precipitation kinetics. The evolution under thermal aging is in better agreement with experiments than for a previous interatomic-potential model, especially concerning the experimental time scales. However, the model underestimates the solubility of copper in iron due to the excessively high solution energy predicted by the chosen DFT method. Nevertheless, this work proves the capability of neural networks to transfer complex ab initio physical properties to higher-scale models, and facilitates the extension to systems with increasing chemical complexity, setting the ground for reliable microstructure evolution simulations in a wide range of alloys and applications.
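A schematic rejection-free KMC step in which the migration barrier comes from a trained surrogate: the stub function below stands in for the neural network, and the temperature, attempt frequency, and descriptor form are assumed values for illustration.

```python
# One rejection-free KMC step with surrogate-predicted barriers.
# predicted_barrier() is a stub standing in for the trained network.
import numpy as np

rng = np.random.default_rng(5)
KB, T, NU0 = 8.617e-5, 600.0, 1e13     # eV/K, K, attempt frequency 1/s

def predicted_barrier(local_env):
    """Stand-in for the trained network: maps a local chemical
    environment descriptor to a migration barrier in eV."""
    return 0.6 + 0.1 * np.tanh(local_env.sum())

def kmc_step(envs, t):
    barriers = np.array([predicted_barrier(e) for e in envs])
    rates = NU0 * np.exp(-barriers / (KB * T))   # harmonic TST rates
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)   # pick one jump
    t += -np.log(rng.random()) / total                # advance the clock
    return event, t

envs = [rng.normal(size=8) for _ in range(50)]        # descriptor vectors
event, t = kmc_step(envs, 0.0)
```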
Computational Model of Gab1/2-Dependent VEGFR2 Pathway to Akt Activation
Tan, Wan Hua; Popel, Aleksander S.; Mac Gabhann, Feilim
2013-01-01
Vascular endothelial growth factor (VEGF) signal transduction is central to angiogenesis in development and in pathological conditions such as cancer, retinopathy and ischemic diseases. However, no detailed mass-action models of VEGF receptor signaling have been developed. We constructed and validated the first computational model of VEGFR2 trafficking and signaling, to study the opposing roles of Gab1 and Gab2 in regulation of Akt phosphorylation in VEGF-stimulated endothelial cells. Trafficking parameters were optimized against 5 previously published in vitro experiments, and the model was validated against six independent published datasets. The model showed agreement at several key nodes, involving scaffolding proteins Gab1, Gab2 and their complexes with Shp2. VEGFR2 recruitment of Gab1 is greater in magnitude, slower, and more sustained than that of Gab2. As Gab2 binds VEGFR2 complexes more transiently than Gab1, VEGFR2 complexes can recycle and continue to participate in other signaling pathways. Correspondingly, the simulation results show a log-linear relationship between a decrease in Akt phosphorylation and Gab1 knockdown while a linear relationship was observed between an increase in Akt phosphorylation and Gab2 knockdown. Global sensitivity analysis demonstrated the importance of initial-concentration ratios of antagonistic molecular species (Gab1/Gab2 and PI3K/Shp2) in determining Akt phosphorylation profiles. It also showed that kinetic parameters responsible for transient Gab2 binding affect the system at specific nodes. This model can be expanded to study multiple signaling contexts and receptor crosstalk and can form a basis for investigation of therapeutic approaches, such as tyrosine kinase inhibitors (TKIs), overexpression of key signaling proteins or knockdown experiments. PMID:23805312
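The mass-action pattern such a model is built from can be shown in a few lines: a single reversible binding step between a Gab1-like scaffold and activated receptor complexes, integrated with scipy. This is not the published model, and the rate constants are illustrative, not the fitted parameters.

```python
# One reversible mass-action binding step, the building block that a
# full trafficking/signaling model chains dozens of times.
import numpy as np
from scipy.integrate import solve_ivp

kon, koff = 1e-3, 1e-2      # 1/(nM.s), 1/s -- illustrative values

def rhs(t, y):
    receptor, gab, cplx = y
    v_bind = kon * receptor * gab - koff * cplx
    return [-v_bind, -v_bind, v_bind]

y0 = [100.0, 200.0, 0.0]    # initial concentrations, nM
sol = solve_ivp(rhs, (0.0, 600.0), y0, max_step=1.0)
# In a full model, the resulting complex level would feed downstream
# steps (e.g., PI3K recruitment and Akt phosphorylation).
```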
ERIC Educational Resources Information Center
Duarte, B. P. M.; Coelho Pinheiro, M. N.; Silva, D. C. M.; Moura, M. J.
2006-01-01
The experiment described is an excellent opportunity to apply theoretical concepts of distillation, thermodynamics of mixtures and process simulation at laboratory scale, and simultaneously enhance the ability of students to operate, control and monitor complex units.
PLUME DISPERSION IN STABLY STRATIFIED FLOWS OVER COMPLEX TERRAIN, PHASE 2
Laboratory experiments were conducted in a stratified towing tank to investigate plume dispersion in stably stratified flows. First, plume dispersion over an idealized terrain model with a simulated elevated inversion in the atmosphere was investigated. These results were compared...
Walen, Holly; Liu, Da-Jiang; Oh, Junepyo; ...
2017-08-22
By using scanning tunneling microscopy, we characterize the size and bias-dependent shape of sulfur atoms on Cu(100) at low coverage (below 0.1 monolayers) and low temperature (quenched from 300 to 5 K). Sulfur atoms populate the Cu(100) terraces more heavily than steps at low coverage, but as coverage approaches 0.1 monolayers, close-packed step edges become fully populated, with sulfur atoms occupying sites on top of the step. Density functional theory (DFT) corroborates the preferential population of terraces at low coverage as well as the step adsorption site. In experiment, small regions with p(2 × 2)-like atomic arrangements emerge on the terraces as sulfur coverage approaches 0.1 monolayer. Using DFT, a lattice gas model has been developed, and Monte Carlo simulations based on this model have been compared with the observed terrace configurations. A model containing eight pairwise interaction energies, all repulsive, gives qualitative agreement. Experiment shows that atomic adsorbed sulfur is the only species on Cu(100) up to a coverage of 0.09 monolayers. There are no Cu–S complexes. Conversely, prior work has shown that a Cu2S3 complex forms on Cu(111) under comparable conditions. On the basis of DFT, this difference can be attributed mainly to stronger adsorption of sulfur on Cu(100) as compared with Cu(111).
Computer-aided solvent selection for multiple scenarios operation of limited-known properties solute
NASA Astrophysics Data System (ADS)
Anantpinijwatna, Amata
2017-12-01
Solvents are used in both the production and separation of complex chemical substances such as pyrrolidine-2-carbonyl chloride (C5H8ClNO). Since the properties of the target substance itself are largely unknown, solvent selection has been limited to experimental screening. However, reactions carried out in conventional solvents either afford low yields or proceed at slow rates. Moreover, these solvents are also highly toxic and environmentally unfriendly. Alternative solvents are required to enhance production and lessen the harmful effects on both organisms and the environment. Costly, time-consuming, and laborious experiments are required to find a better solvent suite for the production and separation of these complex compounds, and even then only limited improvement can be obtained. On the other hand, combining state-of-the-art thermodynamic models can provide faster and more robust solutions to this solvent selection problem. In this work, a framework for solvent selection in complex chemical production processes is presented. The framework combines a group-contribution thermodynamic model and a segment activity coefficient model for predicting chemical properties and solubilities of the target chemical in newly formulated solvents. A guideline for solvent selection is also included. The potential of the selected solvents is then analysed and verified, and the improvements in production yield, production rate, and product separation are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walen, Holly; Liu, Da-Jiang; Oh, Junepyo
By using scanning tunneling microscopy, we characterize the size and bias-dependent shape of sulfur atoms on Cu(100) at low coverage (below 0.1 monolayers) and low temperature (quenched from 300 to 5 K). Sulfur atoms populate the Cu(100) terraces more heavily than steps at low coverage, but as coverage approaches 0.1 monolayers, close-packed step edges become fully populated, with sulfur atoms occupying sites on top of the step. Density functional theory (DFT) corroborates the preferential population of terraces at low coverage as well as the step adsorption site. In experiment, small regions with p(2 × 2)-like atomic arrangements emerge on the terraces as sulfur coverage approaches 0.1 monolayer. Using DFT, a lattice gas model has been developed, and Monte Carlo simulations based on this model have been compared with the observed terrace configurations. A model containing eight pairwise interaction energies, all repulsive, gives qualitative agreement. Experiment shows that atomic adsorbed sulfur is the only species on Cu(100) up to a coverage of 0.09 monolayers. There are no Cu–S complexes. Conversely, prior work has shown that a Cu2S3 complex forms on Cu(111) under comparable conditions. On the basis of DFT, this difference can be attributed mainly to stronger adsorption of sulfur on Cu(100) as compared with Cu(111).
NASA Astrophysics Data System (ADS)
Cowdery, E.; Dietze, M.
2016-12-01
As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentration are highly variable and contain a considerable amount of uncertainty. The Predictive Ecosystem Analyzer (PEcAn) is an informatics toolbox that wraps around an ecosystem model and can be used to help identify which factors drive uncertainty. We tested a suite of models (LPJ-GUESS, MAESPA, GDAY, CLM5, DALEC, ED2), which represent a range from low to high structural complexity, across a range of Free-Air CO2 Enrichment (FACE) experiments: the Kennedy Space Center Open Top Chamber Experiment, the Rhinelander FACE experiment, the Duke Forest FACE experiment and the Oak Ridge Experiment on CO2 Enrichment. These tests were implemented in a novel benchmarking workflow that is automated, repeatable, and generalized to incorporate different sites and ecological models. Observational data from the FACE experiments represent a first test of this flexible, extensible approach aimed at providing repeatable tests of model process representation. To identify and evaluate the assumptions causing inter-model differences we used PEcAn to perform model sensitivity and uncertainty analysis, not only to assess the components of NPP, but also to examine system processes such as nutrient uptake and water use. Combining the observed patterns of uncertainty between multiple models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.
A compact physical model for the simulation of pNML-based architectures
NASA Astrophysics Data System (ADS)
Turvani, G.; Riente, F.; Plozner, E.; Schmitt-Landsiedel, D.; Breitkreutz-v. Gamm, S.
2017-05-01
Among emerging technologies, perpendicular Nanomagnetic Logic (pNML) seems very promising because of its capability to combine logic and memory in the same device, its scalability, 3D-integration and low power consumption. Recently, Full Adder (FA) structures clocked by a global magnetic field have been experimentally demonstrated, and detailed characterizations of the switching process governing the domain wall (DW) nucleation probability Pnuc and nucleation time tnuc have been performed. However, the design of pNML architectures represents a crucial point in the study of this technology, as it can have a remarkable impact on the reliability of pNML structures. Here, we present a compact model developed in VHDL which enables the simulation of complex pNML architectures while taking critical physical parameters into account. These parameters have been extracted from the experiments, fitted by the corresponding physical equations and encapsulated into the proposed model. Within it, magnetic structures are decomposed into a few basic elements (nucleation centers, nanowires, inverters, etc.) represented by the corresponding physical description. To validate the model, we redesigned a FA and compared our simulation results to the experiment. With this compact model of pNML devices we have envisioned a new methodology which makes it possible to simulate and test the physical behavior of complex architectures at very low computational cost.
2009-09-01
physiologic mechanisms underlying experimental observations: a practical example. Sven Zenker, Andreas Hoeft, Department of Anaesthesiology and... to describe experimental data (goodness of fit) and its complexity (number of parameters). Their use in macroscopic physiologic investigations... BSP, and BRS could either be identical or vary across interventions, resulting in models with 4 to 12 parameters. After digitizing the experimental data...
Early experiences in developing and managing the neuroscience gateway.
Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas T
2015-02-01
The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with the complex user interfaces of these machines, and dealing with data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use it for computational neuroscience research using high performance computing at the back end. We also look at the parallel scaling of some publicly available neuronal models and analyze recent usage data of the neuroscience gateway.
Early experiences in developing and managing the neuroscience gateway
Sivagnanam, Subhashini; Majumdar, Amit; Yoshimoto, Kenneth; Astakhov, Vadim; Bandrowski, Anita; Martone, MaryAnn; Carnevale, Nicholas. T.
2015-01-01
The last few decades have seen the emergence of computational neuroscience as a mature field where researchers are interested in modeling complex and large neuronal systems and require access to high performance computing machines and associated cyber infrastructure to manage computational workflow and data. The neuronal simulation tools used in this research field are also implemented for parallel computers and suitable for high performance computing machines. But using these tools on complex high performance computing machines remains a challenge because of issues with acquiring computer time on machines located at national supercomputer centers, dealing with the complex user interfaces of these machines, and dealing with data management and retrieval. The Neuroscience Gateway is being developed to alleviate and/or hide these barriers to entry for computational neuroscientists. It hides or eliminates, from the point of view of the users, all the administrative and technical barriers and makes parallel neuronal simulation tools easily available and accessible on complex high performance computing machines. It handles the running of jobs and data management and retrieval. This paper shares early experiences in bringing up this gateway and describes the software architecture it is based on, how it is implemented, and how users can use it for computational neuroscience research using high performance computing at the back end. We also look at the parallel scaling of some publicly available neuronal models and analyze recent usage data of the neuroscience gateway. PMID:26523124
1983-04-01
1.0 INTRODUCTION AND SCOPE; 2.0 PROGRESS SUMMARY; 2.1 Soil Element Model Development; 2.2 U.S. Army Engineer Waterways Experiment Station (WES)... LABORATORY BEHAVIOR OF SAND; 3.1 Introduction; 3.2 Material Description; 3.3 Laboratory Tests Performed; 3.4 Laboratory Test Results; 4.0 MODELING THE... INTRODUCTION AND SCOPE: The subject of this annual report is constitutive modeling of cohesionless soil, for both laboratory standard static test conditions...
A Weak Constraint 4D-Var Assimilation System for the Navy Coastal Model Using the Representer Method
2013-01-01
the help of the Parametric Fortran Compiler (PFC; Erwig et al. 2007). Some general circulation models of the complexity of NCOM have seen similar... the MIT general circulation model (MITgcm, Marotzke et al. 1999), also used in the ECCO consortium assimilation experiments (Stammer et al. 2002)... using the inverse Regional Ocean Modeling System (IROMS, Di Lorenzo et al. 2007) with horizontal resolutions of 10 and 30 km. The CCS is a large...
Model for dynamic self-assembled magnetic surface structures
NASA Astrophysics Data System (ADS)
Belkin, M.; Glatz, A.; Snezhko, A.; Aranson, I. S.
2010-07-01
We propose a first-principles model for the dynamic self-assembly of magnetic structures at a water-air interface reported in earlier experiments. The model is based on the Navier-Stokes equation for liquids in shallow water approximation coupled to Newton equations for interacting magnetic particles suspended at a water-air interface. The model reproduces most of the observed phenomenology, including spontaneous formation of magnetic snakelike structures, generation of large-scale vortex flows, complex ferromagnetic-antiferromagnetic ordering of the snake, and self-propulsion of bead-snake hybrids.
Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn
2015-09-08
The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives drawing from the qualitative evaluation of 5 complex healthcare interventions. Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. We included 62 interviews from 44 patients and 18 non-clinical caregivers. Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. We identified 5 broad themes that capture the patients' experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients' experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models are being implemented simultaneously that may have shared features and target similar patients, and a qualitative meta-evaluation approach thus offers an opportunity for cumulative learning at a system level in addition to informing intervention design and modification. Published by the BMJ Publishing Group Limited.
Harmonic Structure Predicts the Enjoyment of Uplifting Trance Music.
Agres, Kat; Herremans, Dorien; Bigo, Louis; Conklin, Darrell
2016-01-01
An empirical investigation of how local harmonic structures (e.g., chord progressions) contribute to the experience and enjoyment of uplifting trance (UT) music is presented. The connection between rhythmic and percussive elements and resulting trance-like states has been highlighted by musicologists, but no research, to our knowledge, has explored whether repeated harmonic elements influence affective responses in listeners of trance music. Two alternative hypotheses are discussed, the first highlighting the direct relationship between repetition/complexity and enjoyment, and the second based on the theoretical inverted-U relationship described by the Wundt curve. We investigate the connection between harmonic structure and subjective enjoyment through interdisciplinary behavioral and computational methods: First we discuss an experiment in which listeners provided enjoyment ratings for computer-generated UT anthems with varying levels of harmonic repetition and complexity. The anthems were generated using a statistical model trained on a corpus of 100 uplifting trance anthems created for this purpose, and harmonic structure was constrained by imposing particular repetition structures (semiotic patterns defining the order of chords in the sequence) on a professional UT music production template. Second, the relationship between harmonic structure and enjoyment is further explored using two computational approaches, one based on average Information Content, and another that measures average tonal tension between chords. The results of the listening experiment indicate that harmonic repetition does in fact contribute to the enjoyment of uplifting trance music. More compelling evidence was found for the second hypothesis discussed above; however, some maximally repetitive structures were also preferred. Both computational models provide evidence for a Wundt-type relationship between complexity and enjoyment. By systematically manipulating the structure of chord progressions, we have discovered specific harmonic contexts in which repetitive or complex structures contribute to the enjoyment of uplifting trance music.
Harmonic Structure Predicts the Enjoyment of Uplifting Trance Music
Agres, Kat; Herremans, Dorien; Bigo, Louis; Conklin, Darrell
2017-01-01
An empirical investigation of how local harmonic structures (e.g., chord progressions) contribute to the experience and enjoyment of uplifting trance (UT) music is presented. The connection between rhythmic and percussive elements and resulting trance-like states has been highlighted by musicologists, but no research, to our knowledge, has explored whether repeated harmonic elements influence affective responses in listeners of trance music. Two alternative hypotheses are discussed, the first highlighting the direct relationship between repetition/complexity and enjoyment, and the second based on the theoretical inverted-U relationship described by the Wundt curve. We investigate the connection between harmonic structure and subjective enjoyment through interdisciplinary behavioral and computational methods: First we discuss an experiment in which listeners provided enjoyment ratings for computer-generated UT anthems with varying levels of harmonic repetition and complexity. The anthems were generated using a statistical model trained on a corpus of 100 uplifting trance anthems created for this purpose, and harmonic structure was constrained by imposing particular repetition structures (semiotic patterns defining the order of chords in the sequence) on a professional UT music production template. Second, the relationship between harmonic structure and enjoyment is further explored using two computational approaches, one based on average Information Content, and another that measures average tonal tension between chords. The results of the listening experiment indicate that harmonic repetition does in fact contribute to the enjoyment of uplifting trance music. More compelling evidence was found for the second hypothesis discussed above; however, some maximally repetitive structures were also preferred. Both computational models provide evidence for a Wundt-type relationship between complexity and enjoyment. By systematically manipulating the structure of chord progressions, we have discovered specific harmonic contexts in which repetitive or complex structures contribute to the enjoyment of uplifting trance music. PMID:28119641
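The first of the two computational measures, average Information Content, can be sketched with a first-order Markov chord model: the IC of each chord is -log2 of its conditional probability given the previous chord, averaged over a sequence. The two-sequence corpus below is a stand-in, not the UT anthem corpus.

```python
# Average Information Content of a chord sequence under a first-order
# Markov model with add-alpha smoothing. Corpus is a toy stand-in.
import math
from collections import Counter, defaultdict

corpus = [["Am", "F", "C", "G", "Am", "F", "C", "G"],
          ["Am", "G", "F", "G", "Am", "G", "F", "G"]]

counts = defaultdict(Counter)
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1

def mean_information_content(seq, alpha=1.0):
    """Mean -log2 P(chord | previous chord) over the sequence."""
    vocab = {c for s in corpus for c in s}
    ics = []
    for prev, nxt in zip(seq, seq[1:]):
        total = sum(counts[prev].values()) + alpha * len(vocab)
        p = (counts[prev][nxt] + alpha) / total
        ics.append(-math.log2(p))
    return sum(ics) / len(ics)

print(mean_information_content(["Am", "F", "C", "G"]))  # common moves: low IC
print(mean_information_content(["Am", "C", "G", "F"]))  # rarer moves: higher IC
```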
Modeling relations in nature and eco-informatics: a practical application of rosennean complexity.
Kineman, John J
2007-10-01
The purpose of eco-informatics is to communicate critical information about organisms and ecosystems. To accomplish this, it must reflect the complexity of natural systems. Present information systems are designed around mechanistic concepts that do not capture complexity. Robert Rosen's relational theory offers a way of representing complexity in terms of information entailments that are part of an ontologically implicit 'modeling relation'. This relation has corresponding epistemological components that can be captured empirically, the components being structure (associated with model encoding) and function (associated with model decoding). Relational complexity, thus, provides a long-awaited theoretical underpinning for these concepts that ecology has found indispensable. Structural information pertains to the material organization of a system, which can be represented by data. Functional information specifies potential change, which can be inferred from experiment and represented as models or descriptions of state transformations. Contextual dependency (of structure or function) implies meaning. Biological functions imply internalized or system-dependent laws. Complexity can be represented epistemologically by relating structure and function in two different ways. One expresses the phenomenal relation that exists in any present or past instance, and the other draws the ontology of a system into the empirical world in terms of multiple potentials subject to natural forms of selection and optimality. These act as system attractors. Implementing these components and their theoretical relations in an informatics system will provide more-complete ecological informatics than is possible from a strictly mechanistic point of view. This approach will enable many new possibilities for supporting science and decision making.
Generation of two-dimensional binary mixtures in complex plasmas
NASA Astrophysics Data System (ADS)
Wieben, Frank; Block, Dietmar
2016-10-01
Complex plasmas are an excellent model system for strong coupling phenomena. Under certain conditions the dust particles immersed in the plasma form crystals which can be analyzed in terms of structure and dynamics. Previous experiments focussed mostly on monodisperse particle systems, whereas dusty plasmas in nature and technology are polydisperse. Thus, a first and important step towards experiments in polydisperse systems is the study of binary mixtures. Recent experiments on binary mixtures under microgravity conditions observed a phase separation of particle species with different radii even for small size disparities. This contradicts several numerical studies of 2D binary mixtures. Therefore, dedicated experiments are required to gain more insight into the physics of polydisperse systems. In this contribution first ground-based experiments on two-dimensional binary mixtures are presented. Particular attention is paid to the requirements for the generation of such systems, which involve consideration of the temporal evolution of the particle properties. Furthermore, the structure of these two-component crystals is analyzed and compared to simulations. This work was supported by the Deutsche Forschungsgemeinschaft DFG in the framework of the SFB TR24 Greifswald Kiel, Project A3b.
NASA Astrophysics Data System (ADS)
Siddiqui, Maheen; Wedemann, Roseli S.; Jensen, Henrik Jeldtoft
2018-01-01
We explore statistical characteristics of avalanches associated with the dynamics of a complex-network model, where two modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's ideas regarding the neuroses and the idea that consciousness is related to symbolic and linguistic memory activity in the brain. It incorporates the Stariolo-Tsallis generalization of the Boltzmann Machine in order to model memory retrieval and associativity. In the present work, we define and measure avalanche size distributions during memory retrieval, in order to gain insight into basic aspects of the functioning of these complex networks. The avalanche sizes defined for our model should be related to the time consumed, and also to the size of the neuronal region activated, during memory retrieval. This allows a qualitative comparison of the distribution of cluster sizes obtained during fMRI measurements of the propagation of signals in the brain with the distribution of avalanche sizes obtained in our simulation experiments. This comparison corroborates the indication that the Nonextensive Statistical Mechanics formalism may indeed be better suited to model the complex networks which constitute brain and mental structure.
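The avalanche-size measurement itself is simple to state: an avalanche is a maximal run of supra-threshold activity, and its size is the summed activity within the run. The heavy-tailed activity series below is synthetic, standing in for the model's retrieval dynamics.

```python
# Avalanche sizes from an activity time series (units changing state
# per sweep). The series here is a synthetic heavy-tailed stand-in.
import numpy as np

rng = np.random.default_rng(6)
activity = rng.pareto(1.5, size=10_000).astype(int)
threshold = 0

sizes, current = [], 0
for a in activity:
    if a > threshold:
        current += a          # still inside an avalanche
    elif current:
        sizes.append(current) # avalanche just ended
        current = 0
if current:
    sizes.append(current)

# A log-log histogram of `sizes` is what gets compared against the
# cluster-size distributions from fMRI signal-propagation data.
hist, edges = np.histogram(sizes, bins=np.logspace(0, 4, 30))
```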
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H.; O'Donnell, Cian; Sejnowski, Terrence J.; ...
2016-12-30
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the 'sushi-belt model'. Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi-belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology-dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. In conclusion, these findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons.
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Alex H.; O'Donnell, Cian; Sejnowski, Terrence J.
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the 'sushi-belt model'. Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi-belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology-dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. In conclusion, these findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons.
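A sketch of the sushi-belt idea under stated assumptions: cargo hops bidirectionally between compartments of a linear dendrite and is irreversibly captured at a demand-dependent rate. Compartment count and rate constants are illustrative, not the paper's estimates.

```python
# Bidirectional transport along a chain of compartments with local,
# demand-dependent capture. All rate constants are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

n = 50                              # compartments along the dendrite
a, b = 0.5, 0.4                     # anterograde / retrograde hop rates, 1/s
rng = np.random.default_rng(7)
capture = 0.01 * rng.random(n)      # local demand, 1/s

def rhs(t, y):
    u = y[:n]                       # mobile cargo per compartment
    du = np.zeros(n)
    du[:-1] += b * u[1:] - a * u[:-1]   # exchange with right neighbour
    du[1:] += a * u[:-1] - b * u[1:]    # exchange with left neighbour
    du -= capture * u
    return np.concatenate([du, capture * u])   # second half: delivered

y0 = np.zeros(2 * n); y0[0] = 1.0   # unit of cargo leaves the soma
sol = solve_ivp(rhs, (0, 2000.0), y0, max_step=1.0)
# How fast `delivered` settles, versus how well its profile matches
# `capture`, is the speed-precision tradeoff in miniature.
```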
Hammond, Colin M.; Owen-Hughes, Tom; Norman, David G.
2014-01-01
Crystallographic and NMR approaches have provided a wealth of structural information about protein domains. However, these domains are often found as components of larger multi-domain polypeptides or complexes. Orienting domains within such contexts can provide powerful new insight into their function. The combination of site-specific spin labelling and Pulsed Electron Double Resonance (PELDOR) provides a means of obtaining structural measurements that can be used to generate models describing how such domains are oriented. Here we describe a pipeline for modelling the location of thiol-reactive nitroxyl spin labels at engineered sites on the histone chaperone Vps75. We then use a combination of experimentally determined measurements and symmetry constraints to model the orientation in which homodimers of Vps75 associate to form homotetramers using the XPLOR-NIH platform. This provides a working example of how PELDOR measurements can be used to generate a structural model. PMID:25448300
NASA Technical Reports Server (NTRS)
Rothhaar, Paul M.; Murphy, Patrick C.; Bacon, Barton J.; Gregory, Irene M.; Grauer, Jared A.; Busan, Ronald C.; Croom, Mark A.
2014-01-01
Control of complex Vertical Take-Off and Landing (VTOL) aircraft traversing from hovering to wing-borne flight mode and back poses notoriously difficult modeling, simulation, control, and flight-testing challenges. This paper provides an overview of the techniques and advances required to develop the GL-10 tilt-wing, tilt-tail, long-endurance, VTOL aircraft control system. The GL-10 prototype's unusual and complex configuration requires application of state-of-the-art techniques and some significant advances in wind tunnel infrastructure automation, efficient Design Of Experiments (DOE) tunnel test techniques, modeling, multi-body equations of motion, multi-body actuator models, simulation, control algorithm design, and flight test avionics, testing, and analysis. The following compendium surveys key disciplines required to develop an effective control system for this challenging vehicle in this on-going effort.
NASA Technical Reports Server (NTRS)
Li, Zhijin; Chao, Yi; McWilliams, James C.; Ide, Kayo
2008-01-01
A three-dimensional variational data assimilation scheme for the Regional Ocean Modeling System (ROMS), named ROMS3DVAR, has been described in the work of Li et al. (2008). In this paper, ROMS3DVAR is applied to the central California coastal region, an area characterized by inhomogeneity and anisotropy, as well as by dynamically unbalanced flows. A method for estimating the model error variances from limited observations is presented, and the construction of the inhomogeneous and anisotropic error correlations based on the Kronecker product is demonstrated. A set of single observation experiments illustrates the inhomogeneous and anisotropic error correlations and weak dynamic constraints used. Results are presented from the assimilation of data gathered during the Autonomous Ocean Sampling Network (AOSN) experiment during August 2003. The results show that ROMS3DVAR is capable of reproducing complex flows associated with upwelling and relaxation, as well as the rapid transitions between them. Some difficulties encountered during the experiment are also discussed.
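For readers unfamiliar with the Kronecker construction mentioned here, a small numpy sketch (illustrative Gaussian correlations and grid sizes, not the actual ROMS3DVAR operators) shows how a separable three-dimensional correlation is assembled from one-dimensional factors, with different length scales producing anisotropy:

    import numpy as np

    def corr_1d(n, L):
        """Gaussian correlation matrix for n points with length scale L."""
        x = np.arange(n)[:, None]
        return np.exp(-((x - x.T) ** 2) / (2 * L ** 2))

    Cx = corr_1d(20, L=3.0)   # along-shore direction
    Cy = corr_1d(15, L=1.5)   # cross-shore: shorter scale -> anisotropy
    Cz = corr_1d(10, L=2.0)   # vertical direction

    # Full 3-D correlation of the 20*15*10 state. In practice this matrix is
    # never formed explicitly; the Kronecker structure lets one apply each
    # factor separately.
    C = np.kron(Cx, np.kron(Cy, Cz))
    print(C.shape)            # (3000, 3000)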
Experimental basis for a Titan probe organic analysis
NASA Technical Reports Server (NTRS)
Mckay, C. P.; Scattergood, T. W.; Borucki, W. J.; Kasting, J. F.; Miller, S. L.
1986-01-01
The recent Voyager flyby of Titan produced evidence for at least nine organic compounds in that atmosphere that are heavier than methane. Several models of Titan's atmosphere, as well as laboratory simulations, suggest the presence of organics considerably more complex than those observed. To ensure that the in situ measurements are definitive with respect to Titan's atmosphere, experiment concepts and the related instrumentation must be carefully developed specifically for such a mission. To this end, the possible composition of the environment to be analyzed must be bracketed and model samples must be provided for instrumentation development studies. Laboratory studies to define the optimum flight experiment and sampling strategy for a Titan entry probe are currently being conducted. Titan mixtures are being subjected to a variety of energy sources including high-voltage electrons from a DC discharge, high-current electric shock, and laser detonation. Gaseous and solid products are produced which are then analyzed. Samples from these experiments are also provided to candidate flight experiments as models for instrument development studies. Preliminary results show that existing theoretical models for chemistry in Titan's atmosphere cannot adequately explain the presence and abundance of all trace gases observed in these experiments.
Spadini, Lorenzo; Schindler, Paul W; Charlet, Laurent; Manceau, Alain; Vala Ragnarsdottir, K
2003-10-01
The surface properties of ferrihydrite were studied by combining wet chemical data, Cd K-edge EXAFS data, and a surface structure and protonation model of the ferrihydrite surface. Acid-base titration experiments and Cd(II)-ferrihydrite sorption experiments were performed within 3 < -log[H+] < 10.5 and 0.5 < [Cd(t)] < 12 mM in 0.3 M NaClO4 at 25 °C, where [Cd(t)] refers to the total Cd concentration. Measurements at -5.5
PODIO: An Event-Data-Model Toolkit for High Energy Physics Experiments
NASA Astrophysics Data System (ADS)
Gaede, F.; Hegner, B.; Mato, P.
2017-10-01
PODIO is a C++ library that supports the automatic creation of event data models (EDMs) and efficient I/O code for HEP experiments. It is developed as a new EDM Toolkit for future particle physics experiments in the context of the AIDA2020 EU programme. Experience from LHC and the linear collider community shows that existing solutions partly suffer from overly complex data models with deep object-hierarchies or unfavorable I/O performance. The PODIO project was created in order to address these problems. PODIO is based on the idea of employing plain-old-data (POD) data structures wherever possible, while avoiding deep object-hierarchies and virtual inheritance. At the same time it provides the necessary high-level interface towards the developer physicist, such as the support for inter-object relations and automatic memory-management, as well as a Python interface. To simplify the creation of efficient data models PODIO employs code generation from a simple yaml-based markup language. In addition, it was developed with concurrency in mind in order to support the use of modern CPU features, for example giving basic support for vectorization techniques.
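To make the generation idea concrete, here is a toy Python sketch of emitting POD-style structs from a declarative description; the schema and emitter are invented for illustration and do not reproduce PODIO's actual yaml markup or its generated C++:

    # Illustrative EDM code generation from a declarative description, in the
    # spirit of PODIO's yaml-driven approach (hypothetical schema).
    datamodel = {
        "Hit": {"members": [("double", "energy"), ("double", "time")]},
        "Track": {"members": [("double", "chi2")],
                  "one_to_many": [("Hit", "hits")]},
    }

    def emit_pod(name, spec):
        lines = [f"struct {name}Data {{"]
        for ctype, member in spec.get("members", []):
            lines.append(f"  {ctype} {member};")
        for _, relation in spec.get("one_to_many", []):
            # relations stored as index ranges, keeping the struct flat
            lines.append(f"  unsigned int {relation}_begin, {relation}_end;")
        lines.append("};")
        return "\n".join(lines)

    for name, spec in datamodel.items():
        print(emit_pod(name, spec))

Storing relations as index ranges into collections, rather than as pointers, keeps the generated types flat and non-virtual, in the same spirit as the plain-old-data layout the abstract describes.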
Kerr, Douglas J R; Crowe, Trevor P; Oades, Lindsay G
2013-06-01
1) to understand the reconstruction of narrative identity during mental health recovery using a complex adaptive systems perspective, 2) to address the need for alternative approaches that embrace the complexities of health care. A narrative review of published literature was conducted. A complex adaptive systems perspective offers a framework and language that can assist individuals to make sense of their experiences and reconstruct their narratives during an often erratic and uncertain life transition. It is a novel research direction focused on a critical area of recovery and addresses the need for alternative approaches that embrace the complexities of health care. A complexity research approach to narrative identity reconstruction is valuable. It is an accessible model for addressing the complexities of recovery and may underpin the development of simple, practical recovery coaching tools. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Colle, Livia; Pellecchia, Giovanni; Moroni, Fabio; Carcione, Antonino; Nicolò, Giuseppe; Semerari, Antonio; Procacci, Michele
2017-01-01
Social sharing capacities have attracted attention from a number of fields of social cognition and have been variously defined and analyzed in numerous studies. Social sharing consists in the subjective awareness that aspects of the self’s experience are held in common with other individuals. The definition of social sharing must take a variety of elements into consideration: the motivational element, the contents of the social sharing experience, the emotional responses it evokes, the behavioral outcomes, and finally, the circumstances and the skills which enable social sharing. The primary objective of this study is to explore some of the diverse forms of human social sharing and to classify them according to levels of complexity. We identify four different types of social sharing, categorized according to the nature of the content being shared and the complexity of the mindreading skills required. The second objective of this study is to consider possible applications of this graded model of social sharing experience in clinical settings. Specifically, this model may support the development of graded, focused clinical interventions for patients with personality disorders characterized by severe social withdrawal. PMID:29255430
Mental visualization of objects from cross-sectional images
Wu, Bing; Klatzky, Roberta L.; Stetten, George D.
2011-01-01
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386
Transforming patient experience: health web science meets medicine 2.0.
McHattie, Lynn-Sayers; Cumming, Grant; French, Tara
2014-01-01
Until recently, the Western biomedical paradigm has been effective in delivering health care, however this model is not positioned to tackle complex societal challenges or solve the current problems facing health care and delivery. The future of medicine requires a shift to a patient-centric model and in so doing the Internet has a significant role to play. The disciplines of Health Web Science and Medicine 2.0 are pivotal to this approach. This viewpoint paper argues that these disciplines, together with the field of design, can tackle these challenges. Drawing together ideas from design practice and research, complexity theory, and participatory action research we depict design as an approach that is fundamentally social and linked to concepts of person-centered care. We discuss the role of design, specifically co-design, in understanding the social, psychological, and behavioral dimensions of illness and the implications for the design of future care towards transforming the patient experience. This paper builds on the presentations and subsequent interdisciplinary dialogue that developed from the panel session "Transforming Patient Experience: Health Web Science Meets Web 2.0" at the 2013 Medicine 2.0 conference in London.
Transforming Patient Experience: Health Web Science Meets Medicine 2.0
2014-01-01
Until recently, the Western biomedical paradigm has been effective in delivering health care, however this model is not positioned to tackle complex societal challenges or solve the current problems facing health care and delivery. The future of medicine requires a shift to a patient-centric model and in so doing the Internet has a significant role to play. The disciplines of Health Web Science and Medicine 2.0 are pivotal to this approach. This viewpoint paper argues that these disciplines, together with the field of design, can tackle these challenges. Drawing together ideas from design practice and research, complexity theory, and participatory action research we depict design as an approach that is fundamentally social and linked to concepts of person-centered care. We discuss the role of design, specifically co-design, in understanding the social, psychological, and behavioral dimensions of illness and the implications for the design of future care towards transforming the patient experience. This paper builds on the presentations and subsequent interdisciplinary dialogue that developed from the panel session "Transforming Patient Experience: Health Web Science Meets Web 2.0" at the 2013 Medicine 2.0 conference in London. PMID:25075246
Reflection Matrix Method for Controlling Light After Reflection From a Diffuse Scattering Surface
2016-12-22
reflective inverse diffusion, which was a proof-of-concept experiment that used phase modulation to shape the wavefront of a laser, causing it to refocus ... after reflection from a rough surface. By refocusing the light, reflective inverse diffusion has the potential to eliminate the complex radiometric model ... photography. However, the initial reflective inverse diffusion experiments provided no mathematical background and were conducted under the premise that the
Acquisition Community Team Dynamics: The Tuckman Model vs. the DAU Model
2007-04-30
courses. These student teams are used to enable the generation of more complex products and to prepare the students for the ... requirement for stage discreteness was met, I developed a stage-separation test that, when applied to the data representing the experience of a ... test the reliability, and validate an improved questionnaire instrument that: – Redefines "Storming" with new storming questions – Less focused
Basic investigation of turbine erosion phenomena
NASA Technical Reports Server (NTRS)
Pouchot, W. D.; Kothmann, R. E.; Fentress, W. K.; Heymann, F. J.; Varljen, T. C.; Chi, J. W. H.; Milton, J. D.; Glassmire, C. M.; Kyslinger, J. A.; Desai, K. A.
1971-01-01
An analytical-empirical model of turbine erosion is presented that fits and explains experience in both steam and metal vapor turbines. Because of the complexities involved in analyzing turbine problems in a pure scientific sense, it is obvious that this goal can be only partially realized. Therefore, emphasis is placed on providing a useful model for preliminary erosion estimates for given configurations, fluids, and flow conditions.
Adaptive Standard Operating Procedures for Complex Disasters
2017-03-01
Developments in Business Simulation and Experiential Learning 33 (2014). 23 Patrick Lagadec and Benjamin Topper, “How Crises Model the Modern World” ... field of crisis response. Therefore, this experiment supports the argument for implementing the adaptive design proposals. The adaptive SOP enhancement ... Kalay. “An Event-Based Model to Simulate Human Behaviour in Built Environments.” Proceedings of the 30th eCAADe Conference 1 (2012). Snowden
Street, Nichola; Forsythe, Alexandra M; Reilly, Ronan; Taylor, Richard; Helmy, Mai S
2016-01-01
Fractal patterns offer one way to represent the rough complexity of the natural world. Whilst they dominate many of our visual experiences in nature, little large-scale perceptual research has been done to explore how we respond aesthetically to these patterns. Previous research (Taylor et al., 2011) suggests that the fractal patterns with mid-range fractal dimensions (FDs) have universal aesthetic appeal. Perceptual and aesthetic responses to visual complexity have been more varied with findings suggesting both linear (Forsythe et al., 2011) and curvilinear (Berlyne, 1970) relationships. Individual differences have been found to account for many of the differences we see in aesthetic responses but some, such as culture, have received little attention within the fractal and complexity research fields. This two-study article aims to test preference responses to FD and visual complexity, using a large cohort (N = 443) of participants from around the world to allow universality claims to be tested. It explores the extent to which age, culture and gender can predict our preferences for fractally complex patterns. Following exploratory analysis that found strong correlations between FD and visual complexity, a series of linear mixed-effect models were implemented to explore if each of the individual variables could predict preference. The first tested a linear complexity model (likelihood of selecting the more complex image from the pair of images) and the second a mid-range FD model (likelihood of selecting an image within mid-range). Results show that individual differences can reliably predict preferences for complexity across culture, gender and age. However, in fitting with current findings the mid-range models show greater consistency in preference not mediated by gender, age or culture. This article supports the established theory that the mid-range fractal patterns appear to be a universal construct underlying preference but also highlights the fragility of universal claims by demonstrating individual differences in preference for the interrelated concept of visual complexity. This highlights a current stalemate in the field of empirical aesthetics.
Reeve, Joanne; Cooper, Lucy; Harrington, Sean; Rosbottom, Peter; Watkins, Jane
2016-09-06
Health services face the challenges created by complex problems, and so need complex intervention solutions. However they also experience ongoing difficulties in translating findings from research in this area in to quality improvement changes on the ground. BounceBack was a service development innovation project which sought to examine this issue through the implementation and evaluation in a primary care setting of a novel complex intervention. The project was a collaboration between a local mental health charity, an academic unit, and GP practices. The aim was to translate the charity's model of care into practice-based evidence describing delivery and impact. Normalisation Process Theory (NPT) was used to support the implementation of the new model of primary mental health care into six GP practices. An integrated process evaluation evaluated the process and impact of care. Implementation quickly stalled as we identified problems with the described model of care when applied in a changing and variable primary care context. The team therefore switched to using the NPT framework to support the systematic identification and modification of the components of the complex intervention: including the core components that made it distinct (the consultation approach) and the variable components (organisational issues) that made it work in practice. The extra work significantly reduced the time available for outcome evaluation. However findings demonstrated moderately successful implementation of the model and a suggestion of hypothesised changes in outcomes. The BounceBack project demonstrates the development of a complex intervention from practice. It highlights the use of Normalisation Process Theory to support development, and not just implementation, of a complex intervention; and describes the use of the research process in the generation of practice-based evidence. Implications for future translational complex intervention research supporting practice change through scholarship are discussed.
Looping and clustering model for the organization of protein-DNA complexes on the bacterial genome
NASA Astrophysics Data System (ADS)
Walter, Jean-Charles; Walliser, Nils-Ole; David, Gabriel; Dorignac, Jérôme; Geniet, Frédéric; Palmeri, John; Parmeggiani, Andrea; Wingreen, Ned S.; Broedersz, Chase P.
2018-03-01
The bacterial genome is organized by a variety of associated proteins inside a structure called the nucleoid. These proteins can form complexes on DNA that play a central role in various biological processes, including chromosome segregation. A prominent example is the large ParB-DNA complex, which forms an essential component of the segregation machinery in many bacteria. ChIP-Seq experiments show that ParB proteins localize around centromere-like parS sites on the DNA to which ParB binds specifically, and spreads from there over large sections of the chromosome. Recent theoretical and experimental studies suggest that DNA-bound ParB proteins can interact with each other to condense into a coherent 3D complex on the DNA. However, the structural organization of this protein-DNA complex remains unclear, and a predictive quantitative theory for the distribution of ParB proteins on DNA is lacking. Here, we propose the looping and clustering model, which employs a statistical physics approach to describe protein-DNA complexes. The looping and clustering model accounts for the extrusion of DNA loops from a cluster of interacting DNA-bound proteins that is organized around a single high-affinity binding site. Conceptually, the structure of the protein-DNA complex is determined by a competition between attractive protein interactions and loop closure entropy of this protein-DNA cluster on the one hand, and the positional entropy for placing loops within the cluster on the other. Indeed, we show that the protein interaction strength determines the ‘tightness’ of the loopy protein-DNA complex. Thus, our model provides a theoretical framework for quantitatively computing the binding profiles of ParB-like proteins around a cognate (parS) binding site.
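A schematic way to write the competition described in this abstract (an illustrative free-energy form assumed for exposition, not taken from the paper) is:

    \frac{F(m,q)}{k_{\mathrm{B}}T} \;\approx\;
        -\,J\,m
        \;+\; \frac{3}{2}\sum_{i=1}^{q} \ln \ell_i
        \;-\; \ln \binom{m}{q}

where the first term is the protein-protein attraction (energy J per bound protein among m in the cluster), the second is the Gaussian-chain entropy cost of closing q loops of lengths ℓ_i, and the last counts the ways of placing the loops within the cluster (positional entropy); a larger J tightens the complex, consistent with the abstract's statement that interaction strength sets the 'tightness' of the loopy protein-DNA complex.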
PHOTOCHEMICAL PRODUCTS IN URBAN MIXTURES ENHANCE INFLAMMATORY RESPONSES IN LUNG CELLS
Complex urban air mixtures that realistically mimic urban smog can be generated for investigating adverse health effects. "Smog chambers" have been used for over 30 yr to conduct experiments for developing and testing photochemical models that predict ambient ozone (O3) concent...
Docking and scoring protein complexes: CAPRI 3rd Edition.
Lensink, Marc F; Méndez, Raúl; Wodak, Shoshana J
2007-12-01
The performance of methods for predicting protein-protein interactions at the atomic scale is assessed by evaluating blind predictions performed during 2005-2007 as part of Rounds 6-12 of the community-wide experiment on Critical Assessment of PRedicted Interactions (CAPRI). These Rounds also included a new scoring experiment, where a larger set of models contributed by the predictors was made available to groups developing scoring functions. These groups scored the uploaded set and submitted their own best models for assessment. The structures of nine protein complexes including one homodimer were used as targets. These targets represent biologically relevant interactions involved in gene expression, signal transduction, RNA, or protein processing and membrane maintenance. For all the targets except one, predictions started from the experimentally determined structures of the free (unbound) components or from models derived by homology, making it mandatory for docking methods to model the conformational changes that often accompany association. In total, 63 groups and eight automatic servers, a substantial increase from previous years, submitted docking predictions, of which 1994 were evaluated here. Fifteen groups submitted 305 models for five targets in the scoring experiment. Assessment of the predictions reveals that 31 different groups produced models of acceptable and medium accuracy-but only one high accuracy submission-for all the targets, except the homodimer. In the latter, none of the docking procedures reproduced the large conformational adjustment required for correct assembly, underscoring yet again that handling protein flexibility remains a major challenge. In the scoring experiment, a large fraction of the groups attained the set goal of singling out the correct association modes from incorrect solutions in the limited ensembles of contributed models. But in general they seemed unable to identify the best models, indicating that current scoring methods are probably not sensitive enough. With the increased focus on protein assemblies, in particular by structural genomics efforts, the growing community of CAPRI predictors is engaged more actively than ever in the development of better scoring functions and means of modeling conformational flexibility, which hold promise for much progress in the future. (c) 2007 Wiley-Liss, Inc.
Taupitz, Thomas; Dressman, Jennifer B; Buchanan, Charles M; Klein, Sandra
2013-04-01
The aim of the present series of experiments was to improve the solubility and dissolution/precipitation behaviour of a poorly soluble, weakly basic drug, using itraconazole as a case example. Binary inclusion complexes of itraconazole with two commonly used cyclodextrin derivatives and a recently introduced cyclodextrin derivative were prepared. Their solubility and dissolution behaviour was compared with that of the pure drug and the marketed formulation Sporanox®. Ternary complexes were prepared by addition of Soluplus®, a new highly water soluble polymer, during the formation of the itraconazole/cyclodextrin complex. A solid dispersion made of itraconazole and Soluplus® was also studied as a control. Solid state analysis was performed for all formulations and for pure itraconazole using powder X-ray diffraction (pX-RD) and differential scanning calorimetry (DSC). Solubility tests indicated that all formulation approaches improved the aqueous solubility of itraconazole; the ternary complexes formed with hydroxypropyl-β-cyclodextrin (HP-β-CD) or hydroxybutenyl-β-cyclodextrin (HBen-β-CD) and Soluplus® proved to be the most favourable formulation approaches. Whereas the marketed formulation and the pure drug showed very poor dissolution, both of these ternary inclusion complexes resulted in fast and extensive release of itraconazole in all test media. Using the results of the dissolution experiments, a newly developed physiologically based pharmacokinetic (PBPK) in silico model was applied to compare the in vivo behaviour of Sporanox® with the predicted performance of the most promising ternary complexes from the in vitro studies. The PBPK modelling predicted that the bioavailability of itraconazole is likely to be increased after oral administration of ternary complex formulations, especially when itraconazole is formulated as a ternary complex comprising HP-β-CD or HBen-β-CD and Soluplus®. Copyright © 2012 Elsevier B.V. All rights reserved.
An Agent-Based Model for Studying Child Maltreatment and Child Maltreatment Prevention
NASA Astrophysics Data System (ADS)
Hu, Xiaolin; Puddy, Richard W.
This paper presents an agent-based model that simulates the dynamics of child maltreatment and child maltreatment prevention. The developed model follows the principles of complex systems science and explicitly models a community and its families with multi-level factors and interconnections across the social ecology. This makes it possible to experiment how different factors and prevention strategies can affect the rate of child maltreatment. We present the background of this work and give an overview of the agent-based model and show some simulation results.
Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor
2017-11-01
The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. Form-then-meaning accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings, whereas form-and-meaning models posit that recognition of complex word forms involves the simultaneous access of morphological and semantic information. The study reported here addresses this theoretical discrepancy by applying a nonparametric distributional technique of survival analysis (Reingold & Sheridan, 2014) to 2 behavioral measures of complex word processing. Across 7 experiments reported here, this technique is employed to estimate the point in time at which orthographic, morphological, and semantic variables exert their earliest discernible influence on lexical decision RTs and eye movement fixation durations. Contrary to form-then-meaning predictions, Experiments 1-4 reveal that surface frequency is the earliest lexical variable to exert a demonstrable influence on lexical decision RTs for English and Dutch derived words (e.g., badness; bad + ness), English pseudoderived words (e.g., wander; wand + er) and morphologically simple control words (e.g., ballad; ball + ad). Furthermore, for derived word processing across lexical decision and eye-tracking paradigms (Experiments 1-2; 5-7), semantic effects emerge early in the time-course of word recognition, and their effects either precede or emerge simultaneously with morphological effects. These results are not consistent with the premises of the form-then-meaning view of complex word recognition, but are convergent with a form-and-meaning account of complex word recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
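The divergence-point logic referenced here (Reingold & Sheridan, 2014) can be sketched on synthetic RTs: bootstrap the survival curves of two conditions and take the earliest bin where one reliably exceeds the other. Bin width, sample sizes, and the 95% criterion below are illustrative choices, not the published procedure's exact settings.

    import numpy as np

    rng = np.random.default_rng(0)
    rt_a = rng.gamma(6, 60, 2000)            # e.g., high-frequency condition
    rt_b = rng.gamma(6, 60, 2000) + 30       # slowed condition

    bins = np.arange(0, 1500, 10)
    def survival(rt):                        # P(RT > t) on the bin grid
        return 1.0 - np.searchsorted(np.sort(rt), bins) / len(rt)

    n_boot, diverged = 1000, np.zeros(len(bins))
    for _ in range(n_boot):
        sa = survival(rng.choice(rt_a, len(rt_a)))
        sb = survival(rng.choice(rt_b, len(rt_b)))
        diverged += sb - sa > 0
    # earliest bin where condition b's survival reliably exceeds a's
    dp = bins[np.argmax(diverged / n_boot > 0.95)]
    print(f"estimated divergence point ~ {dp} ms")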
Measuring user experience in digital gaming: theoretical and methodological issues
NASA Astrophysics Data System (ADS)
Takatalo, Jari; Häkkinen, Jukka; Kaistinen, Jyrki; Nyman, Göte
2007-01-01
There are innumerable concepts, terms and definitions for user experience. Few of them have a solid empirical foundation. In trying to understand user experience in interactive technologies such as computer games and virtual environments, reliable and valid concepts are needed for measuring relevant user reactions and experiences. Here we present our approach to creating both theoretically and methodologically sound methods for quantification of the rich user experience in different digital environments. Our approach is based on the idea that the experience received from a content presented with a specific technology is always a result of a complex psychological interpretation process, whose components should be understood. The main aim of our approach is to grasp the complex and multivariate nature of the experience and make it measurable. We will present our two basic measurement frameworks, which have been developed and tested in a large data set (n=2182). The 15 measurement scales extracted from these models are applied to digital gaming with a head-mounted display and a table-top display. The results show how it is possible to map between experience, technology variables and the background of the user (e.g., gender). This approach can help to optimize, for example, the contents for specific viewing devices or viewing situations.
Modelling landscape evolution at the flume scale
NASA Astrophysics Data System (ADS)
Cheraghi, Mohsen; Rinaldo, Andrea; Sander, Graham C.; Barry, D. Andrew
2017-04-01
The ability of a large-scale Landscape Evolution Model (LEM) to simulate the soil surface morphological evolution as observed in a laboratory flume (1-m × 2-m surface area) was investigated. The soil surface was initially smooth, and was subjected to heterogeneous rainfall in an experiment designed to avoid rill formation. Low-cohesive fine sand was placed in the flume while the slope and relief height were 5 % and 20 cm, respectively. Non-uniform rainfall with an average intensity of 85 mm h-1 and a standard deviation of 26 % was applied to the sediment surface for 16 h. We hypothesized that the complex overland water flow can be represented by a drainage discharge network, which was calculated via the micro-morphology and the rainfall distribution. Measurements included high resolution Digital Elevation Models that were captured at intervals during the experiment. The calibrated LEM captured the migration of the main flow path from the low precipitation area into the high precipitation area. Furthermore, both model and experiment showed a steep transition zone in soil elevation that moved upstream during the experiment. We conclude that the LEM is applicable under non-uniform rainfall and in the absence of surface incisions, thereby extending its applicability beyond that shown in previous applications. Keywords: Numerical simulation, Flume experiment, Particle Swarm Optimization, Sediment transport, River network evolution model.
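The drainage-network step described here can be sketched with a minimal steepest-descent (D8-style) flow accumulation; the DEM and rainfall below are invented, whereas the study's actual computation uses the measured micro-morphology and the non-uniform rainfall field:

    import numpy as np

    rng = np.random.default_rng(1)
    z = np.linspace(1, 0, 40)[:, None] + 0.01 * rng.random((40, 30))  # sloping DEM
    rain = np.full(z.shape, 85.0)            # mm/h; could be made non-uniform

    cells = sorted(np.ndindex(z.shape), key=lambda ij: -z[ij])  # high to low
    discharge = rain.copy()
    for i, j in cells:                       # pass accumulated flow downslope
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di or dj) and 0 <= i + di < z.shape[0]
                and 0 <= j + dj < z.shape[1]]
        lowest = min(nbrs, key=lambda ij: z[ij])
        if z[lowest] < z[i, j]:
            discharge[lowest] += discharge[i, j]
    print("max accumulated discharge:", discharge.max())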
Circular analysis in complex stochastic systems
Valleriani, Angelo
2015-01-01
Ruling out observations can lead to wrong models. This danger occurs unwillingly when one selects observations, experiments, simulations or time-series based on their outcome. In stochastic processes, conditioning on the future outcome biases all local transition probabilities and makes them consistent with the selected outcome. This circular self-consistency leads to models that are inconsistent with physical reality. It is also the reason why models built solely on macroscopic observations are prone to this fallacy. PMID:26656656
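A minimal simulation makes the fallacy tangible: estimate the transition probabilities of a two-state Markov chain only from runs that end in a chosen state, and the estimates no longer match the true matrix (synthetic example):

    import numpy as np

    rng = np.random.default_rng(2)
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])               # true transition matrix
    paths = []
    for _ in range(20000):
        x = [0]
        for _ in range(10):
            x.append(rng.choice(2, p=P[x[-1]]))
        paths.append(x)

    kept = [x for x in paths if x[-1] == 1]   # keep only runs ending in state 1
    counts = np.zeros((2, 2))
    for x in kept:
        for a, b in zip(x, x[1:]):
            counts[a, b] += 1
    print("apparent P, conditioned on outcome:")
    print(counts / counts.sum(axis=1, keepdims=True))

The conditioned estimate inflates transitions that lead toward the selected outcome, exactly the circular self-consistency the abstract warns about.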
Xia, Bing; Mamonov, Artem; Leysen, Seppe; Allen, Karen N; Strelkov, Sergei V; Paschalidis, Ioannis Ch; Vajda, Sandor; Kozakov, Dima
2015-07-30
The protein-protein docking server ClusPro is used by thousands of laboratories, and models built by the server have been reported in over 300 publications. Although the structures generated by the docking include near-native ones for many proteins, selecting the best model is difficult due to the uncertainty in scoring. Small angle X-ray scattering (SAXS) is an experimental technique for obtaining low resolution structural information in solution. While not sufficient on its own to uniquely predict complex structures, accounting for SAXS data improves the ranking of models and facilitates the identification of the most accurate structure. Although SAXS profiles are currently available only for a small number of complexes, due to its simplicity the method is becoming increasingly popular. Since combining docking with SAXS experiments will provide a viable strategy for fairly high-throughput determination of protein complex structures, the option of using SAXS restraints is added to the ClusPro server. © 2015 Wiley Periodicals, Inc.
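The kind of SAXS restraint involved can be sketched with the standard chi measure between an experimental and a computed profile, using an analytically fitted scale factor; how ClusPro weights this against its docking score is not specified here, so the combination below is illustrative:

    import numpy as np

    def saxs_chi(I_exp, sigma, I_model):
        """Chi fit of a model scattering profile to experiment."""
        w = 1.0 / sigma ** 2
        c = np.sum(w * I_exp * I_model) / np.sum(w * I_model ** 2)  # best scale
        return np.sqrt(np.mean(((I_exp - c * I_model) / sigma) ** 2))

    def combined_score(dock_score, I_exp, sigma, I_model, w_saxs=1.0):
        # re-rank docking models by docking score plus SAXS agreement;
        # the weight w_saxs is a placeholder, not ClusPro's actual setting
        return dock_score + w_saxs * saxs_chi(I_exp, sigma, I_model)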
Dynamic and impact contact mechanics of geologic materials: Grain-scale experiments and modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cole, David M.; Hopkins, Mark A.; Ketcham, Stephen A.
2013-06-18
High fidelity treatments of the generation and propagation of seismic waves in naturally occurring granular materials are becoming more practical given recent advancements in our ability to model complex particle shapes and their mechanical interaction. Of particular interest are the grain-scale processes that are activated by impact events and the characteristics of force transmission through grain contacts. To address this issue, we have developed a physics-based approach that involves laboratory experiments to quantify the dynamic contact and impact behavior of granular materials and incorporation of the observed behavior in discrete element models. The dynamic experiments do not involve particle damage and emphasis is placed on measured values of contact stiffness and frictional loss. The normal stiffness observed in dynamic contact experiments at low frequencies (e.g., 10 Hz) is shown to be in good agreement with quasistatic experiments on quartz sand. The results of impact experiments - which involve moderate to extensive levels of particle damage - are presented for several types of naturally occurring granular materials (several quartz sands, magnesite and calcium carbonate ooids). Implementation of the experimental findings in discrete element models is discussed and the results of impact simulations involving up to 5 × 10^5 grains are presented.
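A minimal sketch of the kind of contact law such experiments calibrate is a linear spring-dashpot normal contact; the parameters below are placeholders, not the measured quartz-sand values:

    k_n, c_n, m = 1.0e6, 50.0, 1.0e-3        # stiffness (N/m), damping, mass (kg)
    delta, ddelta, dt = 0.0, 0.5, 1.0e-7     # overlap (m), closing speed (m/s), step (s)

    while ddelta > 0.0 or delta > 0.0:       # integrate one head-on impact
        F = max(k_n * delta + c_n * ddelta, 0.0)   # repulsive only, no tension
        ddelta -= (F / m) * dt               # contact force decelerates closure
        delta += ddelta * dt

    print("rebound speed:", -ddelta, "m/s, of 0.5 m/s incoming")

The damping term dissipates energy during contact, so the rebound speed is below the approach speed; fitting k_n and c_n to measured stiffness and frictional loss is the calibration step the abstract describes.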
Mathematical Modeling of Multiphase Filtration in Porous Media with a Chemically Active Skeleton
NASA Astrophysics Data System (ADS)
Khramchenkov, M. G.; Khramchenkov, É. M.
2018-01-01
The authors propose a mathematical model of two-phase filtration that occurs under the conditions of dissolution of a porous medium. The model can be used for the joint description of complex chemical-hydrogeomechanical processes that frequently occur in oil and gas production and environmental protection practice. As an example, consideration is given to the acidizing of the bottomhole zone of the injection well of an oil reservoir. The enclosing rocks are carbonates. The phases of the process are an aqueous solution of hydrochloric acid and oil. A software product for computational experiments is developed. For the numerical experiments, use is made of data on the wells of an actual oil field. Good agreement is obtained between the field data and the calculated data. Numerical experiments with different configurations of the permeability of an oil stratum are conducted.
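In schematic form (a generic textbook closure assumed here, not necessarily the authors' exact formulation), such a model couples two-phase Darcy flow with a porosity-evolution law driven by the acid:

    \frac{\partial(\phi\,\rho_\alpha S_\alpha)}{\partial t}
        + \nabla\!\cdot\!(\rho_\alpha \mathbf{u}_\alpha) = q_\alpha,
    \qquad
    \mathbf{u}_\alpha = -\,\frac{k\,k_{r\alpha}}{\mu_\alpha}\,\nabla p,
    \qquad
    \frac{\partial \phi}{\partial t} = \kappa\, a\, c_{\mathrm{acid}}

with α running over the acid-solution and oil phases, and the last equation expressing carbonate dissolution: porosity (and hence permeability) grows where the acid concentration c_acid is high, with κ a rate constant and a the reactive specific surface.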
Modeling dynamic beta-gamma polymorphic transition in Tin
NASA Astrophysics Data System (ADS)
Chauvin, Camille; Montheillet, Frank; Petit, Jacques; CEA Gramat Collaboration; EMSE Collaboration
2015-06-01
Solid-solid phase transitions in metals have been studied by shock-wave techniques for many decades. Recent experiments have investigated the transition during isentropic compression and shock-wave compression and have highlighted the strong influence of the loading rate on the transition. Complementary velocity and temperature measurements around the beta-gamma polymorphic transition of tin in gas gun experiments have demonstrated the importance of the kinetics of the transition. But even though this phenomenon is known, modeling the kinetics remains complex and based on empirical formulations. A multiphase EOS is available in our 1D Lagrangian code Unidim. We propose to present the influence of various kinetic laws (either empirical or involving nucleation and growth mechanisms) and their parameters (Gibbs free energy, temperature, pressure) on the transformation rate. We compare experimental and calculated velocity and temperature profiles and we underline the effects of the empirical parameters of these models.
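One common empirical form for such a kinetic law (a KJMA-type illustration assumed here; the paper compares several formulations) relates the transformed fraction λ to the thermodynamic driving force:

    \frac{d\lambda}{dt} = (1-\lambda)\,\nu\,
        \exp\!\left(-\frac{\Delta G^{*}(P,T)}{k_{\mathrm{B}}T}\right)

where ν is an attempt frequency and ΔG*(P,T) an activation barrier that shrinks as the pressure overshoots the β-γ equilibrium boundary, so the transformation rate rises steeply with the overdrive; this is the kind of parameter dependence (Gibbs free energy, temperature, pressure) the abstract mentions.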
Modeling the effects of variable groundwater chemistry on adsorption of molybdate
Stollenwerk, Kenneth G.
1995-01-01
Laboratory experiments were used to identify and quantify processes having a significant effect on molybdate (MoO42−) adsorption in a shallow alluvial aquifer on Cape Cod, Massachusetts. Aqueous chemistry in the aquifer changes as a result of treated sewage effluent mixing with groundwater. Molybdate adsorption decreased as pH, ionic strength, and the concentration of competing anions increased. A diffuse-layer surface complexation model was used to simulate adsorption of MoO42−, phosphate (PO43−), and sulfate (SO42−) on aquifer sediment. Equilibrium constants for the model were calculated by calibration to data from batch experiments. The model was then used in a one-dimensional solute transport program to successfully simulate initial breakthrough of MoO42− from column experiments. A shortcoming of the solute transport program was the inability to account for kinetics of physical and chemical processes. This resulted in a failure of the model to predict the slow rate of desorption of MoO42− from the columns. The mobility of MoO42− increased with ionic strength and with the formation of aqueous complexes with calcium, magnesium, and sodium. Failure to account for MoO42− speciation and ionic strength in the model resulted in overpredicting MoO42− adsorption. Qualitatively, the laboratory data predicted the observed behavior of MoO42− in the aquifer, where retardation of MoO42− was greatest in uncontaminated groundwater having low pH, low ionic strength, and low concentrations of PO43− and SO42−.
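The net effect of equilibrium adsorption on transport can be sketched with a retardation-factor form of the one-dimensional advection equation. The lumped sorption coefficient below is illustrative; in the study's diffuse-layer surface complexation model this coefficient effectively varies with pH, ionic strength, and competing anions.

    import numpy as np

    n, dx, dt, v = 200, 0.01, 0.005, 1.0     # cells, m, days, m/day
    R = 1.0 + 1.5                            # retardation from lumped sorption
    c = np.zeros(n)

    for _ in range(400):                     # explicit upwind advection
        c[1:] -= (v * dt / (R * dx)) * (c[1:] - c[:-1])
        c[0] = 1.0                           # constant-concentration inlet
    print("sorbing front at ~", np.argmax(c < 0.5) * dx, "m")
    print("a conservative tracer front would be at ~", v * 400 * dt, "m")

The sorbing solute's breakthrough is delayed by the factor R relative to a conservative tracer, which is the retardation behavior observed in the column experiments.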
Hybrid deterministic/stochastic simulation of complex biochemical systems.
Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina
2017-11-21
In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet lab experiments. To serve their purpose, computational models, however, should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid enlarging excessively the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which is often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally time-expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reactions' set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast reaction waiting time but smaller than the slow reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crosses the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
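The partitioning idea can be sketched as follows; this is a simplified two-class version with invented thresholds and a crude firing rule, whereas MoBioS additionally tracks a moderate class with hysteresis between the two regimes:

    import numpy as np

    rng = np.random.default_rng(3)

    def hybrid_step(x, reactions, t, tau_fast=1e-3, dt=1e-5):
        """reactions: list of (rate_fn, stoichiometry_vector) pairs."""
        props = np.array([rate(x) for rate, _ in reactions])
        waits = 1.0 / np.maximum(props, 1e-30)     # expected waiting times
        fast = waits < tau_fast
        for (rate, stoich), is_fast in zip(reactions, fast):
            if is_fast:                            # rate-equation (Euler) update
                x = x + stoich * rate(x) * dt
        slow = np.where(~fast)[0]
        if slow.size:                              # first-reaction step, crude:
            taus = rng.exponential(waits[slow])    # fire only if within dt
            j = slow[np.argmin(taus)]
            if taus.min() < dt:
                x = x + reactions[j][1]
        return x, t + dt

    # toy system: fast first-order decay of A, slow zero-order production
    reactions = [(lambda x: 50.0 * x[0], np.array([-1.0])),
                 (lambda x: 0.5, np.array([+1.0]))]
    x, t = np.array([100.0]), 0.0
    for _ in range(1000):
        x, t = hybrid_step(x, reactions, t)
    print("A after", round(t, 4), "time units:", x)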
NASA Astrophysics Data System (ADS)
Chuvashov, I. N.
2011-07-01
This paper describes a complex of algorithms and programs for solving inverse problems of artificial Earth satellite dynamics. The complex is intended for satellite orbit improvement, estimation of motion model parameters, and related tasks. The program complex was developed for the "Skiff Cyberia" cluster. Results of numerical experiments obtained by using the new complex together with the program "Numerical model of the system artificial satellites motion" are presented in this paper.
Chadeau-Hyam, Marc; Campanella, Gianluca; Jombart, Thibaut; Bottolo, Leonardo; Portengen, Lutzen; Vineis, Paolo; Liquet, Benoit; Vermeulen, Roel C H
2013-08-01
Recent technological advances in molecular biology have given rise to numerous large-scale datasets whose analysis imposes serious methodological challenges mainly relating to the size and complex structure of the data. Considerable experience in analyzing such data has been gained over the past decade, mainly in genetics, from the Genome-Wide Association Study era, and more recently in transcriptomics and metabolomics. Building upon the corresponding literature, we provide here a nontechnical overview of well-established methods used to analyze OMICS data within three main types of regression-based approaches: univariate models including multiple testing correction strategies, dimension reduction techniques, and variable selection models. Our methodological description focuses on methods for which ready-to-use implementations are available. We describe the main underlying assumptions, the main features, and advantages and limitations of each of the models. This descriptive summary constitutes a useful tool for driving methodological choices while analyzing OMICS data, especially in environmental epidemiology, where the emergence of the exposome concept clearly calls for unified methods to analyze marginally and jointly complex exposure and OMICS datasets. Copyright © 2013 Wiley Periodicals, Inc.
A closer look at the complex hydrophilic/hydrophobic interaction forces at the human hair surface
NASA Astrophysics Data System (ADS)
Baghdadli, N.; Luengo, G. S.; Recherche, L.
2008-03-01
The complex chemical structure of the hair surface is far from being completely understood. Current understanding is based on Rivett's model1, which was proposed to explain the macroscopic hydrophobic nature of the surface of natural hair. In this model, covalently linked fatty acids are chemically grafted to the amorphous protein (keratin) through a thio-ester linkage2,3. Nevertheless, experiments such as wetting and electrical-property measurements of the human hair surface4 show that the complexity of the hair surface is not fully explained by this model in the literature. Recent studies in our laboratory show for the first time microscopic evidence of the heterogeneous physico-chemical character of the hair surface. By using Chemical Force Microscopy, the presence of hydrophobic and ionic species is detected and localized, before and after a cosmetic treatment (bleaching). Based on force-curve analysis, a mapping of the local distribution of hydrophilic and hydrophobic groups at the hair surface is obtained. A discussion of a more plausible hair model and its implications, based on these new results, will be presented.
Kindermans, Pieter-Jan; Verschore, Hannes; Schrauwen, Benjamin
2013-10-01
In recent years, in an attempt to maximize performance, machine learning approaches for event-related potential (ERP) spelling have become more and more complex. In this paper, we have taken a step back as we wanted to improve the performance without building an overly complex model that cannot be used by the community. Our research resulted in a unified probabilistic model for ERP spelling, which is based on only three assumptions and incorporates language information. On top of that, the probabilistic nature of our classifier yields a natural dynamic stopping strategy. Furthermore, our method uses the same parameters across 25 subjects from three different datasets. We show that our classifier, when enhanced with language models and dynamic stopping, improves the spelling speed and accuracy drastically. Additionally, we would like to point out that as our model is entirely probabilistic, it can easily be used as the foundation for complex systems in future work. All our experiments are executed on publicly available datasets to allow for future comparison with similar techniques.
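The unified probabilistic idea can be caricatured in a few lines: keep a posterior over candidate letters, fold in classifier evidence per flash (and a language-model prior), and stop as soon as one letter is confident. The scores and thresholds below are synthetic stand-ins, not the paper's likelihood model.

    import numpy as np

    letters = list("ABCDEFGH")
    prior = np.full(len(letters), 1.0 / len(letters))  # language model goes here
    log_post = np.log(prior)

    rng = np.random.default_rng(4)
    true = 2                                            # target letter 'C'
    for flash in range(100):
        ll = rng.normal(0.0, 1.0, len(letters))         # per-flash classifier
        ll[true] += 0.4                                 # target scores higher
        log_post += ll
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() > 0.99:                           # dynamic stopping rule
            break
    print(f"decided '{letters[post.argmax()]}' after {flash + 1} flashes")

Because stopping is driven by the posterior itself, easy trials finish after few flashes and hard trials automatically collect more evidence, which is where the speed and accuracy gains come from.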
Vale, Gillian L.; Davis, Sarah J.; Lambeth, Susan P.; Schapiro, Steven J.; Whiten, Andrew
2017-01-01
Cumulative culture underpins humanity’s enormous success as a species. Claims that other animals are incapable of cultural ratcheting are prevalent, but are founded on just a handful of empirical studies. Whether cumulative culture is unique to humans thus remains a controversial and understudied question that has far-reaching implications for our understanding of the evolution of this phenomenon. We investigated whether one of humans’ two closest living primate relatives, chimpanzees, are capable of a degree of cultural ratcheting by exposing captive populations to a novel juice extraction task. We found that groups (N = 3) seeded with a model trained to perform a tool modification that built upon simpler, unmodified tool use developed the seeded tool method that allowed greater juice returns than achieved by groups not exposed to a trained model (non-seeded controls; N = 3). One non-seeded group also discovered the behavioral sequence, either by coupling asocial and social learning or by repeated invention. This behavioral sequence was found to be beyond what an additional control sample of chimpanzees (N = 1 group) could discover for themselves without a competent model and lacking experience with simpler, unmodified tool behaviors. Five chimpanzees tested individually with no social information, but with experience of simple unmodified tool use, invented part, but not all, of the behavioral sequence. Our findings indicate that (i) social learning facilitated the propagation of the model-demonstrated tool modification technique, (ii) experience with simple tool behaviors may facilitate individual discovery of more complex tool manipulations, and (iii) a subset of individuals were capable of learning relatively complex behaviors either by learning asocially and socially or by repeated invention over time. That chimpanzees learn increasingly complex behaviors through social and asocial learning suggests that humans’ extraordinary ability to do so was built on such prior foundations. PMID:29333058
A Historical Evaluation of the U15 Complex, Nevada National Security Site, Nye County, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drollinger, Harold; Holz, Barbara A.; Bullard, Thomas F.
2014-01-01
This report presents a historical evaluation of the U15 Complex on the Nevada National Security Site (NNSS) in southern Nevada. The work was conducted by the Desert Research Institute at the request of the U.S. Department of Energy, National Nuclear Security Administration Nevada Field Office and the U.S. Department of Defense, Defense Threat Reduction Agency. Three underground nuclear tests and two underground nuclear fuel storage experiments were conducted at the complex. The nuclear tests were Hard Hat in 1962, Tiny Tot in 1965, and Pile Driver in 1966. The Hard Hat and Pile Driver nuclear tests involved different types of experiment sections in test drifts at various distances from the explosion in order to determine which sections could best survive in order to design underground command centers. The Tiny Tot nuclear test involved an underground cavity in which the nuclear test was executed. It also provided data in designing underground structures and facilities to withstand a nuclear attack. The underground nuclear fuel storage experiments were Heater Test 1 from 1977 to 1978 and Spent Fuel Test - Climax from 1978 to 1985. Heater Test 1 was used to design the later Spent Fuel Test - Climax experiment. The latter experiment was a model of a larger underground storage facility and primarily involved recording the conditions of the spent fuel and the surrounding granite medium. Fieldwork was performed intermittently in the summers of 2011 and 2013, totaling 17 days. Access to the underground tunnel complex is sealed and unavailable. Restricted to the surface, four buildings, four structures, and 92 features associated with nuclear testing and fuel storage experiment activities at the U15 Complex have been recorded. Most of these are along the west side of the complex and next to the primary access road and are characteristic of an industrial mining site, albeit one with scientific interests. The geomorphological fieldwork was conducted over three days in the summer of 2011. It was discovered that major modifications to the terrain have resulted from four principal activities. These are road construction and maintenance, mining activities related to development of the tunnel complex, site preparation for activities related to the tests and experiments, and construction of drill pads and retention ponds. Six large trenches for exploring across the Boundary geologic fault are also present. The U15 Complex, designated historic district 143 and site 26NY15177, is eligible to the National Register of Historic Places under Criteria A, C, and D of 36 CFR Part 60.4. As a historic district and archaeological site eligible to the National Register of Historic Places, the Desert Research Institute recommends that the area defined for the U15 Complex, historic district 143 and site 26NY15117, be left in place in its current condition. The U15 Complex should also be included in the NNSS cultural resources monitoring program and monitored for disturbances or alterations.
Design of a Model Execution Framework: Repetitive Object-Oriented Simulation Environment (ROSE)
NASA Technical Reports Server (NTRS)
Gray, Justin S.; Briggs, Jeffery L.
2008-01-01
The ROSE framework was designed to facilitate complex system analyses. It completely divorces the model execution process from the model itself. By doing so ROSE frees the modeler to develop a library of standard modeling processes such as Design of Experiments, optimizers, parameter studies, and sensitivity studies which can then be applied to any of their available models. The ROSE framework accomplishes this by means of a well defined API and object structure. Both the API and object structure are presented here with enough detail to implement ROSE in any object-oriented language or modeling tool.
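The separation ROSE enforces can be sketched with a minimal interface in Python (the method names below are invented for illustration; the actual ROSE API is defined in the paper): any model exposing the interface can be driven by any standard process, here a tiny parameter study.

    class Model:
        def set_inputs(self, **kwargs): ...
        def execute(self): ...
        def get_outputs(self): ...

    class ParaboloidModel(Model):
        def set_inputs(self, **kwargs): self.x = kwargs["x"]
        def execute(self): self.f = (self.x - 3.0) ** 2
        def get_outputs(self): return {"f": self.f}

    def parameter_study(model, name, values):
        """Reusable process: works with any Model, not just ParaboloidModel."""
        results = []
        for v in values:
            model.set_inputs(**{name: v})
            model.execute()
            results.append((v, model.get_outputs()))
        return results

    print(parameter_study(ParaboloidModel(), "x", [0.0, 1.5, 3.0, 4.5]))

Because the process only touches the interface, the same parameter_study (or a DOE driver, optimizer, or sensitivity study) applies unchanged to any model in the library, which is the division of labor the abstract describes.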
Interactive Tooth Separation from Dental Model Using Segmentation Field
2016-01-01
Tooth segmentation on dental model is an essential step of computer-aided-design systems for orthodontic virtual treatment planning. However, fast and accurate identification of the cutting boundary to separate teeth from the dental model still remains a challenge, due to various geometrical shapes of teeth, complex tooth arrangements, different dental model qualities, and varying degrees of crowding problems. Most segmentation approaches presented before are not able to achieve a balance between fine segmentation results and simple operating procedures with less time consumption. In this article, we present a novel, effective and efficient framework that achieves tooth segmentation based on a segmentation field, which is solved by a linear system defined by a discrete Laplace-Beltrami operator with Dirichlet boundary conditions. A set of contour lines are sampled from the smooth scalar field, and candidate cutting boundaries can be detected from concave regions with large variations of field data. The sensitivity to concave seams of the segmentation field facilitates effective tooth partition, as well as avoids the need to obtain an appropriate curvature threshold value, which is unreliable in some cases. Our tooth segmentation algorithm is robust to dental models with low quality, as well as effective on dental models with different levels of crowding problems. The experiments, including segmentation tests of varying dental models with different complexity, experiments on dental meshes with different modeling resolutions and surface noises, and comparison between our method and the morphologic skeleton segmentation method, are conducted, thus demonstrating the effectiveness of our method. PMID:27532266
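At its core this is a Dirichlet problem on the mesh graph; a toy sketch on a one-dimensional chain (standing in for the surface mesh, which would use a cotangent Laplace-Beltrami operator) shows the linear solve:

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 10                                   # toy chain of vertices
    L = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)).tolil()
    fixed = {0: 0.0, n - 1: 1.0}             # Dirichlet anchors (e.g., two teeth)

    b = np.zeros(n)
    for i, val in fixed.items():             # impose boundary conditions
        L[i, :] = 0.0
        L[i, i] = 1.0
        b[i] = val

    field = spla.spsolve(L.tocsr(), b)
    print(field)                             # smooth scalar field; iso-contours
                                             # of such a field give candidate cuts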
NASA Technical Reports Server (NTRS)
Carra, Claudio; Wang, Minli; Huff, Janice L.; Hada, Megumi; ONeill, Peter; Cucinotta, Francis A.
2010-01-01
Signal transduction controls cellular and tissue responses to radiation. Transforming growth factor beta (TGFbeta) is an important regulator of cell growth and differentiation and tissue homeostasis, and is often dysregulated in tumor formation. Mathematical models of signal transduction pathways can be used to elucidate how signal transduction varies with radiation quality, dose, and dose-rate. Furthermore, tissue-specific responses can be considered through mechanistic modeling. We developed a mathematical model of the negative feedback regulation by Smad7 in TGFbeta-Smad signaling and are exploring possible connections to the WNT/beta-catenin and ATM/ATF2 signaling pathways. A pathway model of TGFbeta-Smad signaling that includes Smad7 kinetics based on data in the scientific literature is described. Kinetic terms included are TGFbeta/Smad transcriptional regulation of Smad7 through the Smad3-Smad4 complex, Smad7-Smurf1 translocation from nucleus to cytoplasm, and Smad7 negative feedback regulation of the TGFbeta receptor through direct binding to the TGFbeta receptor complex. The negative feedback controls operating in this pathway suggest non-linear responses in signal transduction, which are described mathematically. We then explored possibilities for cross-talk mediated by Smad7 between DNA damage responses mediated by ATM and the WNT pathway, and consider the design of experiments to test model-driven hypotheses. Numerical comparisons of the mathematical model to experiments and representative predictions are described.
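As an illustration of the negative-feedback motif at the heart of such a model, a two-variable ODE sketch follows; the equations and rate constants are invented for illustration, not taken from the paper's full pathway model.

```python
# Illustrative sketch of a negative-feedback motif like the one the
# model describes: active TGFbeta receptor (R) drives Smad signalling,
# which induces Smad7 (S7), which in turn inhibits the receptor.
# Equations and parameter values are invented; the actual model has
# many more species and literature-based rates.
from scipy.integrate import solve_ivp

def feedback(t, y, k_act=1.0, k_inh=2.0, k_ind=0.8, k_deg=0.5):
    R, S7 = y
    dR = k_act - k_inh * S7 * R      # Smad7 inactivates the receptor
    dS7 = k_ind * R - k_deg * S7     # receptor activity induces Smad7
    return [dR, dS7]

sol = solve_ivp(feedback, (0.0, 30.0), [0.1, 0.0])
print(sol.y[:, -1])  # the feedback drives both toward a steady state
```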
Structural insights into the histone H1-nucleosome complex
Zhou, Bing-Rui; Feng, Hanqiao; Kato, Hidenori; Dai, Liang; Yang, Yuedong; Zhou, Yaoqi; Bai, Yawen
2013-01-01
Linker H1 histones facilitate formation of higher-order chromatin structures and play important roles in various cell functions. Despite several decades of effort, the structural basis of how H1 interacts with the nucleosome remains elusive. Here, we investigated Drosophila H1 in complex with the nucleosome, using solution nuclear magnetic resonance spectroscopy and other biophysical methods. We found that the globular domain of H1 bridges the nucleosome core and one 10-base pair linker DNA asymmetrically, with its α3 helix facing the nucleosomal DNA near the dyad axis. Two short regions in the C-terminal tail of H1 and the C-terminal tail of one of the two H2A histones are also involved in the formation of the H1–nucleosome complex. Our results lead to a residue-specific structural model for the globular domain of the Drosophila H1 in complex with the nucleosome, which is different from all previous experiment-based models and has implications for chromatin dynamics in vivo. PMID:24218562
Dimensional Precision Research of Wax Molding Rapid Prototyping based on Droplet Injection
NASA Astrophysics Data System (ADS)
Mingji, Huang; Geng, Wu; Yan, Shan
2017-11-01
The traditional casting process is complex, and the mold is essential to it: mold quality directly affects product quality. Rapid prototyping by 3D printing is used to produce the mold prototype. The wax model made this way has the advantages of high speed, low cost, and the ability to form complex structures. Using the orthogonal experiment as the main method, each factor affecting dimensional precision is analyzed. The purpose is to obtain the optimal process parameters and so improve the dimensional accuracy of production based on droplet injection molding.
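For reference, an orthogonal-array screening of the kind the study describes can be sketched as follows; the factor names, levels, and error values are hypothetical, chosen only to show how an L9 design ranks factor effects.

```python
# Sketch of an orthogonal-array (Taguchi L9) screening: three printing
# factors at three levels each, nine runs instead of the 27 a full
# factorial would need. Factor names, levels, and the measured errors
# below are hypothetical.
import numpy as np

L9 = np.array([  # standard L9(3^3) orthogonal array (levels 0..2)
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])
factors = ["nozzle_temp", "droplet_rate", "layer_height"]
size_error_um = np.array([42, 35, 51, 30, 44, 38, 55, 40, 33])  # made-up data

# Mean response per factor level: the spread across levels ranks
# each factor's influence on dimensional error.
for f in range(3):
    means = [size_error_um[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(factors[f], [round(m, 1) for m in means])
```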
[The organization of scientific innovative laboratory complex of modern technologies].
Totskaia, E G; Rozhnova, O M; Mamonova, E V
2013-01-01
The article discusses topical issues of scientific innovative activity in the realization of principles of public-private partnership. The experience of developing a model of a scientific innovative complex is presented. The possibilities of implementing research achievements and applying them in the areas of cell technologies, technologies of regenerative medicine, and biochip technologies are demonstrated. The opportunities to provide a high level of diagnostics and treatment in practical health care, increase the accessibility and quality of medical care, and promote population health are discussed.
Modelling of Deflagration to Detonation Transition in Porous PETN of Density 1.4 g/cc with HERMES
NASA Astrophysics Data System (ADS)
Reaugh, John; Curtis, John; Maheswaran, Mary-Ann
2017-06-01
The modelling of Deflagration to Detonation Transition in explosives is a severe challenge for reactive burn models because of the complexity of the physics; there is mechanical and thermal interaction of the gaseous burn products with the burning porous matrix, with resulting compaction, shock formation and subsequent detonation. Experiments on the explosive PETN show a strong dependence of run distance to detonation on porosity. The minimum run distance appears to occur when the density is approximately 1.4 g/cc. Recent research on the High Explosive Response to Mechanical Stimulation (HERMES) model for High Explosive Violent Reaction has included the development of a model for PETN at 1.4 g/cc, which allows the prediction of the run distance in the experiments for PETN at this density. Detonation and retonation waves as seen in the experiment are evident. The HERMES simulations are analysed to help illuminate the physics occurring in the experiments. JER's work was performed under the auspices of the US DOE by LLNL under Contract DE-AC52-07NA27344 and partially funded by the Joint US DoD/DOE Munitions Technology Development Program. LLNL-ABS-723537.
A computational model of spatial visualization capacity.
Lyon, Don R; Gunzelmann, Glenn; Gluck, Kevin A
2008-09-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to perform it. In this model, developed within the Adaptive Control of Thought-Rational (ACT-R) architecture, visualization capacity is limited by three mechanisms. Two of these (associative interference and decay) are longstanding characteristics of ACT-R's declarative memory. A third (spatial interference) is a new mechanism motivated by spatial proximity effects in our data. We tested the model in two experiments, one with parameter-value fitting, and a replication without further fitting. Correspondence between model and data was close in both experiments, suggesting that the model may be useful for understanding why visualizing new, complex spatial material is so difficult.
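Two of the limiting mechanisms (decay and interference in declarative memory) follow ACT-R's standard base-level learning equation; a minimal sketch of that equation is below. The paper's new spatial-interference mechanism is not reproduced here, and the rehearsal times are made up.

```python
# Sketch of ACT-R's standard base-level activation with decay, one of
# the capacity-limiting mechanisms the model inherits:
#   B_i = ln( sum_j t_j^(-d) )
# where t_j are the times since each past use of chunk i and d is the
# decay rate (conventionally 0.5).
import math

def base_level(uses, now, d=0.5):
    return math.log(sum((now - t) ** (-d) for t in uses))

# A chunk rehearsed recently and often is more active than a stale one.
print(base_level([1.0, 5.0, 9.0], now=10.0))
print(base_level([1.0], now=10.0))
```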
NASA Astrophysics Data System (ADS)
Nishida, R. T.; Beale, S. B.; Pharoah, J. G.; de Haart, L. G. J.; Blum, L.
2018-01-01
This work is among the first where the results of an extensive experimental research programme are compared to performance calculations of a comprehensive computational fluid dynamics model for a solid oxide fuel cell stack. The model, which combines electrochemical reactions with momentum, heat, and mass transport, is used to obtain results for an established industrial-scale fuel cell stack design with complex manifolds. To validate the model, comparisons with experimentally gathered voltage and temperature data are made for the Jülich Mark-F, 18-cell stack operating in a test furnace. Good agreement is obtained between the model and experiment results for cell voltages and temperature distributions, confirming the validity of the computational methodology for stack design. The transient effects during ramp up of current in the experiment may explain a lower average voltage than model predictions for the power curve.
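For context, the electrochemical side of such a stack model is anchored by relations like the Nernst open-circuit voltage; a back-of-the-envelope sketch follows, with operating values invented rather than taken from the Jülich Mark-F tests.

```python
# Sketch of the Nernst (open-circuit) voltage for an H2/O2 solid oxide
# fuel cell, the kind of electrochemical relation such stack models
# couple to momentum, heat, and mass transport:
#   E = E0 + (R*T / 2F) * ln( p_H2 * sqrt(p_O2) / p_H2O )
# All operating values below are illustrative assumptions.
import math

R, F = 8.314, 96485.0                 # J/(mol K), C/mol
T = 1073.0                            # K (~800 C)
E0 = 0.98                             # V, approximate standard potential at T
p_H2, p_H2O, p_O2 = 0.9, 0.1, 0.21    # partial pressures (atm)

E = E0 + (R * T / (2 * F)) * math.log(p_H2 * math.sqrt(p_O2) / p_H2O)
print(round(E, 3), "V open-circuit")
```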
New Equation of State Models for Hydrodynamic Applications
NASA Astrophysics Data System (ADS)
Young, David A.; Barbee, Troy W., III; Rogers, Forrest J.
1997-07-01
Accurate models of the equation of state of matter at high pressures and temperatures are increasingly required for hydrodynamic simulations. We have developed two new approaches to accurate EOS modeling: 1) ab initio phonons from electron band structure theory for condensed matter and 2) the ACTEX dense plasma model for ultrahigh pressure shocks. We have studied the diamond and high pressure phases of carbon with the ab initio model and find good agreement between theory and experiment for shock Hugoniots, isotherms, and isobars. The theory also predicts a comprehensive phase diagram for carbon. For ultrahigh pressure shock states, we have studied the comparison of ACTEX theory with experiments for deuterium, beryllium, polystyrene, water, aluminum, and silicon dioxide. The agreement is good, showing that complex multispecies plasmas are treated adequately by the theory. These models will be useful in improving the numerical EOS tables used by hydrodynamic codes.
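When comparing an EOS model with shock data, the Rankine-Hugoniot jump conditions connect a measured shock-speed/particle-speed fit to pressure; a minimal sketch is below, with an illustrative linear fit rather than the paper's fitted constants.

```python
# Sketch of the Rankine-Hugoniot relation used when comparing an EOS
# model to shock data: with a linear shock-speed fit Us = c0 + s*up,
# the pressure on the Hugoniot is P = rho0 * Us * up (momentum jump,
# P0 neglected). The c0, s values below are illustrative only.
import numpy as np

rho0 = 3.51e3        # kg/m^3, initial density of diamond
c0, s = 12.0e3, 1.0  # m/s and dimensionless (illustrative fit)

up = np.linspace(0.0, 5.0e3, 6)   # particle velocity, m/s
Us = c0 + s * up                  # shock velocity
P = rho0 * Us * up                # Pa
for u, p in zip(up, P):
    print(f"up = {u/1e3:.0f} km/s  ->  P = {p/1e9:.0f} GPa")
```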
Ultracold Nonreactive Molecules in an Optical Lattice: Connecting Chemistry to Many-Body Physics.
Doçaj, Andris; Wall, Michael L; Mukherjee, Rick; Hazzard, Kaden R A
2016-04-01
We derive effective lattice models for ultracold bosonic or fermionic nonreactive molecules (NRMs) in an optical lattice, analogous to the Hubbard model that describes ultracold atoms in a lattice. In stark contrast to the Hubbard model, which is commonly assumed to accurately describe NRMs, we find that the single on-site interaction parameter U is replaced by a multichannel interaction, whose properties we elucidate. Because this arises from complex short-range collisional physics, it requires no dipolar interactions and thus occurs even in the absence of an electric field or for homonuclear molecules. We find a crossover between coherent few-channel models and fully incoherent single-channel models as the lattice depth is increased. We show that the effective model parameters can be determined in lattice modulation experiments, which, consequently, measure molecular collision dynamics with a vastly sharper energy resolution than experiments in a free-space ultracold gas.
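For reference, the standard single-band Hubbard Hamiltonian that the paper contrasts with (textbook form, not taken from the paper) is:

```latex
% Standard single-band Hubbard model: hopping t between neighbouring
% sites and a single on-site interaction U, which the paper's
% multichannel treatment of nonreactive molecules replaces.
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c_{i\sigma}^{\dagger} c_{j\sigma} + \mathrm{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```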
NASA Astrophysics Data System (ADS)
Atanasov, M.; Daul, C. A.
2003-11-01
The DFT-based ligand field model for magnetic exchange coupling proposed recently has been extended to systems containing more than one unpaired electron per site. The guidelines for this extension are described using a model example, the complex [(NH3)3CrIII(OH)3CrIII(NH3)3]3+. The exchange Hamiltonian, H_ex = -J_12 S_1 · S_2, has been simplified using symmetry principles, i.e. utilizing the D3h (C3v) CrIII dimer (site) symmetry. Both antiferro- and ferromagnetic contributions are found to be important to the value of the (negative, antiferromagnetic) exchange coupling constant, in good agreement with experiment.
Ham, Byung-Kook; Brandom, Jeri L.; Xoconostle-Cázares, Beatriz; Ringgold, Vanessa; Lough, Tony J.; Lucas, William J.
2009-01-01
RNA binding proteins (RBPs) are integral components of ribonucleoprotein (RNP) complexes and play a central role in RNA processing. In plants, some RBPs function in a non-cell-autonomous manner. The angiosperm phloem translocation stream contains a unique population of RBPs, but little is known regarding the nature of the proteins and mRNA species that constitute phloem-mobile RNP complexes. Here, we identified and characterized a 50-kD pumpkin (Cucurbita maxima cv Big Max) phloem RNA binding protein (RBP50) that is evolutionarily related to animal polypyrimidine tract binding proteins. In situ hybridization studies indicated a high level of RBP50 transcripts in companion cells, while immunolocalization experiments detected RBP50 in both companion cells and sieve elements. A comparison of the levels of RBP50 present in vascular bundles and phloem sap indicated that this protein is highly enriched in the phloem sap. Heterografting experiments confirmed that RBP50 is translocated from source to sink tissues. Collectively, these findings established that RBP50 functions as a non-cell-autonomous RBP. Protein overlay, coimmunoprecipitation, and cross-linking experiments identified the phloem proteins and mRNA species that constitute RBP50-based RNP complexes. Gel mobility-shift assays demonstrated that specificity, with respect to the bound mRNA, is established by the polypyrimidine tract binding motifs within such transcripts. We present a model for RBP50-based RNP complexes within the pumpkin phloem translocation stream. PMID:19122103
Involvement of Spearman's g in conceptualisation versus execution of complex tasks.
Carroll, Ellen L; Bright, Peter
2016-10-01
Strong correlations between measures of fluid intelligence (or Spearman's g) and working memory are widely reported in the literature, but there is considerable controversy concerning the nature of underlying mechanisms driving this relationship. In the four experiments presented here we consider the role of response conflict and task complexity in the context of real-time task execution demands (Experiments 1-3) and also address recent evidence that g confers an advantage at the level of task conceptualisation rather than (or in addition to) task execution (Experiment 4). We observed increased sensitivity of measured fluid intelligence to task performance in the presence (vs. the absence) of response conflict, and this relationship remained when task complexity was reduced. Performance-g correlations were also observed in the absence of response conflict, but only in the context of high task complexity. Further, we present evidence that differences in conceptualisation or 'modelling' of task instructions prior to execution had an important mediating effect on observed correlations, but only when the task encompassed a strong element of response inhibition. Our results suggest that individual differences in ability reflect, in large part, variability in the efficiency with which the relational complexity of task constraints is held in mind. It follows that fluid intelligence may support successful task execution through the construction of effective action plans via optimal allocation of limited resources. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
High-resolution dust modelling over complex terrains in West Asia
NASA Astrophysics Data System (ADS)
Basart, S.; Vendrell, L.; Baldasano, J. M.
2016-12-01
The present work demonstrates the impact of model resolution on dust propagation in a complex-terrain region such as West Asia. For this purpose, two simulations using the NMMB/BSC-Dust model are performed and analysed, one with a high horizontal resolution (0.03° × 0.03°) and one with a lower horizontal resolution (0.33° × 0.33°). Both model experiments cover two intense dust storms that occurred on 17-20 March 2012 as a consequence of strong northwesterly Shamal winds that spanned thousands of kilometres in West Asia. The comparison with ground-based (surface weather stations and sunphotometers) and satellite aerosol observations (Aqua/MODIS and MSG/SEVIRI) shows that, despite differences in the magnitude of the simulated dust concentrations, the model is able to reproduce these two dust outbreaks. Differences between the two simulations in the simulated dust spread arise in regional dust transport areas in south-western Saudi Arabia, Yemen and Oman. The complex orography in south-western Saudi Arabia, Yemen and Oman (with peaks higher than 3000 m) has an impact on the transported dust concentration fields over mountain regions. Differences between the two model configurations are mainly associated with the channelization of the dust flow through valleys and with differences in the modelled altitude of the mountains, which alter the meteorology and block the dust fronts, limiting the dust transport. These results demonstrate how dust prediction in the vicinity of complex terrain improves with high-horizontal-resolution simulations.
Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank
2017-01-31
We present an overview of the coordinated global numerical modelling experiments performed during 2012-2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue.
NASA Astrophysics Data System (ADS)
Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank
2017-01-01
We present an overview of the coordinated global numerical modelling experiments performed during 2012-2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue.
Galmarini, Stefano; Koffi, Brigitte; Solazzo, Efisio; Keating, Terry; Hogrefe, Christian; Schulz, Michael; Benedictow, Anna; Griesfeller, Jan Jurgen; Janssens-Maenhout, Greet; Carmichael, Greg; Fu, Joshua; Dentener, Frank
2018-01-01
We present an overview of the coordinated global numerical modelling experiments performed during 2012–2016 by the Task Force on Hemispheric Transport of Air Pollution (TF HTAP), the regional experiments by the Air Quality Model Evaluation International Initiative (AQMEII) over Europe and North America, and the Model Intercomparison Study for Asia (MICS-Asia). To improve model estimates of the impacts of intercontinental transport of air pollution on climate, ecosystems, and human health and to answer a set of policy-relevant questions, these three initiatives performed emission perturbation modelling experiments consistent across the global, hemispheric, and continental/regional scales. In all three initiatives, model results are extensively compared against monitoring data for a range of variables (meteorological, trace gas concentrations, and aerosol mass and composition) from different measurement platforms (ground measurements, vertical profiles, airborne measurements) collected from a number of sources. Approximately 10 to 25 modelling groups have contributed to each initiative, and model results have been managed centrally through three data hubs maintained by each initiative. Given the organizational complexity of bringing together these three initiatives to address a common set of policy-relevant questions, this publication provides the motivation for the modelling activity, the rationale for specific choices made in the model experiments, and an overview of the organizational structures for both the modelling and the measurements used and analysed in a number of modelling studies in this special issue. PMID:29541091
Attentional gating models of object substitution masking.
Põder, Endel
2013-11-01
Di Lollo, Enns, and Rensink (2000) proposed the computational model of object substitution (CMOS) to explain their experimental results with sparse visual maskers. This model supposedly is based on reentrant hypotheses testing in the visual system, and the modeled experiments are believed to demonstrate these reentrant processes in human vision. In this study, I analyze the main assumptions of this model. I argue that CMOS is a version of the attentional gating model and that its relationship with reentrant processing is rather illusory. The fit of this model to the data indicates that reentrant hypotheses testing is not necessary for the explanation of object substitution masking (OSM). Further, the original CMOS cannot predict some important aspects of the experimental data. I test 2 new models incorporating an unselective processing (divided attention) stage; these models are more consistent with data from OSM experiments. My modeling shows that the apparent complexity of OSM can be reduced to a few simple and well-known mechanisms of perception and memory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Nie, Zhe; Finck, Nicolas; Heberling, Frank; Pruessmann, Tim; Liu, Chunli; Lützenkirchen, Johannes
2017-04-04
Knowledge of the geochemical behavior of selenium and strontium is critical for the safe disposal of radioactive wastes. Goethite, one of the most thermodynamically stable and commonly occurring natural iron oxy-hydroxides, is a promising phase for retaining these elements. This work comprehensively studies the adsorption of Se(IV) and Sr(II) on goethite. Starting from electrokinetic measurements, the binary and ternary adsorption systems are investigated and systematically compared via batch experiments, EXAFS analysis, and CD-MUSIC modeling. Se(IV) forms bidentate inner-sphere surface complexes, while Sr(II) is assumed to form outer-sphere complexes at low and intermediate pH and inner-sphere complexes at high pH. Instead of a direct interaction between Se(IV) and Sr(II), our results indicate an electrostatically driven mutual enhancement of adsorption. Adsorption of Sr(II) is promoted by an average factor of 5 within the typical groundwater pH range from 6 to 8 for the concentration range studied here. However, the interaction between Se(IV) and Sr(II) at the surface is two-sided: Se(IV) promotes Sr(II) outer-sphere adsorption but competes for inner-sphere adsorption sites at high pH. The complexity of surfaces is highlighted by the inability of adsorption models to predict isoelectric points without additional constraints.
Ye, Xiaoduan; O'Neil, Patrick K; Foster, Adrienne N; Gajda, Michal J; Kosinski, Jan; Kurowski, Michal A; Bujnicki, Janusz M; Friedman, Alan M; Bailey-Kellogg, Chris
2004-12-01
Emerging high-throughput techniques for the characterization of protein and protein-complex structures yield noisy data with sparse information content, placing a significant burden on computation to properly interpret the experimental data. One such technique uses cross-linking (chemical or by cysteine oxidation) to confirm or select among proposed structural models (e.g., from fold recognition, ab initio prediction, or docking) by testing the consistency between cross-linking data and model geometry. This paper develops a probabilistic framework for analyzing the information content in cross-linking experiments, accounting for anticipated experimental error. This framework supports a mechanism for planning experiments to optimize the information gained. We evaluate potential experiment plans using explicit trade-offs among key properties of practical importance: discriminability, coverage, balance, ambiguity, and cost. We devise a greedy algorithm that considers those properties and, from a large number of combinatorial possibilities, rapidly selects sets of experiments expected to discriminate pairs of models efficiently. In an application to residue-specific chemical cross-linking, we demonstrate the ability of our approach to plan experiments effectively involving combinations of cross-linkers and introduced mutations. We also describe an experiment plan for the bacteriophage lambda Tfa chaperone protein in which we plan dicysteine mutants for discriminating threading models by disulfide formation. Preliminary results from a subset of the planned experiments are consistent and demonstrate the practicality of planning. Our methods provide the experimenter with a valuable tool (available from the authors) for understanding and optimizing cross-linking experiments.
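The greedy selection the authors describe can be sketched generically: score each remaining candidate experiment by its marginal value and pick the best until the budget runs out. The toy below scores only newly discriminated model pairs per unit cost, whereas the paper's planner trades off discriminability, coverage, balance, ambiguity, and cost; all names and numbers are hypothetical.

```python
# Generic sketch of greedy experiment planning: repeatedly pick the
# candidate with the best marginal value, here
# (newly discriminated model pairs) / cost, until the budget is spent.

def greedy_plan(candidates, budget):
    """candidates: list of (name, cost, set_of_discriminated_model_pairs)."""
    plan, covered, spent = [], set(), 0.0
    remaining = list(candidates)
    while remaining:
        best = max(remaining, key=lambda c: len(c[2] - covered) / c[1])
        name, cost, pairs = best
        if spent + cost > budget or not (pairs - covered):
            break  # over budget, or nothing new to learn
        plan.append(name)
        covered |= pairs
        spent += cost
        remaining.remove(best)
    return plan, covered

cands = [("xlinkA", 1.0, {("m1", "m2"), ("m1", "m3")}),
         ("xlinkB", 2.0, {("m2", "m3")}),
         ("mutC",   1.5, {("m1", "m3"), ("m2", "m3")})]
print(greedy_plan(cands, budget=3.0))
```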
Reaction modeling of drainage quality in the Duluth Complex, northern Minnesota, USA
Seal, Robert; Lapakko, Kim; Piatak, Nadine; Woodruff, Laurel G.
2015-01-01
Reaction modeling can be a valuable tool in predicting the long-term behavior of waste material if representative rate constants can be derived from long-term leaching tests or other approaches. Reaction modeling using the REACT program of the Geochemist’s Workbench was conducted to evaluate long-term drainage quality affected by disseminated Cu-Ni-(Co-)-PGM sulfide mineralization in the basal zone of the Duluth Complex where significant resources have been identified. Disseminated sulfide minerals, mostly pyrrhotite and Cu-Fe sulfides, are hosted by clinopyroxene-bearing troctolites. Carbonate minerals are scarce to non-existent. Long-term simulations of up to 20 years of weathering of tailings used two different sets of rate constants: one based on published laboratory single-mineral dissolution experiments, and one based on leaching experiments using bulk material from the Duluth Complex conducted by the Minnesota Department of Natural Resources (MNDNR). The simulations included only plagioclase, olivine, clinopyroxene, pyrrhotite, and water as starting phases. Dissolved oxygen concentrations were assumed to be in equilibrium with atmospheric oxygen. The simulations based on the published single-mineral rate constants predicted that pyrrhotite would be effectively exhausted in less than two years and pH would rise accordingly. In contrast, only 20 percent of the pyrrhotite was depleted after two years using the MNDNR rate constants. Predicted pyrrhotite depletion by the simulation based on the MNDNR rate constant matched well with published results of laboratory tests on tailings. Modeling long-term weathering of mine wastes also can provide important insights into secondary reactions that may influence the permeability of tailings and thereby affect weathering behavior. Both models predicted the precipitation of a variety of secondary phases including goethite, gibbsite, and clay (nontronite).
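The sensitivity to rate constants that the simulations expose can be seen even in a minimal first-order depletion sketch, m(t) = m0·exp(-kt): an order-of-magnitude difference in k shifts the predicted mineral lifetime by roughly the same factor. The k values below are invented for illustration and are not the published or MNDNR constants.

```python
# Minimal sketch of why the choice of rate constant dominates the
# predicted pyrrhotite lifetime under simple first-order depletion.
import math

m0 = 1.0                      # initial pyrrhotite (normalised)
for label, k in [("fast (single-mineral-like)", 2.0),    # 1/yr, invented
                 ("slow (bulk-leach-like)", 0.11)]:      # 1/yr, invented
    t = 2.0                   # years
    print(label, "remaining after 2 yr:", round(m0 * math.exp(-k * t), 2))
```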
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike's Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size.
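For concreteness, the crude (two-part) MDL score has the familiar form MDL = -log L + (k/2) log n; a sketch of using it for model selection follows, with made-up likelihoods and parameter counts.

```python
# Sketch of crude (two-part) MDL model selection:
#   MDL = -log L(data | model) + (k / 2) * log(n)
# where k is the number of free parameters and n the sample size; the
# model minimising the score balances fit against complexity. The
# likelihoods and parameter counts below are made up for illustration.
import math

def crude_mdl(log_likelihood, k, n):
    return -log_likelihood + 0.5 * k * math.log(n)

n = 200
candidates = {"sparse BN": (-310.0, 8), "dense BN": (-295.0, 40)}
scores = {m: crude_mdl(ll, k, n) for m, (ll, k) in candidates.items()}
print(min(scores, key=scores.get), scores)  # the sparser network wins here
```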
Cruz-Ramírez, Nicandro; Acosta-Mesa, Héctor Gabriel; Mezura-Montes, Efrén; Guerra-Hernández, Alejandro; Hoyos-Rivera, Guillermo de Jesús; Barrientos-Martínez, Rocío Erandi; Gutiérrez-Fragoso, Karina; Nava-Fernández, Luis Alonso; González-Gaspar, Patricia; Novoa-del-Toro, Elva María; Aguilera-Rueda, Vicente Josué; Ameca-Alducin, María Yaneli
2014-01-01
The bias-variance dilemma is a well-known and important problem in Machine Learning. It basically relates the generalization capability (goodness of fit) of a learning method to its corresponding complexity. When we have enough data at hand, it is possible to use these data in such a way as to minimize overfitting (the risk of selecting a complex model that generalizes poorly). Unfortunately, there are many situations where we simply do not have this required amount of data. Thus, we need to find methods capable of efficiently exploiting the available data while avoiding overfitting. Different metrics have been proposed to achieve this goal: the Minimum Description Length principle (MDL), Akaike’s Information Criterion (AIC) and Bayesian Information Criterion (BIC), among others. In this paper, we focus on crude MDL and empirically evaluate its performance in selecting models with a good balance between goodness of fit and complexity: the so-called bias-variance dilemma, decomposition or tradeoff. Although the graphical interaction between these dimensions (bias and variance) is ubiquitous in the Machine Learning literature, few works present experimental evidence to recover such interaction. In our experiments, we argue that the resulting graphs allow us to gain insights that are difficult to unveil otherwise: that crude MDL naturally selects balanced models in terms of bias-variance, which need not necessarily be the gold-standard ones. We carry out these experiments using a specific model: a Bayesian network. In spite of these motivating results, we also should not overlook three other components that may significantly affect the final model selection: the search procedure, the noise rate and the sample size. PMID:24671204
NASA Technical Reports Server (NTRS)
Castelli, Michael G.
1990-01-01
A number of viscoplastic constitutive models were developed to describe deformation behavior under complex combinations of thermal and mechanical loading. Questions remain, however, regarding the validity of procedures used to characterize these models for specific structural alloys. One area of concern is that the majority of experimental data available for this purpose are determined under isothermal conditions. This experimental study is aimed at determining whether viscoplastic constitutive theories characterized using an isothermal data base can adequately model material response under the complex thermomechanical loading conditions typical of power generation service. The approach adopted was to conduct a series of carefully controlled thermomechanical experiments on a nickel-based superalloy, Hastelloy Alloy X. Previous investigations had shown that this material experiences metallurgical instabilities leading to complex hardening behavior, termed dynamic strain aging. Investigating this phenomenon under full thermomechanical conditions leads to a number of challenging experimental difficulties which up to the present work were unresolved. To correct this situation, a number of advances were made in thermomechanical testing techniques. Advanced methods for dynamic temperature gradient control, phasing control and thermal strain compensation were developed and incorporated into real time test control software. These advances allowed the thermomechanical data to be analyzed with minimal experimental uncertainty. The thermomechanical results were evaluated on both a phenomenological and microstructural basis. Phenomenological results revealed that the thermomechanical hardening trends were not bounded by those displayed under isothermal conditions. For the case of Hastelloy Alloy X (and similar dynamic strain aging materials), this strongly suggests that some form of thermomechanical testing is necessary when characterizing a thermoviscoplastic deformation model. Transmission electron microscopy was used to study the microstructural physics, and analyze the unique phenomenological behavior.
Exploratory Decision-Making as a Function of Lifelong Experience, Not Cognitive Decline
2016-01-01
Older adults perform worse than younger adults in some complex decision-making scenarios, which is commonly attributed to age-related declines in striatal and frontostriatal processing. Recently, this popular account has been challenged by work that considered how older adults’ performance may differ as a function of greater knowledge and experience, and by work showing that, in some cases, older adults outperform younger adults in complex decision-making tasks. In light of this controversy, we examined the performance of older and younger adults in an exploratory choice task that is amenable to model-based analyses and ostensibly not reliant on prior knowledge. Exploration is a critical aspect of decision-making poorly understood across the life span. Across 2 experiments, we addressed (a) how older and younger adults differ in exploratory choice and (b) to what extent observed differences reflect processing capacity declines. Model-based analyses suggested that the strategies used by the 2 groups were qualitatively different, resulting in relatively worse performance for older adults in 1 decision-making environment but equal performance in another. Little evidence was found that differences in processing capacity drove performance differences. Rather the results suggested that older adults’ performance might result from applying a strategy that may have been shaped by their wealth of real-word decision-making experience. While this strategy is likely to be effective in the real world, it is ill suited to some decision environments. These results underscore the importance of taking into account effects of experience in aging studies, even for tasks that do not obviously tap past experiences. PMID:26726916
Piedra, Lissette M; Engstrom, David W
2009-07-01
The life model offers social workers a promising framework to use in assisting immigrant families. However, the complexities of adaptation to a new country may make it difficult for social workers to operate from a purely ecological approach. The authors use segmented assimilation theory to better account for the specificities of the immigrant experience. They argue that by adding concepts from segmented assimilation theory to the life model, social workers can better understand the environmental stressors that increase the vulnerabilities of immigrants to the potentially harsh experience of adapting to a new country. With these concepts, social workers who work with immigrant families will be better positioned to achieve their central goal: enhancing person and environment fit.
Resiliency and Aggression Replacement Training[R] with Families
ERIC Educational Resources Information Center
Calame, Robert; Parker, Kimberlee; Amendola, Mark; Oliver, Robert
2011-01-01
Aggression Replacement Training[R] (ART) is a psychoeducational approach to working with young people who experience difficulties with interpersonal relationships and prosocial behavior. ART[R] originated with Skillstreaming and developed into a three-component model. Arnold P. Goldstein recognized that the complex problems of youth would not…
Complex clinical outcomes, such as adverse reaction to vaccination, arise from the concerted interactions among the myriad components of a biological system. Therefore, comprehensive etiological models can be developed only through the integrated study of multiple types of experi...
ERIC Educational Resources Information Center
Angier, Natalie
1983-01-01
Scientists are designing computer models of biological systems, and of compounds with complex molecules, that can be used to get answers once obtainable only by sacrificing laboratory animals. Although most programs are still under development, some are in use by industrial/pharmaceutical companies. The programs and experiments they simulate are…
Individual differences in transcranial electrical stimulation current density
Russell, Michael J; Goodman, Theodore; Pierson, Ronald; Shepherd, Shane; Wang, Qiang; Groshong, Bennett; Wiley, David F
2013-01-01
Transcranial electrical stimulation (TCES) is effective in treating many conditions, but it has not been possible to accurately forecast current density within the complex anatomy of a given subject's head. We sought to predict and verify TCES current densities and determine the variability of these current distributions in patient-specific models based on magnetic resonance imaging (MRI) data. Two experiments were performed. The first experiment estimated conductivity from MRIs and compared the current density results against actual measurements from the scalp surface of 3 subjects. In the second experiment, virtual electrodes were placed on the scalps of 18 subjects to model simulated current densities with 2 mA of virtually applied stimulation. This procedure was repeated for 4 electrode locations. Current densities were then calculated for 75 brain regions. Comparison of modeled and measured external current in experiment 1 yielded a correlation of r = .93. In experiment 2, modeled individual differences were greatest near the electrodes (ten-fold differences were common), but simulated current was found in all regions of the brain. Sites that were distant from the electrodes (e.g. hypothalamus) typically showed two-fold individual differences. MRI-based modeling can effectively predict current densities in individual brains. Significant variation occurs between subjects with the same applied electrode configuration. Individualized MRI-based modeling should be considered in place of the 10-20 system when accurate TCES is needed. PMID:24285948
Two-dimensional electronic spectra of the photosynthetic apparatus of green sulfur bacteria
NASA Astrophysics Data System (ADS)
Kramer, Tobias; Rodriguez, Mirta
2017-03-01
Advances in time-resolved spectroscopy have provided new insight into energy transmission in natural photosynthetic complexes. Novel theoretical tools and models are being developed in order to explain the experimental results. We provide a model calculation for the two-dimensional electronic spectra of Chlorobaculum tepidum which correctly describes the main features and transfer time scales found in recent experiments. From our calculation one can infer the coupling of the antenna chlorosome with the environment and the coupling between the chlorosome and the Fenna-Matthews-Olson complex. We show that environment-assisted transport between the subunits is the required mechanism to reproduce the experimental two-dimensional electronic spectra.