Reducing the Complexity of an Agent-Based Local Heroin Market Model
Heard, Daniel; Bobashev, Georgiy V.; Morris, Robert J.
2014-01-01
This project explores techniques for reducing the complexity of an agent-based model (ABM). The analysis involved a model developed from the ethnographic research of Dr. Lee Hoffer in the Larimer area heroin market, which involved drug users, drug sellers, homeless individuals and police. The authors used statistical techniques to create a reduced version of the original model which maintained simulation fidelity while reducing computational complexity. This involved identifying key summary quantities of individual customer behavior as well as overall market activity and replacing some agents with probability distributions and regressions. The model was then extended to allow external market interventions in the form of police busts. Extensions of this research perspective, as well as its strengths and limitations, are discussed. PMID:25025132
Computational methods to predict railcar response to track cross-level variations
DOT National Transportation Integrated Search
1976-09-01
The rocking response of railroad freight cars to track cross-level variations is studied using: (1) a reduced complexity digital simulation model, and (2) a quasi-linear describing function analysis. The reduced complexity digital simulation model em...
Tracer transport in soils and shallow groundwater: model abstraction with modern tools
USDA-ARS?s Scientific Manuscript database
Vadose zone controls contaminant transport from the surface to groundwater, and modeling transport in vadose zone has become a burgeoning field. Exceedingly complex models of subsurface contaminant transport are often inefficient. Model abstraction is the methodology for reducing the complexity of a...
The use of discrete-event simulation modelling to improve radiation therapy planning processes.
Werker, Greg; Sauré, Antoine; French, John; Shechter, Steven
2009-07-01
The planning portion of the radiation therapy treatment process at the British Columbia Cancer Agency is efficient but nevertheless contains room for improvement. The purpose of this study is to show how a discrete-event simulation (DES) model can be used to represent this complex process and to suggest improvements that may reduce the planning time and ultimately reduce overall waiting times. A simulation model of the radiation therapy (RT) planning process was constructed using the Arena simulation software, representing the complexities of the system. Several types of inputs feed into the model; these inputs come from historical data, a staff survey, and interviews with planners. The simulation model was validated against historical data and then used to test various scenarios to identify and quantify potential improvements to the RT planning process. Simulation modelling is an attractive tool for describing complex systems, and can be used to identify improvements to the processes involved. It is possible to use this technique in the area of radiation therapy planning with the intent of reducing process times and subsequent delays for patient treatment. In this particular system, reducing the variability and length of oncologist-related delays contributes most to improving the planning time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu,S.; Jing, C.; Meng, X.
2008-01-01
The mechanism of arsenic re-mobilization in spent adsorbents under reducing conditions was studied using X-ray absorption spectroscopy and surface complexation model calculations. X-ray absorption near edge structure (XANES) spectroscopy demonstrated that As(V) was partially reduced to As(III) in spent granular ferric hydroxide (GFH), titanium dioxide (TiO2), activated alumina (AA) and modified activated alumina (MAA) adsorbents after 2 years of anaerobic incubation. As(V) was completely reduced to As(III) in spent granular ferric oxide (GFO) under 2-year incubation. The extended X-ray absorption fine structure (EXAFS) spectroscopy analysis showed that As(III) formed bidentate binuclear surface complexes on GFO as evidenced by an averagemore » As(III)-O bond distance of 1.78 Angstroms and As(III)-Fe distance of 3.34 Angstroms . The release of As from the spent GFO and TiO2 was simulated using the charge distribution multi-site complexation (CD-MUSIC) model. The observed redox ranges for As release and sulfate mobility were described by model calculations.« less
NASA Astrophysics Data System (ADS)
Cooper, Rebecca Elizabeth; Eusterhues, Karin; Wegner, Carl-Eric; Totsche, Kai Uwe; Küsel, Kirsten
2017-11-01
The formation of Fe(III) oxides in natural environments occurs in the presence of natural organic matter (OM), resulting in the formation of OM-mineral complexes that form through adsorption or coprecipitation processes. Thus, microbial Fe(III) reduction in natural environments most often occurs in the presence of OM-mineral complexes rather than pure Fe(III) minerals. This study investigated to what extent does the content of adsorbed or coprecipitated OM on ferrihydrite influence the rate of Fe(III) reduction by Shewanella oneidensis MR-1, a model Fe(III)-reducing microorganism, in comparison to a microbial consortium extracted from the acidic, Fe-rich Schlöppnerbrunnen fen. We found that increased OM content led to increased rates of microbial Fe(III) reduction by S. oneidensis MR-1 in contrast to earlier findings with the model organism Geobacter bremensis. Ferrihydrite-OM coprecipitates were reduced slightly faster than ferrihydrites with adsorbed OM. Surprisingly, the complex microbial consortia stimulated by a mixture of electrons donors (lactate, acetate, and glucose) mimics S. oneidensis under the same experimental Fe(III)-reducing conditions suggesting similar mechanisms of electron transfer whether or not the OM is adsorbed or coprecipitated to the mineral surfaces. We also followed potential shifts of the microbial community during the incubation via 16S rRNA gene sequence analyses to determine variations due to the presence of adsorbed or coprecipitated OM-ferrihydrite complexes in contrast to pure ferrihydrite. Community profile analyses showed no enrichment of typical model Fe(III)-reducing bacteria, such as Shewanella or Geobacter sp., but an enrichment of fermenters (e.g., Enterobacteria) during pure ferrihydrite incubations which are known to use Fe(III) as an electron sink. Instead, OM-mineral complexes favored the enrichment of microbes including Desulfobacteria and Pelosinus sp., both of which can utilize lactate and acetate as an electron donor under Fe(III)-reducing conditions. In summary, this study shows that increasing concentrations of OM in OM-mineral complexes determines microbial Fe(III) reduction rates and shapes the microbial community structure involved in the reductive dissolution of ferrihydrite. Similarities observed between the complex Fe(III)-reducing microbial consortia and the model Fe(III)-reducer S. oneidensis MR-1 suggest electron-shuttling mechanisms dominate in OM-rich environments, including soils, sediments, and fens, where natural OM interacts with Fe(III) oxides during mineral formation.
Determination of effective loss factors in reduced SEA models
NASA Astrophysics Data System (ADS)
Chimeno Manguán, M.; Fernández de las Heras, M. J.; Roibás Millán, E.; Simón Hidalgo, F.
2017-01-01
The definition of Statistical Energy Analysis (SEA) models for large complex structures is highly conditioned by the classification of the structure elements into a set of coupled subsystems and the subsequent determination of the loss factors representing both the internal damping and the coupling between subsystems. The accurate definition of the complete system can lead to excessively large models as the size and complexity increases. This fact can also rise practical issues for the experimental determination of the loss factors. This work presents a formulation of reduced SEA models for incomplete systems defined by a set of effective loss factors. This reduced SEA model provides a feasible number of subsystems for the application of the Power Injection Method (PIM). For structures of high complexity, their components accessibility can be restricted, for instance internal equipments or panels. For these cases the use of PIM to carry out an experimental SEA analysis is not possible. New methods are presented for this case in combination with the reduced SEA models. These methods allow defining some of the model loss factors that could not be obtained through PIM. The methods are validated with a numerical analysis case and they are also applied to an actual spacecraft structure with accessibility restrictions: a solar wing in folded configuration.
Temporal Decompostion of a Distribution System Quasi-Static Time-Series Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hunsberger, Randolph J
This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify - and further mitigate - errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time series simulation can be reduced roughly proportional to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control- andmore » voltage-errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems - representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.« less
Demonstration of reduced-order urban scale building energy models
Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew; ...
2017-09-08
The aim of this study is to demonstrate a developed framework to rapidly create urban scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exterior and thermal zones. These urban scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% withmore » the selection of different thermal zones. In addition, to assess complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify complexity of the building energy models. Consequently, this study empowers the building energy modelers to quantify their building energy model systematically in order to report the model complexity alongside the building energy model accuracy. An exhaustive analysis on four university campuses suggests that the urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One main benefits of this developed framework is the opportunity for different models including airflow and solar radiation models to share the same exterior representation, allowing a unifying exchange data. Altogether, the results of this study have implications for a large-scale modeling of buildings in support of urban energy consumption analyses or assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.« less
Demonstration of reduced-order urban scale building energy models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew
The aim of this study is to demonstrate a developed framework to rapidly create urban scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exterior and thermal zones. These urban scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% withmore » the selection of different thermal zones. In addition, to assess complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify complexity of the building energy models. Consequently, this study empowers the building energy modelers to quantify their building energy model systematically in order to report the model complexity alongside the building energy model accuracy. An exhaustive analysis on four university campuses suggests that the urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One main benefits of this developed framework is the opportunity for different models including airflow and solar radiation models to share the same exterior representation, allowing a unifying exchange data. Altogether, the results of this study have implications for a large-scale modeling of buildings in support of urban energy consumption analyses or assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.« less
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting
Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart
2015-02-14
Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equationsmore » at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. As a result, the goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.« less
Noise Estimation in Electroencephalogram Signal by Using Volterra Series Coefficients
Hassani, Malihe; Karami, Mohammad Reza
2015-01-01
The Volterra model is widely used for nonlinearity identification in practical applications. In this paper, we employed Volterra model to find the nonlinearity relation between electroencephalogram (EEG) signal and the noise that is a novel approach to estimate noise in EEG signal. We show that by employing this method. We can considerably improve the signal to noise ratio by the ratio of at least 1.54. An important issue in implementing Volterra model is its computation complexity, especially when the degree of nonlinearity is increased. Hence, in many applications it is urgent to reduce the complexity of computation. In this paper, we use the property of EEG signal and propose a new and good approximation of delayed input signal to its adjacent samples in order to reduce the computation of finding Volterra series coefficients. The computation complexity is reduced by the ratio of at least 1/3 when the filter memory is 3. PMID:26284176
Managing Complex Interoperability Solutions using Model-Driven Architecture
2011-06-01
such as Oracle or MySQL . Each data model for a specific RDBMS is a distinct PSM. Or the system may want to exchange information with other C2...reduced number of transformations, e.g., from an RDBMS physical schema to the corresponding SQL script needed to instantiate the tables in a relational...tance of models. In engineering, a model serves several purposes: 1. It presents an abstract view of a complex system or of a complex information
Complex-valued Multidirectional Associative Memory
NASA Astrophysics Data System (ADS)
Kobayashi, Masaki; Yamazaki, Haruaki
Hopfield model is a representative associative memory. It was improved to Bidirectional Associative Memory(BAM) by Kosko and Multidirectional Associative Memory(MAM) by Hagiwara. They have two layers or multilayers. Since they have symmetric connections between layers, they ensure to converge. MAM can deal with multiples of many patterns, such as (x1, x2,…), where xm is the pattern on layer-m. Noest, Hirose and Nemoto proposed complex-valued Hopfield model. Lee proposed complex-valued Bidirectional Associative Memory. Zemel proved the rotation invariance of complex-valued Hopfield model. It means that the rotated pattern also stored. In this paper, the complex-valued Multidirectional Associative Memory is proposed. The rotation invariance is also proved. Moreover it is shown by computer simulation that the differences of angles of given patterns are automatically reduced. At first we define complex-valued Multidirectional Associative Memory. Then we define the energy function of network. By using energy function, we prove that the network ensures to converge. Next, we define the learning law and show the characteristic of recall process. The characteristic means that the differences of angles of given patterns are automatically reduced. Especially we prove the following theorem. In case that only a multiple of patterns is stored, if patterns with different angles are given to each layer, the differences are automatically reduced. Finally, we invest that the differences of angles influence the noise robustness. It reduce the noise robustness, because input to each layer become small. We show that by computer simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of the stochastic multiscale models can be very challengeable as the existence of complex uncertainty and multiple physical scales in the models. To efficiently take care of the difficulty, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in eachmore » subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy in certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • Random domain is adaptively decomposed into some subdomains to obtain adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computation efficiency and approximation accuracy in certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computation complexity.« less
Adaptive tracking for complex systems using reduced-order models
NASA Technical Reports Server (NTRS)
Carnigan, Craig R.
1990-01-01
Reduced-order models are considered in the context of parameter adaptive controllers for tracking workspace trajectories. A dual-arm manipulation task is used to illustrate the methodology and provide simulation results. A parameter adaptive controller is designed to track a payload trajectory using a four-parameter model instead of the full-order, nine-parameter model. Several simulations with different payload-to-arm mass ratios are used to illustrate the capabilities of the reduced-order model in tracking the desired trajectory.
Adaptive tracking for complex systems using reduced-order models
NASA Technical Reports Server (NTRS)
Carignan, Craig R.
1990-01-01
Reduced-order models are considered in the context of parameter adaptive controllers for tracking workspace trajectories. A dual-arm manipulation task is used to illustrate the methodology and provide simulation results. A parameter adaptive controller is designed to track the desired position trajectory of a payload using a four-parameter model instead of a full-order, nine-parameter model. Several simulations with different payload-to-arm mass ratios are used to illustrate the capabilities of the reduced-order model in tracking the desired trajectory.
None
2018-01-16
The Red Sky/Red Mesa supercomputing platform dramatically reduces the time required to simulate complex fuel models, from 4-6 months to just 4 weeks, allowing researchers to accelerate the pace at which they can address these complex problems. Its speed also reduces the need for laboratory and field testing, allowing for energy reduction far beyond data center walls.
Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, T.J.; Long, K.S.; Sayre, J.A.
1994-08-01
The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.
Novel Framework for Reduced Order Modeling of Aero-engine Components
NASA Astrophysics Data System (ADS)
Safi, Ali
The present study focuses on the popular dynamic reduction methods used in design of complex assemblies (millions of Degrees of Freedom) where numerous iterations are involved to achieve the final design. Aerospace manufacturers such as Rolls Royce and Pratt & Whitney are actively seeking techniques that reduce computational time while maintaining accuracy of the models. This involves modal analysis of components with complex geometries to determine the dynamic behavior due to non-linearity and complicated loading conditions. In such a case the sub-structuring and dynamic reduction techniques prove to be an efficient tool to reduce design cycle time. The components whose designs are finalized can be dynamically reduced to mass and stiffness matrices at the boundary nodes in the assembly. These matrices conserve the dynamics of the component in the assembly, and thus avoid repeated calculations during the analysis runs for design modification of other components. This thesis presents a novel framework in terms of modeling and meshing of any complex structure, in this case an aero-engine casing. In this study the affect of meshing techniques on the run time are highlighted. The modal analysis is carried out using an extremely fine mesh to ensure all minor details in the structure are captured correctly in the Finite Element (FE) model. This is used as the reference model, to compare against the results of the reduced model. The study also shows the conditions/criteria under which dynamic reduction can be implemented effectively, proving the accuracy of Criag-Bampton (C.B.) method and limitations of Static Condensation. The study highlights the longer runtime needed to produce the reduced matrices of components compared to the overall runtime of the complete unreduced model. Although once the components are reduced, the assembly run is significantly. Hence the decision to use Component Mode Synthesis (CMS) is to be taken judiciously considering the number of iterations that may be required during the design cycle.
ERIC Educational Resources Information Center
Petko, Dominik; Prasse, Doreen; Cantieni, Andrea
2018-01-01
Decades of research have shown that technological change in schools depends on multiple interrelated factors. Structural equation models explaining the interplay of factors often suffer from high complexity and low coherence. To reduce complexity, a more robust structural equation model was built with data from a survey of 349 Swiss primary school…
Reduced modeling of signal transduction – a modular approach
Koschorreck, Markus; Conzelmann, Holger; Ebert, Sybille; Ederer, Michael; Gilles, Ernst Dieter
2007-01-01
Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows to dissect the model into smaller modules that are called layers and can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable. Additionally, the method provides very good approximations especially for macroscopic variables. It can be combined with existing reduction methods without any difficulties. PMID:17854494
Exact model reduction of combinatorial reaction networks
Conzelmann, Holger; Fey, Dirk; Gilles, Ernst D
2008-01-01
Background Receptors and scaffold proteins usually possess a high number of distinct binding domains inducing the formation of large multiprotein signaling complexes. Due to combinatorial reasons the number of distinguishable species grows exponentially with the number of binding domains and can easily reach several millions. Even by including only a limited number of components and binding domains the resulting models are very large and hardly manageable. A novel model reduction technique allows the significant reduction and modularization of these models. Results We introduce methods that extend and complete the already introduced approach. For instance, we provide techniques to handle the formation of multi-scaffold complexes as well as receptor dimerization. Furthermore, we discuss a new modeling approach that allows the direct generation of exactly reduced model structures. The developed methods are used to reduce a model of EGF and insulin receptor crosstalk comprising 5,182 ordinary differential equations (ODEs) to a model with 87 ODEs. Conclusion The methods, presented in this contribution, significantly enhance the available methods to exactly reduce models of combinatorial reaction networks. PMID:18755034
Model predictive control based on reduced order models applied to belt conveyor system.
Chen, Wei; Li, Xin
2016-11-01
In the paper, a model predictive controller based on reduced order model is proposed to control belt conveyor system, which is an electro-mechanics complex system with long visco-elastic body. Firstly, in order to design low-degree controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, MPC algorithm based on reduced order model for belt conveyor system is presented. Because of the error bound between the full-order model and reduced order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, the simulation experiments are shown that balanced truncation method can significantly reduce the model order with high-accuracy and model predictive control based on reduced-model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Performance of Random Effects Model Estimators under Complex Sampling Designs
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
Hillslope threshold response to rainfall: (2) development and use of a macroscale model
Chris B. Graham; Jeffrey J. McDonnell
2010-01-01
Hillslope hydrological response to precipitation is extremely complex and poorly modeled. One possible approach for reducing the complexity of hillslope response and its mathematical parameterization is to look for macroscale hydrological behavior. Hillslope threshold response to storm precipitation is one such macroscale behavior observed at field sites across the...
NASA Astrophysics Data System (ADS)
Xiang, Hong-Jun; Zhang, Zhi-Wei; Shi, Zhi-Fei; Li, Hong
2018-04-01
A fully coupled modeling approach is developed for piezoelectric energy harvesters in this work based on the use of available robust finite element packages and efficient reducing order modeling techniques. At first, the harvester is modeled using finite element packages. The dynamic equilibrium equations of harvesters are rebuilt by extracting system matrices from the finite element model using built-in commands without any additional tools. A Krylov subspace-based scheme is then applied to obtain a reduced-order model for improving simulation efficiency but preserving the key features of harvesters. Co-simulation of the reduced-order model with nonlinear energy harvesting circuits is achieved in a system level. Several examples in both cases of harmonic response and transient response analysis are conducted to validate the present approach. The proposed approach allows to improve the simulation efficiency by several orders of magnitude. Moreover, the parameters used in the equivalent circuit model can be conveniently obtained by the proposed eigenvector-based model order reduction technique. More importantly, this work establishes a methodology for modeling of piezoelectric energy harvesters with any complicated mechanical geometries and nonlinear circuits. The input load may be more complex also. The method can be employed by harvester designers to optimal mechanical structures or by circuit designers to develop novel energy harvesting circuits.
Reduced complexity modeling of Arctic delta dynamics
NASA Astrophysics Data System (ADS)
Piliouras, A.; Lauzon, R.; Rowland, J. C.
2017-12-01
How water and sediment are routed through deltas has important implications for our understanding of nutrient and sediment fluxes to the coastal ocean. These fluxes may be especially important in Arctic environments, because the Arctic ocean receives a disproportionately large amount of river discharge and high latitude regions are expected to be particularly vulnerable to climate change. The Arctic has some of the world's largest but least studied deltas. This lack of data is due to remote and hazardous conditions, sparse human populations, and limited remote sensing resources. In the absence of data, complex models may be of limited scientific utility in understanding Arctic delta dynamics. To overcome this challenge, we adapt the reduced complexity delta-building model DeltaRCM for Arctic environments to explore the influence of sea ice and permafrost on delta morphology and dynamics. We represent permafrost by increasing the threshold for sediment erosion, as permafrost has been found to increase cohesion and reduce channel migration rates. The presence of permafrost in the model results in the creation of more elongate channels, fewer active channels, and a rougher shoreline. We consider several effects of sea ice, including introducing friction which increases flow resistance, constriction of flow by landfast ice, and changes in effective water surface elevation. Flow constriction and increased friction from ice results in a rougher shoreline, more frequent channel switching, decreased channel migration rates, and enhanced deposition offshore of channel mouths. The reduced complexity nature of the model is ideal for generating a basic understanding of which processes unique to Arctic environments may have important effects on delta evolution, and it allows us to explore a variety of rules for incorporating those processes into the model to inform future Arctic delta modelling efforts. Finally, we plan to use the modeling results to determine how the presence of permafrost and sea ice may influence delta morphology and the resulting large-scale patterns of water and sediment fluxes at the coast.
Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W
2015-01-01
Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.
Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.
2015-01-01
Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.
Islam, R; Weir, C; Del Fiol, G
2016-01-01
Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
Takecian, Pedro L.; Oikawa, Marcio K.; Braghetto, Kelly R.; Rocha, Paulo; Lucena, Fred; Kavounis, Katherine; Schlumpf, Karen S.; Acker, Susan; Carneiro-Proietti, Anna B. F.; Sabino, Ester C.; Custer, Brian; Busch, Michael P.; Ferreira, João E.
2013-01-01
Over time, data warehouse (DW) systems have become more difficult to develop because of the growing heterogeneity of data sources. Despite advances in research and technology, DW projects are still too slow for pragmatic results to be generated. Here, we address the following question: how can the complexity of DW development for integration of heterogeneous transactional information systems be reduced? To answer this, we proposed methodological guidelines based on cycles of conceptual modeling and data analysis, to drive construction of a modular DW system. These guidelines were applied to the blood donation domain, successfully reducing the complexity of DW development. PMID:23729945
Takecian, Pedro L; Oikawa, Marcio K; Braghetto, Kelly R; Rocha, Paulo; Lucena, Fred; Kavounis, Katherine; Schlumpf, Karen S; Acker, Susan; Carneiro-Proietti, Anna B F; Sabino, Ester C; Custer, Brian; Busch, Michael P; Ferreira, João E
2013-06-01
Over time, data warehouse (DW) systems have become more difficult to develop because of the growing heterogeneity of data sources. Despite advances in research and technology, DW projects are still too slow for pragmatic results to be generated. Here, we address the following question: how can the complexity of DW development for integration of heterogeneous transactional information systems be reduced? To answer this, we proposed methodological guidelines based on cycles of conceptual modeling and data analysis, to drive construction of a modular DW system. These guidelines were applied to the blood donation domain, successfully reducing the complexity of DW development.
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2017-04-01
Physically-based modeling is a wide-spread tool in understanding and management of natural systems. With the high complexity of many such models and the huge amount of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem are model reduction methods. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, tough. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly looses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable for groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that targets to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using pod and deim. Advances in Water Resources, 97:130 - 143.
Quasi steady-state aerodynamic model development for race vehicle simulations
NASA Astrophysics Data System (ADS)
Mohrfeld-Halterman, J. A.; Uddin, M.
2016-01-01
Presented in this paper is a procedure to develop a high fidelity quasi steady-state aerodynamic model for use in race car vehicle dynamic simulations. Developed to fit quasi steady-state wind tunnel data, the aerodynamic model is regressed against three independent variables: front ground clearance, rear ride height, and yaw angle. An initial dual range model is presented and then further refined to reduce the model complexity while maintaining a high level of predictive accuracy. The model complexity reduction decreases the required amount of wind tunnel data thereby reducing wind tunnel testing time and cost. The quasi steady-state aerodynamic model for the pitch moment degree of freedom is systematically developed in this paper. This same procedure can be extended to the other five aerodynamic degrees of freedom to develop a complete six degree of freedom quasi steady-state aerodynamic model for any vehicle.
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
Data-assisted reduced-order modeling of extreme events in complex dynamical systems
Koumoutsakos, Petros
2018-01-01
The prediction of extreme events, from avalanches and droughts to tsunamis and epidemics, depends on the formulation and analysis of relevant, complex dynamical systems. Such dynamical systems are characterized by high intrinsic dimensionality with extreme events having the form of rare transitions that are several standard deviations away from the mean. Such systems are not amenable to classical order-reduction methods through projection of the governing equations due to the large intrinsic dimensionality of the underlying attractor as well as the complexity of the transient events. Alternatively, data-driven techniques aim to quantify the dynamics of specific, critical modes by utilizing data-streams and by expanding the dimensionality of the reduced-order model using delayed coordinates. In turn, these methods have major limitations in regions of the phase space with sparse data, which is the case for extreme events. In this work, we develop a novel hybrid framework that complements an imperfect reduced order model, with data-streams that are integrated though a recurrent neural network (RNN) architecture. The reduced order model has the form of projected equations into a low-dimensional subspace that still contains important dynamical information about the system and it is expanded by a long short-term memory (LSTM) regularization. The LSTM-RNN is trained by analyzing the mismatch between the imperfect model and the data-streams, projected to the reduced-order space. The data-driven model assists the imperfect model in regions where data is available, while for locations where data is sparse the imperfect model still provides a baseline for the prediction of the system state. We assess the developed framework on two challenging prototype systems exhibiting extreme events. We show that the blended approach has improved performance compared with methods that use either data streams or the imperfect model alone. Notably the improvement is more significant in regions associated with extreme events, where data is sparse. PMID:29795631
Data-assisted reduced-order modeling of extreme events in complex dynamical systems.
Wan, Zhong Yi; Vlachas, Pantelis; Koumoutsakos, Petros; Sapsis, Themistoklis
2018-01-01
The prediction of extreme events, from avalanches and droughts to tsunamis and epidemics, depends on the formulation and analysis of relevant, complex dynamical systems. Such dynamical systems are characterized by high intrinsic dimensionality with extreme events having the form of rare transitions that are several standard deviations away from the mean. Such systems are not amenable to classical order-reduction methods through projection of the governing equations due to the large intrinsic dimensionality of the underlying attractor as well as the complexity of the transient events. Alternatively, data-driven techniques aim to quantify the dynamics of specific, critical modes by utilizing data-streams and by expanding the dimensionality of the reduced-order model using delayed coordinates. In turn, these methods have major limitations in regions of the phase space with sparse data, which is the case for extreme events. In this work, we develop a novel hybrid framework that complements an imperfect reduced order model, with data-streams that are integrated though a recurrent neural network (RNN) architecture. The reduced order model has the form of projected equations into a low-dimensional subspace that still contains important dynamical information about the system and it is expanded by a long short-term memory (LSTM) regularization. The LSTM-RNN is trained by analyzing the mismatch between the imperfect model and the data-streams, projected to the reduced-order space. The data-driven model assists the imperfect model in regions where data is available, while for locations where data is sparse the imperfect model still provides a baseline for the prediction of the system state. We assess the developed framework on two challenging prototype systems exhibiting extreme events. We show that the blended approach has improved performance compared with methods that use either data streams or the imperfect model alone. Notably the improvement is more significant in regions associated with extreme events, where data is sparse.
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
NASA Astrophysics Data System (ADS)
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes have been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: Uncertainty in HETT is relatively small for early times and increases with transit times. Uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution. Introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT, however an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model. This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.
Evaluating soil carbon in global climate models: benchmarking, future projections, and model drivers
NASA Astrophysics Data System (ADS)
Todd-Brown, K. E.; Randerson, J. T.; Post, W. M.; Allison, S. D.
2012-12-01
The carbon cycle plays a critical role in how the climate responds to anthropogenic carbon dioxide. To evaluate how well Earth system models (ESMs) from the Climate Model Intercomparison Project (CMIP5) represent the carbon cycle, we examined predictions of current soil carbon stocks from the historical simulation. We compared the soil and litter carbon pools from 17 ESMs with data on soil carbon stocks from the Harmonized World Soil Database (HWSD). We also examined soil carbon predictions for 2100 from 16 ESMs from the rcp85 (highest radiative forcing) simulation to investigate the effects of climate change on soil carbon stocks. In both analyses, we used a reduced complexity model to separate the effects of variation in model drivers from the effects of model parameters on soil carbon predictions. Drivers included NPP, soil temperature, and soil moisture, and the reduced complexity model represented one pool of soil carbon as a function of these drivers. The ESMs predicted global soil carbon totals of 500 to 2980 Pg-C, compared to 1260 Pg-C in the HWSD. This 5-fold variation in predicted soil stocks was a consequence of a 3.4-fold variation in NPP inputs and 3.8-fold variability in mean global turnover times. None of the ESMs correlated well with the global distribution of soil carbon in the HWSD (Pearson's correlation <0.40, RMSE 9-22 kg m-2). On a biome level there was a broad range of agreement between the ESMs and the HWSD. Some models predicted HWSD biome totals well (R2=0.91) while others did not (R2=0.23). All of the ESM terrestrial decomposition models are structurally similar with outputs that were well described by a reduced complexity model that included NPP and soil temperature (R2 of 0.73-0.93). However, MPI-ESM-LR outputs showed only a moderate fit to this model (R2=0.51), and CanESM2 outputs were better described by a reduced model that included soil moisture (R2=0.74), We also found a broad range in soil carbon responses to climate change predicted by the ESMs, with changes of -480 to 230 Pg-C from 2005-2100. All models that reported NPP and heterotrophic respiration showed increases in both of these processes over the simulated period. In two of the models, soils switched from a global sink for carbon to a net source. Of the remaining models, half predicted that soils were a sink for carbon throughout the time period and the other half predicted that soils were a carbon source.. Heterotrophic respiration in most of the models from 2005-2100 was well explained by a reduced complexity model dependent on soil carbon, soil temperature, and soil moisture (R2 values >0.74). However, MPI-ESM (R2=0.45) showed only moderate fit to this model. Our analysis shows that soil carbon predictions from ESMs are highly variable, with much of this variability due to model parameterization and variations in driving variables. Furthermore, our reduced complexity models show that most variation in ESM outputs can be explained by a simple one-pool model with a small number of drivers and parameters. Therefore, agreement between soil carbon predictions across models could improve substantially by reconciling differences in driving variables and the parameters that link soil carbon with environmental drivers. However it is unclear if this model agreement would reflect what is truly happening in the Earth system.
Leherte, Laurence; Vercauteren, Daniel P
2014-02-01
Reduced point charge models of amino acids are designed, (i) from local extrema positions in charge density distribution functions built from the Poisson equation applied to smoothed molecular electrostatic potential (MEP) functions, and (ii) from local maxima positions in promolecular electron density distribution functions. Corresponding charge values are fitted versus all-atom Amber99 MEPs. To easily generate reduced point charge models for protein structures, libraries of amino acid templates are built. The program GROMACS is used to generate stable Molecular Dynamics trajectories of an Ubiquitin-ligand complex (PDB: 1Q0W), under various implementation schemes, solvation, and temperature conditions. Point charges that are not located on atoms are considered as virtual sites with a nul mass and radius. The results illustrate how the intra- and inter-molecular H-bond interactions are affected by the degree of reduction of the point charge models and give directions for their implementation; a special attention to the atoms selected to locate the virtual sites and to the Coulomb-14 interactions is needed. Results obtained at various temperatures suggest that the use of reduced point charge models allows to probe local potential hyper-surface minima that are similar to the all-atom ones, but are characterized by lower energy barriers. It enables to generate various conformations of the protein complex more rapidly than the all-atom point charge representation. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bezruchko, Konstantin; Davidov, Albert
2009-01-01
In the given article scientific and technical complex for modeling, researching and testing of rocket-space vehicles' power installations which was created in Power Source Laboratory of National Aerospace University "KhAI" is described. This scientific and technical complex gives the opportunity to replace the full-sized tests on model tests and to reduce financial and temporary inputs at modeling, researching and testing of rocket-space vehicles' power installations. Using the given complex it is possible to solve the problems of designing and researching of rocket-space vehicles' power installations efficiently, and also to provide experimental researches of physical processes and tests of solar and chemical batteries of rocket-space complexes and space vehicles. Scientific and technical complex also allows providing accelerated tests, diagnostics, life-time control and restoring of chemical accumulators for rocket-space vehicles' power supply systems.
Deconstructing the core dynamics from a complex time-lagged regulatory biological circuit.
Eriksson, O; Brinne, B; Zhou, Y; Björkegren, J; Tegnér, J
2009-03-01
Complex regulatory dynamics is ubiquitous in molecular networks composed of genes and proteins. Recent progress in computational biology and its application to molecular data generate a growing number of complex networks. Yet, it has been difficult to understand the governing principles of these networks beyond graphical analysis or extensive numerical simulations. Here the authors exploit several simplifying biological circumstances which thereby enable to directly detect the underlying dynamical regularities driving periodic oscillations in a dynamical nonlinear computational model of a protein-protein network. System analysis is performed using the cell cycle, a mathematically well-described complex regulatory circuit driven by external signals. By introducing an explicit time delay and using a 'tearing-and-zooming' approach the authors reduce the system to a piecewise linear system with two variables that capture the dynamics of this complex network. A key step in the analysis is the identification of functional subsystems by identifying the relations between state-variables within the model. These functional subsystems are referred to as dynamical modules operating as sensitive switches in the original complex model. By using reduced mathematical representations of the subsystems the authors derive explicit conditions on how the cell cycle dynamics depends on system parameters, and can, for the first time, analyse and prove global conditions for system stability. The approach which includes utilising biological simplifying conditions, identification of dynamical modules and mathematical reduction of the model complexity may be applicable to other well-characterised biological regulatory circuits. [Includes supplementary material].
NASA Astrophysics Data System (ADS)
Haer, Toon; Botzen, Wouter; de Moel, Hans; Aerts, Jeroen
2015-04-01
In the period 1998-2009, floods triggered roughly 52 billion euro in insured economic losses making floods the most costly natural hazard in Europe. Climate change and socio/economic trends are expected to further aggrevate floods losses in many regions. Research shows that flood risk can be significantly reduced if households install protective measures, and that the implementation of such measures can be stimulated through flood insurance schemes and subsidies. However, the effectiveness of such incentives to stimulate implementation of loss-reducing measures greatly depends on the decision process of individuals and is hardly studied. In our study, we developed an Agent-Based Model that integrates flood damage models, insurance mechanisms, subsidies, and household behaviour models to assess the effectiveness of different economic tools on stimulating households to invest in loss-reducing measures. Since the effectiveness depends on the decision making process of individuals, the study compares different household decision models ranging from standard economic models, to economic models for decision making under risk, to more complex decision models integrating economic models and risk perceptions, opinion dynamics, and the influence of flood experience. The results show the effectiveness of incentives to stimulate investment in loss-reducing measures for different household behavior types, while assuming climate change scenarios. It shows how complex decision models can better reproduce observed real-world behaviour compared to traditional economic models. Furthermore, since flood events are included in the simulations, the results provide an analysis of the dynamics in insured and uninsured losses for households, the costs of reducing risk by implementing loss-reducing measures, the capacity of the insurance market, and the cost of government subsidies under different scenarios. The model has been applied to the City of Rotterdam in The Netherlands.
NASA Astrophysics Data System (ADS)
Liu, Y.; Zheng, L.; Pau, G. S. H.
2016-12-01
A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there has been increasing need in considering and addressing uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROM), which serve as computationally efficient surrogates for high-fidelity models (HFM), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of National Risk Assessment Program (NRAP) and intends to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM based on which the ROM is derived is a multiphase flow and reactive transport model, with 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and the sparsity is obtained by detecting and including only the non-zero PCE coefficients one at a time by iteratively selecting the most contributing coefficients. The computational complexity due to predicting the entire 3-D concentration fields is further mitigated by a dimension reduction procedure-proper orthogonal decomposition (POD). Our numerical results show that utilizing the sparse structure and POD significantly enhances the accuracy and efficiency of the ROMs, laying the basis for further analyses that necessitate a large number of model simulations.
Application of model abstraction techniques to simulate transport in soils
USDA-ARS?s Scientific Manuscript database
Successful understanding and modeling of contaminant transport in soils is the precondition of risk-informed predictions of the subsurface contaminant transport. Exceedingly complex models of subsurface contaminant transport are often inefficient. Model abstraction is the methodology for reducing th...
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
Improving a regional model using reduced complexity and parameter estimation.
Kelson, Victor A; Hunt, Randall J; Haitjema, Henk M
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model's prediction that, for a model that is properly calibrated for heads, regional drawdowns are relatively unaffected by the choice of aquifer properties, but that mine inflows are strongly affected. Paradoxically, by reducing model complexity, we have increased the understanding gained from the modeling effort.
NASA Astrophysics Data System (ADS)
Qi, Di
Turbulent dynamical systems are ubiquitous in science and engineering. Uncertainty quantification (UQ) in turbulent dynamical systems is a grand challenge where the goal is to obtain statistical estimates for key physical quantities. In the development of a proper UQ scheme for systems characterized by both a high-dimensional phase space and a large number of instabilities, significant model errors compared with the true natural signal are always unavoidable due to both the imperfect understanding of the underlying physical processes and the limited computational resources available. One central issue in contemporary research is the development of a systematic methodology for reduced order models that can recover the crucial features both with model fidelity in statistical equilibrium and with model sensitivity in response to perturbations. In the first part, we discuss a general mathematical framework to construct statistically accurate reduced-order models that have skill in capturing the statistical variability in the principal directions of a general class of complex systems with quadratic nonlinearity. A systematic hierarchy of simple statistical closure schemes, which are built through new global statistical energy conservation principles combined with statistical equilibrium fidelity, are designed and tested for UQ of these problems. Second, the capacity of imperfect low-order stochastic approximations to model extreme events in a passive scalar field advected by turbulent flows is investigated. The effects in complicated flow systems are considered including strong nonlinear and non-Gaussian interactions, and much simpler and cheaper imperfect models with model error are constructed to capture the crucial statistical features in the stationary tracer field. Several mathematical ideas are introduced to improve the prediction skill of the imperfect reduced-order models. Most importantly, empirical information theory and statistical linear response theory are applied in the training phase for calibrating model errors to achieve optimal imperfect model parameters; and total statistical energy dynamics are introduced to improve the model sensitivity in the prediction phase especially when strong external perturbations are exerted. The validity of reduced-order models for predicting statistical responses and intermittency is demonstrated on a series of instructive models with increasing complexity, including the stochastic triad model, the Lorenz '96 model, and models for barotropic and baroclinic turbulence. The skillful low-order modeling methods developed here should also be useful for other applications such as efficient algorithms for data assimilation.
Reduction of a linear complex model for respiratory system during Airflow Interruption.
Jablonski, Ireneusz; Mroczka, Janusz
2010-01-01
The paper presents methodology of a complex model reduction to its simpler version - an identifiable inverse model. Its main tool is a numerical procedure of sensitivity analysis (structural and parametric) applied to the forward linear equivalent designed for the conditions of interrupter experiment. Final result - the reduced analog for the interrupter technique is especially worth of notice as it fills a major gap in occlusional measurements, which typically use simple, one- or two-element physical representations. Proposed electrical reduced circuit, being structural combination of resistive, inertial and elastic properties, can be perceived as a candidate for reliable reconstruction and quantification (in the time and frequency domain) of dynamical behavior of the respiratory system in response to a quasi-step excitation by valve closure.
Reliable low precision simulations in land surface models
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.
2017-12-01
Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
Reducing Neuronal Networks to Discrete Dynamics
Terman, David; Ahn, Sungwoo; Wang, Xueying; Just, Winfried
2008-01-01
We consider a general class of purely inhibitory and excitatory-inhibitory neuronal networks, with a general class of network architectures, and characterize the complex firing patterns that emerge. Our strategy for studying these networks is to first reduce them to a discrete model. In the discrete model, each neuron is represented as a finite number of states and there are rules for how a neuron transitions from one state to another. In this paper, we rigorously demonstrate that the continuous neuronal model can be reduced to the discrete model if the intrinsic and synaptic properties of the cells are chosen appropriately. In a companion paper [1], we analyze the discrete model. PMID:18443649
Spatial complexity reduces interaction strengths in the meta-food web of a river floodplain mosaic
Bellmore, James Ryan; Baxter, Colden Vance; Connolly, Patrick J.
2015-01-01
Theory states that both the spatial complexity of landscapes and the strength of interactions between consumers and their resources are important for maintaining biodiversity and the 'balance of nature.' Spatial complexity is hypothesized to promote biodiversity by reducing potential for competitive exclusion; whereas, models show weak trophic interactions can enhance stability and maintain biodiversity by dampening destabilizing oscillations associated with strong interactions. Here we show that spatial complexity can reduce the strength of consumer-resource interactions in natural food webs. By sequentially aggregating food webs of individual aquatic habitat patches across a floodplain mosaic, we found that increasing spatial complexity resulted in decreases in the strength of interactions between predators and prey, owing to a greater proportion of weak interactions and a reduced proportion of strong interactions in the meta-food web. The main mechanism behind this pattern was that some patches provided predation refugia for species which were often strongly preyed upon in other patches. If weak trophic interactions do indeed promote stability, then our findings may signal an additional mechanism by which complexity and stability are linked in nature. In turn, this may have implications for how the values of landscape complexity, and the costs of biophysical homogenization, are assessed.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of themore » SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.« less
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimiza- tion) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimiza- tion and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUAROmore » Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to- observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.« less
Reduced order models for prediction of groundwater quality impacts from CO₂ and brine leakage
Zheng, Liange; Carroll, Susan; Bianchi, Marco; ...
2014-12-31
A careful assessment of the risk associated with geologic CO₂ storage is critical to the deployment of large-scale storage projects. A potential risk is the deterioration of groundwater quality caused by the leakage of CO₂ and brine leakage from deep subsurface reservoirs. In probabilistic risk assessment studies, numerical modeling is the primary tool employed to assess risk. However, the application of traditional numerical models to fully evaluate the impact of CO₂ leakage on groundwater can be computationally complex, demanding large processing times and resources, and involving large uncertainties. As an alternative, reduced order models (ROMs) can be used as highlymore » efficient surrogates for the complex process-based numerical models. In this study, we represent the complex hydrogeological and geochemical conditions in a heterogeneous aquifer and subsequent risk by developing and using two separate ROMs. The first ROM is derived from a model that accounts for the heterogeneous flow and transport conditions in the presence of complex leakage functions for CO₂ and brine. The second ROM is obtained from models that feature similar, but simplified flow and transport conditions, and allow for a more complex representation of all relevant geochemical reactions. To quantify possible impacts to groundwater aquifers, the basic risk metric is taken as the aquifer volume in which the water quality of the aquifer may be affected by an underlying CO₂ storage project. The integration of the two ROMs provides an estimate of the impacted aquifer volume taking into account uncertainties in flow, transport and chemical conditions. These two ROMs can be linked in a comprehensive system level model for quantitative risk assessment of the deep storage reservoir, wellbore leakage, and shallow aquifer impacts to assess the collective risk of CO₂ storage projects.« less
Fractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms
NASA Astrophysics Data System (ADS)
Yu, Yue; Perdikaris, Paris; Karniadakis, George Em
2016-10-01
We develop efficient numerical methods for fractional order PDEs, and employ them to investigate viscoelastic constitutive laws for arterial wall mechanics. Recent simulations using one-dimensional models [1] have indicated that fractional order models may offer a more powerful alternative for modeling the arterial wall response, exhibiting reduced sensitivity to parametric uncertainties compared with the integer-calculus-based models. Here, we study three-dimensional (3D) fractional PDEs that naturally model the continuous relaxation properties of soft tissue, and for the first time employ them to simulate flow structure interactions for patient-specific brain aneurysms. To deal with the high memory requirements and in order to accelerate the numerical evaluation of hereditary integrals, we employ a fast convolution method [2] that reduces the memory cost to O (log (N)) and the computational complexity to O (Nlog (N)). Furthermore, we combine the fast convolution with high-order backward differentiation to achieve third-order time integration accuracy. We confirm that in 3D viscoelastic simulations, the integer order models strongly depends on the relaxation parameters, while the fractional order models are less sensitive. As an application to long-time simulations in complex geometries, we also apply the method to modeling fluid-structure interaction of a 3D patient-specific compliant cerebral artery with an aneurysm. Taken together, our findings demonstrate that fractional calculus can be employed effectively in modeling complex behavior of materials in realistic 3D time-dependent problems if properly designed efficient algorithms are employed to overcome the extra memory requirements and computational complexity associated with the non-local character of fractional derivatives.
Fractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms
Perdikaris, Paris; Karniadakis, George Em
2017-01-01
We develop efficient numerical methods for fractional order PDEs, and employ them to investigate viscoelastic constitutive laws for arterial wall mechanics. Recent simulations using one-dimensional models [1] have indicated that fractional order models may offer a more powerful alternative for modeling the arterial wall response, exhibiting reduced sensitivity to parametric uncertainties compared with the integer-calculus-based models. Here, we study three-dimensional (3D) fractional PDEs that naturally model the continuous relaxation properties of soft tissue, and for the first time employ them to simulate flow structure interactions for patient-specific brain aneurysms. To deal with the high memory requirements and in order to accelerate the numerical evaluation of hereditary integrals, we employ a fast convolution method [2] that reduces the memory cost to O(log(N)) and the computational complexity to O(N log(N)). Furthermore, we combine the fast convolution with high-order backward differentiation to achieve third-order time integration accuracy. We confirm that in 3D viscoelastic simulations, the integer order models strongly depends on the relaxation parameters, while the fractional order models are less sensitive. As an application to long-time simulations in complex geometries, we also apply the method to modeling fluid–structure interaction of a 3D patient-specific compliant cerebral artery with an aneurysm. Taken together, our findings demonstrate that fractional calculus can be employed effectively in modeling complex behavior of materials in realistic 3D time-dependent problems if properly designed efficient algorithms are employed to overcome the extra memory requirements and computational complexity associated with the non-local character of fractional derivatives. PMID:29104310
NASA Astrophysics Data System (ADS)
Georgiou, K.; Abramoff, R. Z.; Harte, J.; Riley, W. J.; Torn, M. S.
2016-12-01
As global temperatures and atmospheric CO2 concentrations continue to increase, soil microbial activity and decomposition of soil organic matter (SOM) are expected to follow suit, potentially limiting soil carbon storage. Traditional global- and ecosystem-scale models simulate SOM decomposition using linear kinetics, which are inherently unable to reproduce carbon-concentration feedbacks, such as priming of native SOM at elevated CO2 concentrations. Recent studies using nonlinear microbial models of SOM decomposition seek to capture these interactions, and several groups are currently integrating these microbial models into Earth System Models (ESMs). However, despite their widespread ability to exhibit nonlinear responses, these models vary tremendously in complexity and, consequently, dynamics. In this study, we explore, both analytically and numerically, the emergent oscillatory behavior and insensitivity of SOM stocks to carbon inputs that have been deemed `unrealistic' in recent microbial models. We discuss the sources of instability in four models of varying complexity, by sequentially reducing complexity of a detailed model that includes microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We also present an alternative representation of microbial turnover that limits population sizes and, thus, reduces oscillations. We compare these models to several long-term carbon input manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that traditional linear and nonlinear models cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures, and that modifying microbial turnover results in more realistic predictions. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in ESMs.
NASA Astrophysics Data System (ADS)
Bellemans, Aurélie; Parente, Alessandro; Magin, Thierry
2018-04-01
The present work introduces a novel approach for obtaining reduced chemistry representations of large kinetic mechanisms in strong non-equilibrium conditions. The need for accurate reduced-order models arises from compression of large ab initio quantum chemistry databases for their use in fluid codes. The method presented in this paper builds on existing physics-based strategies and proposes a new approach based on the combination of a simple coarse grain model with Principal Component Analysis (PCA). The internal energy levels of the chemical species are regrouped in distinct energy groups with a uniform lumping technique. Following the philosophy of machine learning, PCA is applied on the training data provided by the coarse grain model to find an optimally reduced representation of the full kinetic mechanism. Compared to recently published complex lumping strategies, no expert judgment is required before the application of PCA. In this work, we will demonstrate the benefits of the combined approach, stressing its simplicity, reliability, and accuracy. The technique is demonstrated by reducing the complex quantum N2(g+1Σ) -N(S4u ) database for studying molecular dissociation and excitation in strong non-equilibrium. Starting from detailed kinetics, an accurate reduced model is developed and used to study non-equilibrium properties of the N2(g+1Σ) -N(S4u ) system in shock relaxation simulations.
Constitutive Model Calibration via Autonomous Multiaxial Experimentation (Postprint)
2016-09-17
test machine. Experimental data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain...data is reduced and finite element simulations are conducted in parallel with the test based on experimental strain conditions. Optimization methods...be used directly in finite element simulations of more complex geometries. Keywords Axial/torsional experimentation • Plasticity • Constitutive model
Principal process analysis of biological models.
Casagranda, Stefano; Touzeau, Suzanne; Ropers, Delphine; Gouzé, Jean-Luc
2018-06-14
Understanding the dynamical behaviour of biological systems is challenged by their large number of components and interactions. While efforts have been made in this direction to reduce model complexity, they often prove insufficient to grasp which and when model processes play a crucial role. Answering these questions is fundamental to unravel the functioning of living organisms. We design a method for dealing with model complexity, based on the analysis of dynamical models by means of Principal Process Analysis. We apply the method to a well-known model of circadian rhythms in mammals. The knowledge of the system trajectories allows us to decompose the system dynamics into processes that are active or inactive with respect to a certain threshold value. Process activities are graphically represented by Boolean and Dynamical Process Maps. We detect model processes that are always inactive, or inactive on some time interval. Eliminating these processes reduces the complex dynamics of the original model to the much simpler dynamics of the core processes, in a succession of sub-models that are easier to analyse. We quantify by means of global relative errors the extent to which the simplified models reproduce the main features of the original system dynamics and apply global sensitivity analysis to test the influence of model parameters on the errors. The results obtained prove the robustness of the method. The analysis of the sub-model dynamics allows us to identify the source of circadian oscillations. We find that the negative feedback loop involving proteins PER, CRY, CLOCK-BMAL1 is the main oscillator, in agreement with previous modelling and experimental studies. In conclusion, Principal Process Analysis is a simple-to-use method, which constitutes an additional and useful tool for analysing the complex dynamical behaviour of biological systems.
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
NASA Astrophysics Data System (ADS)
Lin, L.; Luo, X.; Qin, F.; Yang, J.
2018-03-01
As one of the combustion products of hydrocarbon fuels in a combustion-heated wind tunnel, water vapor may condense during the rapid expansion process, which will lead to a complex two-phase flow inside the wind tunnel and even change the design flow conditions at the nozzle exit. The coupling of the phase transition and the compressible flow makes the estimation of the condensation effects in such wind tunnels very difficult and time-consuming. In this work, a reduced theoretical model is developed to approximately compute the nozzle-exit conditions of a flow including real-gas and homogeneous condensation effects. Specifically, the conservation equations of the axisymmetric flow are first approximated in the quasi-one-dimensional way. Then, the complex process is split into two steps, i.e., a real-gas nozzle flow but excluding condensation, resulting in supersaturated nozzle-exit conditions, and a discontinuous jump at the end of the nozzle from the supersaturated state to a saturated state. Compared with two-dimensional numerical simulations implemented with a detailed condensation model, the reduced model predicts the flow parameters with good accuracy except for some deviations caused by the two-dimensional effect. Therefore, this reduced theoretical model can provide a fast, simple but also accurate estimation of the condensation effect in combustion-heated hypersonic tunnels.
Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition
NASA Astrophysics Data System (ADS)
Ilbeigi, Shahab; Chelidze, David
2017-11-01
Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
NASA Astrophysics Data System (ADS)
Sinsbeck, Michael; Tartakovsky, Daniel
2015-04-01
Infiltration into top soil can be described by alternative models with different degrees of fidelity: Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: Given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counter-part? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.
Channel Model Optimization with Reflection Residual Component for Indoor MIMO-VLC System
NASA Astrophysics Data System (ADS)
Chen, Yong; Li, Tengfei; Liu, Huanlin; Li, Yichao
2017-12-01
A fast channel modeling method is studied to solve the problem of reflection channel gain for multiple input multiple output-visible light communications (MIMO-VLC) in the paper. For reducing the computational complexity when associating with the reflection times, no more than 3 reflections are taken into consideration in VLC. We think that higher order reflection link consists of corresponding many times line of sight link and firstly present reflection residual component to characterize higher reflection (more than 2 reflections). We perform computer simulation results for point-to-point channel impulse response, receiving optical power and receiving signal to noise ratio. Based on theoretical analysis and simulation results, the proposed method can effectively reduce the computational complexity of higher order reflection in channel modeling.
A genetic algorithm for solving supply chain network design model
NASA Astrophysics Data System (ADS)
Firoozi, Z.; Ismail, N.; Ariafar, S. H.; Tang, S. H.; Ariffin, M. K. M. A.
2013-09-01
Network design is by nature costly and optimization models play significant role in reducing the unnecessary cost components of a distribution network. This study proposes a genetic algorithm to solve a distribution network design model. The structure of the chromosome in the proposed algorithm is defined in a novel way that in addition to producing feasible solutions, it also reduces the computational complexity of the algorithm. Computational results are presented to show the algorithm performance.
Real-time speech encoding based on Code-Excited Linear Prediction (CELP)
NASA Technical Reports Server (NTRS)
Leblanc, Wilfrid P.; Mahmoud, S. A.
1988-01-01
This paper reports on the work proceeding with regard to the development of a real-time voice codec for the terrestrial and satellite mobile radio environments. The codec is based on a complexity reduced version of code-excited linear prediction (CELP). The codebook search complexity was reduced to only 0.5 million floating point operations per second (MFLOPS) while maintaining excellent speech quality. Novel methods to quantize the residual and the long and short term model filters are presented.
COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.
Hromadka, T.V.; Yen, C.C.; Guymon, G.L.
1985-01-01
The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2017-12-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.
Robust simulation of buckled structures using reduced order modeling
NASA Astrophysics Data System (ADS)
Wiebe, R.; Perez, R. A.; Spottswood, S. M.
2016-09-01
Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post- buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use but require no time stepping of a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.
Coller, Ryan J; Nelson, Bergen B; Klitzner, Thomas S; Saenz, Adrianna A; Shekelle, Paul G; Lerner, Carlos F; Chung, Paul J
Interventions to reduce disproportionate hospital use among children with medical complexity (CMC) are needed. We conducted a rigorous, structured process to develop intervention strategies aiming to reduce hospitalizations within a complex care program population. A complex care medical home program used 1) semistructured interviews of caregivers of CMC experiencing acute, unscheduled hospitalizations and 2) literature review on preventing hospitalizations among CMC to develop key drivers for lowering hospital utilization and link them with intervention strategies. Using an adapted version of the RAND/UCLA Appropriateness Method, an expert panel rated each model for effectiveness at impacting each key driver and ultimately reducing hospitalizations. The complex care program applied these findings to select a final set of feasible intervention strategies for implementation. Intervention strategies focused on expanding access to familiar providers, enhancing general or technical caregiver knowledge and skill, creating specific and proactive crisis or contingency plans, and improving transitions between hospital and home. Activities aimed to facilitate family-centered, flexible implementation and consideration of all of the child's environments, including school and while traveling. Tailored activities and special attention to the highest utilizing subset of CMC were also critical for these interventions. A set of intervention strategies to reduce hospitalizations among CMC, informed by key drivers, can be created through a structured, reproducible process. Both this process and the results may be relevant to clinical programs and researchers aiming to reduce hospital utilization through the medical home for CMC. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Dual Quaternions as Constraints in 4D-DPM Models for Pose Estimation.
Martinez-Berti, Enrique; Sánchez-Salmerón, Antonio-José; Ricolfe-Viala, Carlos
2017-08-19
The goal of this research work is to improve the accuracy of human pose estimation using the Deformation Part Model (DPM) without increasing computational complexity. First, the proposed method seeks to improve pose estimation accuracy by adding the depth channel to DPM, which was formerly defined based only on red-green-blue (RGB) channels, in order to obtain a four-dimensional DPM (4D-DPM). In addition, computational complexity can be controlled by reducing the number of joints by taking it into account in a reduced 4D-DPM. Finally, complete solutions are obtained by solving the omitted joints by using inverse kinematics models. In this context, the main goal of this paper is to analyze the effect on pose estimation timing cost when using dual quaternions to solve the inverse kinematics.
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
Moving alcohol prevention research forward-Part I: introducing a complex systems paradigm.
Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller
2018-02-01
The drinking environment is a complex system consisting of a number of heterogeneous, evolving and interacting components, which exhibit circular causality and emergent properties. These characteristics reduce the efficacy of commonly used research approaches, which typically do not account for the underlying dynamic complexity of alcohol consumption and the interdependent nature of diverse factors influencing misuse over time. We use alcohol misuse among college students in the United States as an example for framing our argument for a complex systems paradigm. A complex systems paradigm, grounded in socio-ecological and complex systems theories and computational modeling and simulation, is introduced. Theoretical, conceptual, methodological and analytical underpinnings of this paradigm are described in the context of college drinking prevention research. The proposed complex systems paradigm can transcend limitations of traditional approaches, thereby fostering new directions in alcohol prevention research. By conceptualizing student alcohol misuse as a complex adaptive system, computational modeling and simulation methodologies and analytical techniques can be used. Moreover, use of participatory model-building approaches to generate simulation models can further increase stakeholder buy-in, understanding and policymaking. A complex systems paradigm for research into alcohol misuse can provide a holistic understanding of the underlying drinking environment and its long-term trajectory, which can elucidate high-leverage preventive interventions. © 2017 Society for the Study of Addiction.
NASA Technical Reports Server (NTRS)
Adler, David S.; Roberts, William W., Jr.
1992-01-01
Techniques which use longitude-velocity diagrams to identify molecular cloud complexes in the disk of the Galaxy are investigated by means of model Galactic disks generated from N-body cloud-particle simulations. A procedure similar to the method used to reduce the low-level emission in Galactic l-v diagrams is employed to isolate complexes of emission in the model l-v diagram (LVCs) from the 'background'clouds. The LVCs produced in this manner yield a size-line-width relationship with a slope of 0.58 and a mass spectrum with a slope of 1.55, consistent with Galactic observations. It is demonstrated that associations identified as LVCs are often chance superpositions of clouds spread out along the line of sight in the disk of the model system. This indicates that the l-v diagram cannot be used to unambiguously determine the location of molecular cloud complexes in the model Galactic disk. The modeling results also indicate that the existence of a size-line-width relationship is not a reliable indicator of the physical nature of cloud complexes, in particular, whether the complexes are gravitationally bound objects.
Computational Process Modeling for Additive Manufacturing (OSU)
NASA Technical Reports Server (NTRS)
Bagg, Stacey; Zhang, Wei
2015-01-01
Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost -many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.
NASA Technical Reports Server (NTRS)
Shooman, Martin L.; Cortes, Eladio R.
1991-01-01
The network-complexity of LANs and of LANs that are interconnected by bridges and routers poses a challenging reliability-modeling problem. The present effort toward these problems' solution attempts to simplify them by reducing their number of states through truncation and state merging, as suggested by Shooman and Laemmel (1990). Through the use of state merging, it becomes possible to reduce the Bateman-Cortes 161 state model to a two state model with a closed-form solution. In the case of coupled networks, a technique which allows for problem-decomposition must be used.
Use of paired simple and complex models to reduce predictive bias and quantify uncertainty
NASA Astrophysics Data System (ADS)
Doherty, John; Christensen, Steen
2011-12-01
Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.
Analyzing the causation of a railway accident based on a complex network
NASA Astrophysics Data System (ADS)
Ma, Xin; Li, Ke-Ping; Luo, Zi-Yan; Zhou, Jin
2014-02-01
In this paper, a new model is constructed for the causation analysis of railway accident based on the complex network theory. In the model, the nodes are defined as various manifest or latent accident causal factors. By employing the complex network theory, especially its statistical indicators, the railway accident as well as its key causations can be analyzed from the overall perspective. As a case, the “7.23” China—Yongwen railway accident is illustrated based on this model. The results show that the inspection of signals and the checking of line conditions before trains run played an important role in this railway accident. In conclusion, the constructed model gives a theoretical clue for railway accident prediction and, hence, greatly reduces the occurrence of railway accidents.
Communication Network Integration and Group Uniformity in a Complex Organization.
ERIC Educational Resources Information Center
Danowski, James A.; Farace, Richard V.
This paper contains a discussion of the limitations of research on group processes in complex organizations and the manner in which a procedure for network analysis in on-going systems can reduce problems. The research literature on group uniformity processes and on theoretical models of these processes from an information processing perspective…
Speelman, Amy L; Lehnert, Nicolai
2014-04-15
Researchers have completed extensive studies on heme and non-heme iron-nitrosyl complexes, which are labeled {FeNO}(7) in the Enemark-Feltham notation, but they have had very limited success in producing corresponding, one-electron reduced, {FeNO}(8) complexes where a nitroxyl anion (NO(-)) is formally bound to an iron(II) center. These complexes, and their protonated iron(II)-NHO analogues, are proposed key intermediates in nitrite (NO2(-)) and nitric oxide (NO) reducing enzymes in bacteria and fungi. In addition, HNO is known to have a variety of physiological effects, most notably in the cardiovascular system. HNO may also serve as a signaling molecule in mammals. For these functions, iron-containing proteins may mediate the production of HNO and serve as receptors for HNO in vivo. In this Account, we highlight recent key advances in the preparation, spectroscopic characterization, and reactivity of ferrous heme and non-heme nitroxyl (NO(-)/HNO) complexes that have greatly enhanced our understanding of the potential biological roles of these species. Low-spin (ls) heme {FeNO}(7) complexes (S = 1/2) can be reversibly reduced to the corresponding {FeNO}(8) species, which are stable, diamagnetic compounds. Because the reduction is ligand (NO) centered in these cases, it occurs at extremely negative redox potentials that are at the edge of the biologically feasible range. Interestingly, the electronic structures of ls-{FeNO}(7) and ls-{FeNO}(8) species are strongly correlated with very similar frontier molecular orbitals (FMOs) and thermodynamically strong Fe-NO bonds. In contrast, high-spin (hs) non-heme {FeNO}(7) complexes (S = 3/2) can be reduced at relatively mild redox potentials. Here, the reduction is metal-centered and leads to a paramagnetic (S = 1) {FeNO}(8) complex. The increased electron density at the iron center in these species significantly decreases the covalency of the Fe-NO bond, making the reduced complexes highly reactive. In the absence of steric bulk, monomeric high-spin {FeNO}(8) complexes decompose rapidly. Notably, in a recently prepared, dimeric [{FeNO}(7)]2 species, we observed that reduction leads to rapid N-N bond formation and N2O generation, which directly models the reactivity of flavodiiron NO reductases (FNORs). We have also made key progress in the preparation and stabilization of corresponding HNO complexes, {FeNHO}(8), using both heme and non-heme ligand sets. In both cases, we have taken advantage of sterically bulky coligands to stabilize these species. ls-{FeNO}(8) complexes are basic and easily form corresponding ls-{FeNHO}(8) species, which, however, decompose rapidly via disproportionation and H2 release. Importantly, we recently showed that we can suppress this reaction via steric protection of the bound HNO ligand. As a result, we have demonstrated that ls-{FeNHO}(8) model complexes are stable and amenable to spectroscopic characterization. Neither ls-{FeNO}(8) nor ls-{FeNHO}(8) model complexes are active for N-N coupling, and hence, seem unsuitable as reactive intermediates in nitric oxide reductases (NORs). Hs-{FeNO}(8) complexes are more basic than their hs-{FeNO}(7) precursors, but their electronic structure and reactivity is not as well characterized.
NASA Astrophysics Data System (ADS)
Raghupathy, Arun; Ghia, Karman; Ghia, Urmila
2008-11-01
Compact Thermal Models (CTM) to represent IC packages has been traditionally developed using the DELPHI-based (DEvelopment of Libraries of PHysical models for an Integrated design) methodology. The drawbacks of this method are presented, and an alternative method is proposed. A reduced-order model that provides the complete thermal information accurately with less computational resources can be effectively used in system level simulations. Proper Orthogonal Decomposition (POD), a statistical method, can be used to reduce the order of the degree of freedom or variables of the computations for such a problem. POD along with the Galerkin projection allows us to create reduced-order models that reproduce the characteristics of the system with a considerable reduction in computational resources while maintaining a high level of accuracy. The goal of this work is to show that this method can be applied to obtain a boundary condition independent reduced-order thermal model for complex components. The methodology is applied to the 1D transient heat equation.
Qian, Xinyi Lisa; Yarnal, Careen M; Almeida, David M
2013-01-01
Affective complexity, a manifestation of psychological well-being, refers to the relative independence between positive and negative affect (PA, NA). According to the Dynamic Model of Affect (DMA), stressful situations lead to highly inverse PA-NA relationship, reducing affective complexity. Meanwhile, positive events can sustain affective complexity by restoring PA-NA independence. Leisure, a type of positive events, has been identified as a coping resource. This study used the DMA to assess whether leisure time helps restore affective complexity on stressful days. We found that on days with more leisure time than usual, an individual experienced less negative PA-NA relationship after daily stressful events. The finding demonstrates the value of leisure time as a coping resource and the DMA's contribution to coping research.
Parodi, Jorge; Ormeño, David; Ochoa-de la Paz, Lenin D
2015-01-01
Alzheimer's disease severely compromises cognitive function. One of the mechanisms to explain the pathology of Alzheimer's disease has been the hypotheses of amyloid-pore/channel formation by complex Aβ-aggregates. Clinical studies suggested the moderate alcohol consumption can reduces probability developing neurodegenerative pathologies. A recent report explored the ability of ethanol to disrupt the generation of complex Aβ in vitro and reduce the toxicity in two cell lines. Molecular dynamics simulations were applied to understand how ethanol blocks the aggregation of amyloid. On the other hand, the in silico modeling showed ethanol effect over the dynamics assembling for complex Aβ-aggregates mediated by break the hydrosaline bridges between Asp 23 and Lys 28, was are key element for amyloid dimerization. The amyloid pore/channel hypothesis has been explored only in neuronal models, however recently experiments suggested the frog oocytes such an excellent model to explore the mechanism of the amyloid pore/channel hypothesis. So, the used of frog oocytes to explored the mechanism of amyloid aggregates is new, mainly for amyloid/pore hypothesis. Therefore, this experimental model is a powerful tool to explore the mechanism implicates in the Alzheimer's disease pathology and also suggests a model to prevent the Alzheimer's disease pathology.
Nagakura, Tadashi; Tabata, Kimiyo; Kira, Kazunobu; Hirota, Shinsuke; Clark, Richard; Matsuura, Fumiyoshi; Hiyoshi, Hironobu
2013-08-01
Many anticoagulant drugs target factors common to both the intrinsic and extrinsic coagulation pathways, which may lead to bleeding complications. Since the tissue factor (TF)/factor VIIa complex is associated with thrombosis onset and specifically activates the extrinsic coagulation pathway, compounds that inhibit this complex may provide therapeutic and/or prophylactic benefits with a decreased risk of bleeding. The in vitro enzyme profile and anticoagulation selectivity of the TF/VIIa complex inhibitor, ER-410660, and its prodrug E5539 were assessed using enzyme inhibitory and plasma clotting assays. In vivo effects of ER-410660 and E5539 were determined using a TF-induced, thrombin generation rhesus monkey model; a stasis-induced, venous thrombosis rat model; a photochemically induced, arterial thrombosis rat model; and a rat tail-cut bleeding model. ER-410660 selectively prolonged prothrombin time, but had a less potent anticoagulant effect on the intrinsic pathway. It also exhibited a dose-dependent inhibitory effect on thrombin generation caused by TF-injection in the rhesus monkey model. ER-410660 also reduced venous thrombus weights in the TF-administered, stasis-induced, venous thrombosis rat model and prolonged the occlusion time induced by arterial thrombus formation after vascular injury. The compound was capable of doubling the total bleeding time in the rat tail-cut model, albeit with a considerably higher dose compared to the effective dose in the venous and arterial thrombosis models. Moreover, E5539, an orally available ER-410660 prodrug, reduced the thrombin-anti-thrombin complex levels, induced by TF-injection, in a dose-dependent manner. Selective TF/VIIa inhibitors have potential as novel anticoagulants with a lower propensity for enhancing bleeding. Copyright © 2013 Elsevier Ltd. All rights reserved.
Complexity reduction of biochemical rate expressions.
Schmidt, Henning; Madsen, Mads F; Danø, Sune; Cedersund, Gunnar
2008-03-15
The current trend in dynamical modelling of biochemical systems is to construct more and more mechanistically detailed and thus complex models. The complexity is reflected in the number of dynamic state variables and parameters, as well as in the complexity of the kinetic rate expressions. However, a greater level of complexity, or level of detail, does not necessarily imply better models, or a better understanding of the underlying processes. Data often does not contain enough information to discriminate between different model hypotheses, and such overparameterization makes it hard to establish the validity of the various parts of the model. Consequently, there is an increasing demand for model reduction methods. We present a new reduction method that reduces complex rational rate expressions, such as those often used to describe enzymatic reactions. The method is a novel term-based identifiability analysis, which is easy to use and allows for user-specified reductions of individual rate expressions in complete models. The method is one of the first methods to meet the classical engineering objective of improved parameter identifiability without losing the systems biology demand of preserved biochemical interpretation. The method has been implemented in the Systems Biology Toolbox 2 for MATLAB, which is freely available from http://www.sbtoolbox2.org. The Supplementary Material contains scripts that show how to use it by applying the method to the example models, discussed in this article.
Mello-Andrade, Francyelli; da Costa, Wanderson Lucas; Pires, Wanessa Carvalho; Pereira, Flávia de Castro; Cardoso, Clever Gomes; Lino-Junior, Ruy de Souza; Irusta, Vicente Raul Chavarria; Carneiro, Cristiene Costa; de Melo-Reis, Paulo Roberto; Castro, Carlos Henrique; Almeida, Marcio Aurélio Pinheiro; Batista, Alzir Azevedo; Silveira-Lacerda, Elisângela de Paula
2017-10-01
Peritoneal carcinomatosis is considered as a potentially lethal clinical condition, and the therapeutic options are limited. The antitumor effectiveness of the [Ru(l-Met)(bipy)(dppb)]PF 6 (1) and the [Ru(l-Trp)(bipy)(dppb)]PF 6 (2) complexes were evaluated in the peritoneal carcinomatosis model, Ehrlich ascites carcinoma-bearing Swiss mice. This is the first study that evaluated the effect of Ru(II)/amino acid complexes for antitumor activity in vivo. Complexes 1 and 2 (2 and 6 mg kg -1 ) showed tumor growth inhibition ranging from moderate to high. The mean survival time of animal groups treated with complexes 1 and 2 was higher than in the negative and vehicle control groups. The induction of Ehrlich ascites carcinoma in mice led to alterations in hematological and biochemical parameters, and not the treatment with complexes 1 and 2. The treatment of Ehrlich ascites carcinoma-bearing mice with complexes 1 and 2 increased the number of Annexin V positive cells and cleaved caspase-3 levels and induced changes in the cell morphology and in the cell cycle phases by induction of sub-G1 and G0/G1 cell cycle arrest. In addition, these complexes reduce angiogenesis induced by Ehrlich ascites carcinoma cells in chick embryo chorioallantoic membrane model. The treatment with the LAT1 inhibitor decreased the sensitivity of the Ehrlich ascites carcinoma cells to complexes 1 and 2 in vitro-which suggests that the LAT1 could be related to the mechanism of action of amino acid/ruthenium(II) complexes, consequently decreasing the glucose uptake. Therefore, these complexes could be used to reduce tumor growth and increase mean survival time with less toxicity than cisplatin. Besides, these complexes induce apoptosis by combination of different mechanism of action.
ERIC Educational Resources Information Center
Sanchez, Pablo; Zorrilla, Marta; Duque, Rafael; Nieto-Reyes, Alicia
2011-01-01
Models in Software Engineering are considered as abstract representations of software systems. Models highlight relevant details for a certain purpose, whereas irrelevant ones are hidden. Models are supposed to make system comprehension easier by reducing complexity. Therefore, models should play a key role in education, since they would ease the…
NASA Astrophysics Data System (ADS)
Ganzert, Steven; Guttmann, Josef; Steinmann, Daniel; Kramer, Stefan
Lung protective ventilation strategies reduce the risk of ventilator associated lung injury. To develop such strategies, knowledge about mechanical properties of the mechanically ventilated human lung is essential. This study was designed to develop an equation discovery system to identify mathematical models of the respiratory system in time-series data obtained from mechanically ventilated patients. Two techniques were combined: (i) the usage of declarative bias to reduce search space complexity and inherently providing the processing of background knowledge. (ii) A newly developed heuristic for traversing the hypothesis space with a greedy, randomized strategy analogical to the GSAT algorithm. In 96.8% of all runs the applied equation discovery system was capable to detect the well-established equation of motion model of the respiratory system in the provided data. We see the potential of this semi-automatic approach to detect more complex mathematical descriptions of the respiratory system from respiratory data.
The Purpose of Analytical Models from the Perspective of a Data Provider.
ERIC Educational Resources Information Center
Sheehan, Bernard S.
The purpose of analytical models is to reduce complex institutional management problems and situations to simpler proportions and compressed time frames so that human skills of decision makers can be brought to bear most effectively. Also, modeling cultivates the art of management by forcing explicit and analytical consideration of important…
Modeling climate change impacts on the forest sector
John R. Mills; Ralph Alig; Richard W. Haynes; Darius M. Adams
2000-01-01
The forest sector has had a relatively long history of applying sectorial models to estimate the effects of atmospheric issues such as acid rain, climate change, and the forestry impacts of reduced atmospheric ozone. The models of the forest sector vary in scope and complexity but share a number of common features and databases.
Baldoví, José J; Gaita-Ariño, Alejandro; Coronado, Eugenio
2015-07-28
In a previous study, we introduced the Radial Effective Charge (REC) model to study the magnetic properties of lanthanide single ion magnets. Now, we perform an empirical determination of the effective charges (Zi) and radial displacements (Dr) of this model using spectroscopic data. This systematic study allows us to relate Dr and Zi with chemical factors such as the coordination number and the electronegativities of the metal and the donor atoms. This strategy is being used to drastically reduce the number of free parameters in the modeling of the magnetic and spectroscopic properties of f-element complexes.
An Efficient Model-based Diagnosis Engine for Hybrid Systems Using Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Narasimhan, Sriram; Roychoudhury, Indranil; Daigle, Matthew; Pulido, Belarmino
2013-01-01
Complex hybrid systems are present in a large range of engineering applications, like mechanical systems, electrical circuits, or embedded computation systems. The behavior of these systems is made up of continuous and discrete event dynamics that increase the difficulties for accurate and timely online fault diagnosis. The Hybrid Diagnosis Engine (HyDE) offers flexibility to the diagnosis application designer to choose the modeling paradigm and the reasoning algorithms. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. However, HyDE faces some problems regarding performance in terms of complexity and time. Our focus in this paper is on developing efficient model-based methodologies for online fault diagnosis in complex hybrid systems. To do this, we propose a diagnosis framework where structural model decomposition is integrated within the HyDE diagnosis framework to reduce the computational complexity associated with the fault diagnosis of hybrid systems. As a case study, we apply our approach to a diagnostic testbed, the Advanced Diagnostics and Prognostics Testbed (ADAPT), using real data.
The application of sensitivity analysis to models of large scale physiological systems
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
A survey of the literature of sensitivity analysis as it applies to biological systems is reported as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, and in identifying relative parameter influence in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first order calculations of system behavior is presented.
On the dimension of complex responses in nonlinear structural vibrations
NASA Astrophysics Data System (ADS)
Wiebe, R.; Spottswood, S. M.
2016-07-01
The ability to accurately model engineering systems under extreme dynamic loads would prove a major breakthrough in many aspects of aerospace, mechanical, and civil engineering. Extreme loads frequently induce both nonlinearities and coupling which increase the complexity of the response and the computational cost of finite element models. Dimension reduction has recently gained traction and promises the ability to distill dynamic responses down to a minimal dimension without sacrificing accuracy. In this context, the dimensionality of a response is related to the number of modes needed in a reduced order model to accurately simulate the response. Thus, an important step is characterizing the dimensionality of complex nonlinear responses of structures. In this work, the dimensionality of the nonlinear response of a post-buckled beam is investigated. Significant detail is dedicated to carefully introducing the experiment, the verification of a finite element model, and the dimensionality estimation algorithm as it is hoped that this system may help serve as a benchmark test case. It is shown that with minor modifications, the method of false nearest neighbors can quantitatively distinguish between the response dimension of various snap-through, non-snap-through, random, and deterministic loads. The state-space dimension of the nonlinear system in question increased from 2-to-10 as the system response moved from simple, low-level harmonic to chaotic snap-through. Beyond the problem studied herein, the techniques developed will serve as a prescriptive guide in developing fast and accurate dimensionally reduced models of nonlinear systems, and eventually as a tool for adaptive dimension-reduction in numerical modeling. The results are especially relevant in the aerospace industry for the design of thin structures such as beams, panels, and shells, which are all capable of spatio-temporally complex dynamic responses that are difficult and computationally expensive to model.
Modeling bed load transport and step-pool morphology with a reduced-complexity approach
NASA Astrophysics Data System (ADS)
Saletti, Matteo; Molnar, Peter; Hassan, Marwan A.; Burlando, Paolo
2016-04-01
Steep mountain channels are complex fluvial systems, where classical methods developed for lowland streams fail to capture the dynamics of sediment transport and bed morphology. Estimations of sediment transport based on average conditions have more than one order of magnitude of uncertainty because of the wide grain-size distribution of the bed material, the small relative submergence of coarse grains, the episodic character of sediment supply, and the complex boundary conditions. Most notably, bed load transport is modulated by the structure of the bed, where grains are imbricated in steps and similar bedforms and, therefore, they are much more stable then predicted. In this work we propose a new model based on a reduced-complexity (RC) approach focused on the reproduction of the step-pool morphology. In our 2-D cellular-automaton model entrainment, transport and deposition of particles are considered via intuitive rules based on physical principles. A parsimonious set of parameters allows the control of the behavior of the system, and the basic processes can be considered in a deterministic or stochastic way. The probability of entrainment of grains (and, as a consequence, particle travel distances and resting times) is a function of flow conditions and bed topography. Sediment input is fed at the upper boundary of the channel at a constant or variable rate. Our model yields realistic results in terms of longitudinal bed profiles and sediment transport trends. Phases of aggradation and degradation can be observed in the channel even under a constant input and the memory of the morphology can be quantified with long-range persistence indicators. Sediment yield at the channel outlet shows intermittency as observed in natural streams. Steps are self-formed in the channel and their stability is tested against the model parameters. Our results show the potential of RC models as complementary tools to more sophisticated models. They provide a realistic description of complex morphological systems and help to better identify the key physical principles that rule their dynamics.
D'Sousa Costa, Cinara O; Araujo Neto, João H; Baliza, Ingrid R S; Dias, Rosane B; Valverde, Ludmila de F; Vidal, Manuela T A; Sales, Caroline B S; Rocha, Clarissa A G; Moreira, Diogo R M; Soares, Milena B P; Batista, Alzir A; Bezerra, Daniel P
2017-11-28
Piplartine (piperlongumine) is a plant-derived molecule that has been receiving intense interest due to its anticancer characteristics that target the oxidative stress. In the present paper, two novel piplartine-containing ruthenium complexes [Ru(piplartine)(dppf)(bipy)](PF 6 ) 2 (1) and [Ru(piplartine)(dppb)(bipy)](PF 6 ) 2 (2) were synthesized and investigated for their cellular and molecular responses on cancer cell lines. We found that both complexes are more potent than metal-free piplartine in a panel of cancer cell lines on monolayer cultures, as well in 3D model of cancer multicellular spheroids formed from human colon carcinoma HCT116 cells. Mechanistic studies uncovered that the complexes reduced the cell growth and caused phosphatidylserine externalization, internucleosomal DNA fragmentation, caspase-3 activation and loss of the mitochondrial transmembrane potential on HCT116 cells. Moreover, the pre-treatment with Z-VAD(OMe)-FMK, a pan-caspase inhibitor, reduced the complexes-induced apoptosis, indicating cell death by apoptosis through caspase-dependent and mitochondrial intrinsic pathways. Treatment with the complexes also caused a marked increase in the production of reactive oxygen species (ROS), including hydrogen peroxide, superoxide anion and nitric oxide, and decreased reduced glutathione levels. Application of N-acetyl-cysteine, an antioxidant, reduced the ROS levels and apoptosis induced by the complexes, indicating activation of ROS-mediated apoptosis pathway. RNA transcripts of several genes, including gene related to the cell cycle, apoptosis and oxidative stress, were regulated under treatment. However, the complexes failed to induce DNA intercalation. In conclusion, the complexes are more potent than piplartine against different cancer cell lines and are able to induce caspase-dependent and mitochondrial intrinsic apoptosis on HCT116 cells by ROS-mediated pathway.
García-Diéguez, Carlos; Bernard, Olivier; Roca, Enrique
2013-03-01
The Anaerobic Digestion Model No. 1 (ADM1) is a complex model which is widely accepted as a common platform for anaerobic process modeling and simulation. However, it has a large number of parameters and states that hinder its calibration and use in control applications. A principal component analysis (PCA) technique was extended and applied to simplify the ADM1 using data of an industrial wastewater treatment plant processing winery effluent. The method shows that the main model features could be obtained with a minimum of two reactions. A reduced stoichiometric matrix was identified and the kinetic parameters were estimated on the basis of representative known biochemical kinetics (Monod and Haldane). The obtained reduced model takes into account the measured states in the anaerobic wastewater treatment (AWT) plant and reproduces the dynamics of the process fairly accurately. The reduced model can support on-line control, optimization and supervision strategies for AWT plants. Copyright © 2013 Elsevier Ltd. All rights reserved.
Modeling and simulation of ocean wave propagation using lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Nuraiman, Dian
2017-10-01
In this paper, we present on modeling and simulation of ocean wave propagation from the deep sea to the shoreline. This requires high computational cost for simulation with large domain. We propose to couple a 1D shallow water equations (SWE) model with a 2D incompressible Navier-Stokes equations (NSE) model in order to reduce the computational cost. The coupled model is solved using the lattice Boltzmann method (LBM) with the lattice Bhatnagar-Gross-Krook (BGK) scheme. Additionally, a special method is implemented to treat the complex behavior of free surface close to the shoreline. The result shows the coupled model can reduce computational cost significantly compared to the full NSE model.
Qian, Xinyi (Lisa); Yarnal, Careen M.; Almeida, David M.
2013-01-01
Affective complexity, a manifestation of psychological well-being, refers to the relative independence between positive and negative affect (PA, NA). According to the Dynamic Model of Affect (DMA), stressful situations lead to highly inverse PA-NA relationship, reducing affective complexity. Meanwhile, positive events can sustain affective complexity by restoring PA-NA independence. Leisure, a type of positive events, has been identified as a coping resource. This study used the DMA to assess whether leisure time helps restore affective complexity on stressful days. We found that on days with more leisure time than usual, an individual experienced less negative PA-NA relationship after daily stressful events. The finding demonstrates the value of leisure time as a coping resource and the DMA’s contribution to coping research. PMID:24659826
Hopkins, Jim
2016-01-01
The main concepts of the free energy (FE) neuroscience developed by Karl Friston and colleagues parallel those of Freud's Project for a Scientific Psychology. In Hobson et al. (2014) these include an innate virtual reality generator that produces the fictive prior beliefs that Freud described as the primary process. This enables Friston's account to encompass a unified treatment-a complexity theory-of the role of virtual reality in both dreaming and mental disorder. In both accounts the brain operates to minimize FE aroused by sensory impingements-including interoceptive impingements that report compliance with biological imperatives-and constructs a representation/model of the causes of impingement that enables this minimization. In Friston's account (variational) FE equals complexity minus accuracy, and is minimized by increasing accuracy and decreasing complexity. Roughly the brain (or model) increases accuracy together with complexity in waking. This is mediated by consciousness-creating active inference-by which it explains sensory impingements in terms of perceptual experiences of their causes. In sleep it reduces complexity by processes that include both synaptic pruning and consciousness/virtual reality/dreaming in REM. The consciousness-creating active inference that effects complexity-reduction in REM dreaming must operate on FE-arousing data distinct from sensory impingement. The most relevant source is remembered arousals of emotion, both recent and remote, as processed in SWS and REM on "active systems" accounts of memory consolidation/reconsolidation. Freud describes these remembered arousals as condensed in the dreamwork for use in the conscious contents of dreams, and similar condensation can be seen in symptoms. Complexity partly reflects emotional conflict and trauma. This indicates that dreams and symptoms are both produced to reduce complexity in the form of potentially adverse (traumatic or conflicting) arousals of amygdala-related emotions. Mental disorder is thus caused by computational complexity together with mechanisms like synaptic pruning that have evolved for complexity-reduction; and important features of disorder can be understood in these terms. Details of the consilience among Freudian, systems consolidation, and complexity-reduction accounts appear clearly in the analysis of a single fragment of a dream, indicating also how complexity reduction proceeds by a process resembling Bayesian model selection.
Reduced complexity structural modeling for automated airframe synthesis
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1987-01-01
A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional-cantilever and joined wing configurations.
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high dimensional data, which often contain correlated and noisy predictors. As sample sizes, to train and validate empirical models, seem not to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also through fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, by creating latent variables or by shrinking parameter coefficients. This paper is proposing a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra, to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used, and too complex to make inferences about the underlying process.
Remote sensing image ship target detection method based on visual attention model
NASA Astrophysics Data System (ADS)
Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong
2017-11-01
The traditional methods of detecting ship targets in remote sensing images mostly use sliding window to search the whole image comprehensively. However, the target usually occupies only a small fraction of the image. This method has high computational complexity for large format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving the computational efficiency and reducing the difficulty of analysis. Considering of that, a method of ship target detection in remote sensing images based on visual attention model was proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, and improve the detection efficiency of ship targets in remote sensing images.
Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-01-01
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
Modeling the chemistry of complex petroleum mixtures.
Quann, R J
1998-01-01
Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecular or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure--activity relationships of chemical kinetics in the development of models. PMID:9860903
Representing Operational Modes for Situation Awareness
NASA Astrophysics Data System (ADS)
Kirchhübel, Denis; Lind, Morten; Ravn, Ole
2017-01-01
Operating complex plants is an increasingly demanding task for human operators. Diagnosis of and reaction to on-line events requires the interpretation of real time data. Vast amounts of sensor data as well as operational knowledge about the state and design of the plant are necessary to deduct reasonable reactions to abnormal situations. Intelligent computational support tools can make the operator’s task easier, but they require knowledge about the overall system in form of some model. While tools used for fault-tolerant control design based on physical principles and relations are valuable tools for designing robust systems, the models become too complex when considering the interactions on a plant-wide level. The alarm systems meant to support human operators in the diagnosis of the plant-wide situation on the other hand fail regularly in situations where these interactions of systems lead to many related alarms overloading the operator with alarm floods. Functional modelling can provide a middle way to reduce the complexity of plant-wide models by abstracting from physical details to more general functions and behaviours. Based on functional models the propagation of failures through the interconnected systems can be inferred and alarm floods can potentially be reduced to their root-cause. However, the desired behaviour of a complex system changes due to operating procedures that require more than one physical and functional configuration. In this paper a consistent representation of possible configurations is deduced from the analysis of an exemplary start-up procedure by functional models. The proposed interpretation of the modelling concepts simplifies the functional modelling of distinct modes. The analysis further reveals relevant links between the quantitative sensor data and the qualitative perspective of the diagnostics tool based on functional models. This will form the basis for the ongoing development of a novel real-time diagnostics system based on the on-line adaptation of the underlying MFM model.
Multiscale Modeling of Cardiac Cellular Energetics
BASSINGTHWAIGHTE, JAMES B.; CHIZECK, HOWARD J.; ATLAS, LES E.; QIAN, HONG
2010-01-01
Multiscale modeling is essential to integrating knowledge of human physiology starting from genomics, molecular biology, and the environment through the levels of cells, tissues, and organs all the way to integrated systems behavior. The lowest levels concern biophysical and biochemical events. The higher levels of organization in tissues, organs, and organism are complex, representing the dynamically varying behavior of billions of cells interacting together. Models integrating cellular events into tissue and organ behavior are forced to resort to simplifications to minimize computational complexity, thus reducing the model’s ability to respond correctly to dynamic changes in external conditions. Adjustments at protein and gene regulatory levels shortchange the simplified higher-level representations. Our cell primitive is composed of a set of subcellular modules, each defining an intracellular function (action potential, tricarboxylic acid cycle, oxidative phosphorylation, glycolysis, calcium cycling, contraction, etc.), composing what we call the “eternal cell,” which assumes that there is neither proteolysis nor protein synthesis. Within the modules are elements describing each particular component (i.e., enzymatic reactions of assorted types, transporters, ionic channels, binding sites, etc.). Cell subregions are stirred tanks, linked by diffusional or transporter-mediated exchange. The modeling uses ordinary differential equations rather than stochastic or partial differential equations. This basic model is regarded as a primitive upon which to build models encompassing gene regulation, signaling, and long-term adaptations in structure and function. During simulation, simpler forms of the model are used, when possible, to reduce computation. However, when this results in error, the more complex and detailed modules and elements need to be employed to improve model realism. The processes of error recognition and of mapping between different levels of model form complexity are challenging but are essential for successful modeling of large-scale systems in reasonable time. Currently there is to this end no established methodology from computational sciences. PMID:16093514
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulted approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated in seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
Tracer Flux Balance at an Urban Canyon Intersection
NASA Astrophysics Data System (ADS)
Carpentieri, Matteo; Robins, Alan G.
2010-05-01
Despite their importance for pollutant dispersion in urban areas, the special features of dispersion at street intersections are rarely taken into account by operational air quality models. Several previous studies have demonstrated the complex flow patterns that occur at street intersections, even with simple geometry. This study presents results from wind-tunnel experiments on a reduced scale model of a complex but realistic urban intersection, located in central London. Tracer concentration measurements were used to derive three-dimensional maps of the concentration field within the intersection. In combination with a previous study (Carpentieri et al., Boundary-Layer Meteorol 133:277-296, 2009) where the velocity field was measured in the same model, a methodology for the calculation of the mean tracer flux balance at the intersection was developed and applied. The calculation highlighted several limitations of current state-of-the-art canyon dispersion models, arising mainly from the complex geometry of the intersection. Despite its limitations, the proposed methodology could be further developed in order to derive, assess and implement street intersection dispersion models for complex urban areas.
Improved dense trajectories for action recognition based on random projection and Fisher vectors
NASA Astrophysics Data System (ADS)
Ai, Shihui; Lu, Tongwei; Xiong, Yudian
2018-03-01
As an important application of intelligent monitoring system, the action recognition in video has become a very important research area of computer vision. In order to improve the accuracy rate of the action recognition in video with improved dense trajectories, one advanced vector method is introduced. Improved dense trajectories combine Fisher Vector with Random Projection. The method realizes the reduction of the characteristic trajectory though projecting the high-dimensional trajectory descriptor into the low-dimensional subspace based on defining and analyzing Gaussian mixture model by Random Projection. And a GMM-FV hybrid model is introduced to encode the trajectory feature vector and reduce dimension. The computational complexity is reduced by Random Projection which can drop Fisher coding vector. Finally, a Linear SVM is used to classifier to predict labels. We tested the algorithm in UCF101 dataset and KTH dataset. Compared with existed some others algorithm, the result showed that the method not only reduce the computational complexity but also improved the accuracy of action recognition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey
Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that allows to perform the scheduling in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. The experimental evaluation of the integrated approach takes considerably less computational effort than the previous approach.
Parodi, Jorge; Ormeño, David; la Paz, Lenin D. Ochoa-de
2015-01-01
Alzheimer's disease severely compromises cognitive function. One of the mechanisms to explain the pathology of Alzheimer’s disease has been the hypotheses of amyloid-pore/channel formation by complex Aβ-aggregates. Clinical studies suggested the moderate alcohol consumption can reduces probability developing neurodegenerative pathologies. A recent report explored the ability of ethanol to disrupt the generation of complex Aβ in vitro and reduce the toxicity in two cell lines. Molecular dynamics simulations were applied to understand how ethanol blocks the aggregation of amyloid. On the other hand, the in silico modeling showed ethanol effect over the dynamics assembling for complex Aβ-aggregates mediated by break the hydrosaline bridges between Asp 23 and Lys 28, was are key element for amyloid dimerization. The amyloid pore/channel hypothesis has been explored only in neuronal models, however recently experiments suggested the frog oocytes such an excellent model to explore the mechanism of the amyloid pore/channel hypothesis. So, the used of frog oocytes to explored the mechanism of amyloid aggregates is new, mainly for amyloid/pore hypothesis. Therefore, this experimental model is a powerful tool to explore the mechanism implicates in the Alzheimer’s disease pathology and also suggests a model to prevent the Alzheimer’s disease pathology. [BMB Reports 2015; 48(1): 13-18] PMID:25047445
Recent advances in QM/MM free energy calculations using reference potentials.
Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L
2015-05-01
Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically-based simplifications has shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.
Complex Instruction Set Quantum Computing
NASA Astrophysics Data System (ADS)
Sanders, G. D.; Kim, K. W.; Holton, W. C.
1998-03-01
In proposed quantum computers, electromagnetic pulses are used to implement logic gates on quantum bits (qubits). Gates are unitary transformations applied to coherent qubit wavefunctions and a universal computer can be created using a minimal set of gates. By applying many elementary gates in sequence, desired quantum computations can be performed. This reduced instruction set approach to quantum computing (RISC QC) is characterized by serial application of a few basic pulse shapes and a long coherence time. However, the unitary matrix of the overall computation is ultimately a unitary matrix of the same size as any of the elementary matrices. This suggests that we might replace a sequence of reduced instructions with a single complex instruction using an optimally taylored pulse. We refer to this approach as complex instruction set quantum computing (CISC QC). One trades the requirement for long coherence times for the ability to design and generate potentially more complex pulses. We consider a model system of coupled qubits interacting through nearest neighbor coupling and show that CISC QC can reduce the time required to perform quantum computations.
Muthiah, Muthunarayanan; Che, Hui-Lian; Kalash, Santhosh; Jo, Jihoon; Choi, Seok-Yong; Kim, Won Jong; Cho, Chong Su; Lee, Jae Young; Park, In-Kyu
2015-02-01
In this study, thiol-modified siRNA (SH-siRNA) was delivered by bioreducible polyethylenimine (ssPEI), to enhance physicochemical properties of polyplexes and function of siRNA through disulfide bonding between SH-siRNA and ssPEI. The ssPEI was utilized to deliver Akt1 SH-siRNA for suppression of Akt1 mRNA and blockage of Akt1 protein translation, resulting in reduced cellular proliferation and the induction of apoptosis. Disulfide bondings between the ssPEI and SH-siRNA through thiol groups in both were confirmed by DTT treatment. Complexation between ssPEI and Akt1SH-siRNA was enhanced and reduced surface charge of ssPEI/Akt1SH-siRNA complexes with smaller average particle sizes even at lower N/P ratios was obtained compared with PEI/Akt1siRNA ones. Cellular uptake of ssPEI/Akt1SH-siRNA complexes in CT-26 mouse colon cancer cells was also enhanced. The ssPEI/Akt1SH-siRNA complexes reduced proliferation and increased apoptosis of mouse colon cancer cells in vitro. In an in vivo mouse tumor model, the complexes reduced tumor proliferation and downregulation of Akt1 compared to controls. Copyright © 2014 Elsevier B.V. All rights reserved.
78 FR 8535 - Medicare Program: Comprehensive End-Stage Renal Disease Care Model Announcement
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-06
... develop and test innovative health care payment and service delivery models that show promise of reducing program expenditures, while preserving or enhancing the quality of care for Medicare, Medicaid, and... disease (ESRD). This population has complex health care needs, typically with comorbid conditions and...
Multiplexed Predictive Control of a Large Commercial Turbofan Engine
NASA Technical Reports Server (NTRS)
Richter, hanz; Singaraju, Anil; Litt, Jonathan S.
2008-01-01
Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.
NASA Astrophysics Data System (ADS)
Duan, Hongjie; Li, Lijun; Tao, Junyi
2017-06-01
The pin-cycloid gear planetary reducer with ring-plate-type is a new type of reducers. It has high transmission ratio range and high efficiency. In this paper the working principle of pin-cycloid gear planetary reducer is discussed, and the structure of the reducer is designed. Especially for the complexity and the difficulty in modelling of the cycloid gear tooth profile, the parametric design module of cycloid gear is developed to solve the cycloid gear modelling problem through the second development of Solid Works. At last, the speed schemes of the input shaft and output shaft of the reducer are obtained by the motion simulation. Through the analysis of the simulation curves, the rationality of the structure design is proved, which provides a theoretical basis for the design and manufacture of the reducer.
The Difference between Uncertainty and Information, and Why This Matters
NASA Astrophysics Data System (ADS)
Nearing, G. S.
2016-12-01
Earth science investigation and arbitration (for decision making) is very often organized around a concept of uncertainty. It seems relatively straightforward that the purpose of our science is to reduce uncertainty about how environmental systems will react and evolve under different conditions. I propose here that approaching a science of complex systems as a process of quantifying and reducing uncertainty is a mistake, and specifically a mistake that is rooted in certain rather hisoric logical errors. Instead I propose that we should be asking questions about information. I argue here that an information-based perspective facilitates almost trivial answers to environmental science questions that are either difficult or theoretically impossible to answer when posed as questions about uncertainty. In particular, I propose that an information-centric perspective leads to: Coherent and non-subjective hypothesis tests for complex system models. Process-level diagnostics for complex systems models. Methods for building complex systems models that allow for inductive inference without the need for a priori specification of likelihood functions or ad hoc error metrics. Asymptotically correct quantification of epistemic uncertainty. To put this in slightly more basic terms, I propose that an information-theoretic philosophy of science has the potential to resolve certain important aspects of the Demarcation Problem and the Duhem-Quine Problem, and that Hydrology and other Earth Systems Sciences can immediately capitalize on this to address some of our most difficult and persistent problems.
Al-Sadoon, Mohammed A. G.; Zuid, Abdulkareim; Jones, Stephen M. R.; Noras, James M.
2017-01-01
This paper proposes a new low complexity angle of arrival (AOA) method for signal direction estimation in multi-element smart wireless communication systems. The new method estimates the AOAs of the received signals directly from the received signals with significantly reduced complexity since it does not need to construct the correlation matrix, invert the matrix or apply eigen-decomposition, which are computationally expensive. A mathematical model of the proposed method is illustrated and then verified using extensive computer simulations. Both linear and circular sensors arrays are studied using various numerical examples. The method is systematically compared with other common and recently introduced AOA methods over a wide range of scenarios. The simulated results show that the new method has several advantages in terms of reduced complexity and improved accuracy under the assumptions of correlated signals and limited numbers of snapshots. PMID:29140313
Al-Sadoon, Mohammed A G; Ali, Nazar T; Dama, Yousf; Zuid, Abdulkareim; Jones, Stephen M R; Abd-Alhameed, Raed A; Noras, James M
2017-11-15
This paper proposes a new low complexity angle of arrival (AOA) method for signal direction estimation in multi-element smart wireless communication systems. The new method estimates the AOAs of the received signals directly from the received signals with significantly reduced complexity since it does not need to construct the correlation matrix, invert the matrix or apply eigen-decomposition, which are computationally expensive. A mathematical model of the proposed method is illustrated and then verified using extensive computer simulations. Both linear and circular sensors arrays are studied using various numerical examples. The method is systematically compared with other common and recently introduced AOA methods over a wide range of scenarios. The simulated results show that the new method has several advantages in terms of reduced complexity and improved accuracy under the assumptions of correlated signals and limited numbers of snapshots.
Experimentally modeling stochastic processes with less memory by the use of a quantum processor
Palsson, Matthew S.; Gu, Mile; Ho, Joseph; Wiseman, Howard M.; Pryde, Geoff J.
2017-01-01
Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process’ statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of Cq = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems. PMID:28168218
NASA Astrophysics Data System (ADS)
Rubinstein, A.; Sabirianov, R. F.; Mei, W. N.; Namavar, F.; Khoynezhad, A.
2010-08-01
Using a nonlocal electrostatic approach that incorporates the short-range structure of the contacting media, we evaluated the electrostatic contribution to the energy of the complex formation of two model proteins. In this study, we have demonstrated that the existence of an ordered interfacial water layer at the protein-solvent interface reduces the charging energy of the proteins in the aqueous solvent, and consequently increases the electrostatic contribution to the protein binding (change in free energy upon the complex formation of two proteins). This is in contrast with the finding of the continuum electrostatic model, which suggests that electrostatic interactions are not strong enough to compensate for the unfavorable desolvation effects.
Rubinstein, A; Sabirianov, R F; Mei, W N; Namavar, F; Khoynezhad, A
2010-08-01
Using a nonlocal electrostatic approach that incorporates the short-range structure of the contacting media, we evaluated the electrostatic contribution to the energy of the complex formation of two model proteins. In this study, we have demonstrated that the existence of an ordered interfacial water layer at the protein-solvent interface reduces the charging energy of the proteins in the aqueous solvent, and consequently increases the electrostatic contribution to the protein binding (change in free energy upon the complex formation of two proteins). This is in contrast with the finding of the continuum electrostatic model, which suggests that electrostatic interactions are not strong enough to compensate for the unfavorable desolvation effects.
An S-Oxygenated [NiFe] Complex Modelling Sulfenate Intermediates of an O2 -Tolerant Hydrogenase.
Lindenmaier, Nils J; Wahlefeld, Stefan; Bill, Eckhard; Szilvási, Tibor; Eberle, Christopher; Yao, Shenglai; Hildebrandt, Peter; Horch, Marius; Zebger, Ingo; Driess, Matthias
2017-02-13
To understand the molecular details of O 2 -tolerant hydrogen cycling by a soluble NAD + -reducing [NiFe] hydrogenase, we herein present the first bioinspired heterobimetallic S-oxygenated [NiFe] complex as a structural and vibrational spectroscopic model for the oxygen-inhibited [NiFe] active site. This compound and its non-S-oxygenated congener were fully characterized, and their electronic structures were elucidated in a combined experimental and theoretical study with emphasis on the bridging sulfenato moiety. Based on the vibrational spectroscopic properties of these complexes, we also propose novel strategies for exploring S-oxygenated intermediates in hydrogenases and similar enzymes. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Computational modeling of carbohydrate recognition in protein complex
NASA Astrophysics Data System (ADS)
Ishida, Toyokazu
2017-11-01
To understand the mechanistic principle of carbohydrate recognition in proteins, we propose a systematic computational modeling strategy to identify complex carbohydrate chain onto the reduced 2D free energy surface (2D-FES), determined by MD sampling combined with QM/MM energy corrections. In this article, we first report a detailed atomistic simulation study of the norovirus capsid proteins with carbohydrate antigens based on ab initio QM/MM combined with MD-FEP simulations. The present result clearly shows that the binding geometries of complex carbohydrate antigen are determined not by one single, rigid carbohydrate structure, but rather by the sum of averaged conformations mapped onto the minimum free energy region of QM/MM 2D-FES.
Tykesson, Emil; Mao, Yang; Maccarana, Marco; Pu, Yi; Gao, Jinshan; Lin, Cheng; Zaia, Joseph; Westergren-Thorsson, Gunilla; Ellervik, Ulf; Malmström, Lars; Malmström, Anders
2016-02-01
Distinct from template-directed biosynthesis of nucleic acids and proteins, the enzymatic synthesis of heterogeneous polysaccharides is a complex process that is difficult to study using common analytical tools. Therefore, the mode of action and processivity of those enzymes are largely unknown. Dermatan sulfate epimerase 1 (DS-epi1) is the predominant enzyme during the formation of iduronic acid residues in the glycosaminoglycan dermatan sulfate. Using recombinant DS-epi1 as a model enzyme, we describe a tandem mass spectrometry-based method to study the mode of action of polysaccharide processing enzymes. The enzyme action on the substrate was monitored by hydrogen-deuterium exchange mass spectrometry and the sequence information was then fed into mathematical models with two different assumptions of the mode of action for the enzyme: processive reducing end to non-reducing end, and processive non-reducing end to reducing end. Model data was scored by correlation to experimental data and it was found that DS-epi1 attacks its substrate on a random position, followed by a processive mode of modification towards the non-reducing end and that the substrate affinity of the enzyme is negatively affected by each additional epimerization event. It could also be shown that the smallest active substrate was the reducing end uronic acid in a tetrasaccharide and that octasaccharides and longer oligosaccharides were optimal substrates. The method of using tandem mass spectrometry to generate sequence information of the complex enzymatic products in combination with in silico modeling can be potentially applied to study the mode of action of other enzymes involved in polysaccharide biosynthesis.
A continuum theory for multicomponent chromatography modeling.
Pfister, David; Morbidelli, Massimo; Nicoud, Roger-Marc
2016-05-13
A continuum theory is proposed for modeling multicomponent chromatographic systems under linear conditions. The model is based on the description of complex mixtures, possibly involving tens or hundreds of solutes, by a continuum. The present approach is shown to be very efficient when dealing with a large number of similar components presenting close elution behaviors and whose individual analytical characterization is impossible. Moreover, approximating complex mixtures by continuous distributions of solutes reduces the required number of model parameters to the few ones specific to the characterization of the selected continuous distributions. Therefore, in the frame of the continuum theory, the simulation of large multicomponent systems gets simplified and the computational effectiveness of the chromatographic model is thus dramatically improved. Copyright © 2016 Elsevier B.V. All rights reserved.
A Simplified Biosphere Model for Global Climate Studies.
NASA Astrophysics Data System (ADS)
Xue, Y.; Sellers, P. J.; Kinter, J. L.; Shukla, J.
1991-03-01
The Simple Biosphere Model (SiB) as described in Sellers et al. is a bio-physically based model of land surface-atmosphere interaction. For some general circulation model (GCM) climate studies, further simplifications are desirable to have greater computation efficiency, and more important, to consolidate the parametric representation. Three major reductions in the complexity of SiB have been achieved in the present study.The diurnal variation of surface albedo is computed in SiB by means of a comprehensive yet complex calculation. Since the diurnal cycle is quite regular for each vegetation type, this calculation can be simplified considerably. The effect of root zone soil moisture on stomatal resistance is substantial, but the computation in SiB is complicated and expensive. We have developed approximations, which simulate the effects of reduced soil moisture more simply, keeping the essence of the biophysical concepts used in SiB.The surface stress and the fluxes of heat and moisture between the top of the vegetation canopy and an atmospheric reference level have been parameterized in an off-line version of SiB based upon the studies by Businger et al. and Paulson. We have developed a linear relationship between Richardson number and aero-dynamic resistance. Finally, the second vegetation layer of the original model does not appear explicitly after simplification. Compared to the model of Sellers et al., we have reduced the number of input parameters from 44 to 21. A comparison of results using the reduced parameter biosphere with those from the original formulation in a GCM and a zero-dimensional model shows the simplified version to reproduce the original results quite closely. After simplification, the computational requirement of SiB was reduced by about 55%.
Gradient-based model calibration with proxy-model assistance
NASA Astrophysics Data System (ADS)
Burrows, Wesley; Doherty, John
2016-02-01
Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, this allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
Construction and analysis of a modular model of caspase activation in apoptosis
Harrington, Heather A; Ho, Kenneth L; Ghosh, Samik; Tung, KC
2008-01-01
Background A key physiological mechanism employed by multicellular organisms is apoptosis, or programmed cell death. Apoptosis is triggered by the activation of caspases in response to both extracellular (extrinsic) and intracellular (intrinsic) signals. The extrinsic and intrinsic pathways are characterized by the formation of the death-inducing signaling complex (DISC) and the apoptosome, respectively; both the DISC and the apoptosome are oligomers with complex formation dynamics. Additionally, the extrinsic and intrinsic pathways are coupled through the mitochondrial apoptosis-induced channel via the Bcl-2 family of proteins. Results A model of caspase activation is constructed and analyzed. The apoptosis signaling network is simplified through modularization methodologies and equilibrium abstractions for three functional modules. The mathematical model is composed of a system of ordinary differential equations which is numerically solved. Multiple linear regression analysis investigates the role of each module and reduced models are constructed to identify key contributions of the extrinsic and intrinsic pathways in triggering apoptosis for different cell lines. Conclusion Through linear regression techniques, we identified the feedbacks, dissociation of complexes, and negative regulators as the key components in apoptosis. The analysis and reduced models for our model formulation reveal that the chosen cell lines predominately exhibit strong extrinsic caspase, typical of type I cell, behavior. Furthermore, under the simplified model framework, the selected cells lines exhibit different modes by which caspase activation may occur. Finally the proposed modularized model of apoptosis may generalize behavior for additional cells and tissues, specifically identifying and predicting components responsible for the transition from type I to type II cell behavior. PMID:19077196
Preprocessing Inconsistent Linear System for a Meaningful Least Squares Solution
NASA Technical Reports Server (NTRS)
Sen, Syamal K.; Shaykhian, Gholam Ali
2011-01-01
Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system irrespective of the degree of inconsistency has always a least-squares solution, one needs to check whether an equation is too much inconsistent or, equivalently too much contradictory. Such an equation will affect/distort the least-squares solution to such an extent that renders it unacceptable/unfit to be used in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtain the minimum norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm presented in Matlab reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution
NASA Technical Reports Server (NTRS)
Sen, Symal K.; Shaykhian, Gholam Ali
2011-01-01
Mathematical models of many physical/statistical problems are systems of linear equations Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system irrespective of the degree of inconsistency has always a least-squares solution, one needs to check whether an equation is too much inconsistent or, equivalently too much contradictory. Such an equation will affect/distort the least-squares solution to such an extent that renders it unacceptable/unfit to be used in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtain the . minimum norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm presented in Matlab reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
A review of surrogate models and their application to groundwater modeling
NASA Astrophysics Data System (ADS)
Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.
2015-08-01
The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model in function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes, or reducing the numerical resolution. In discussing the application to groundwater modeling of these methods, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
Model reduction of the numerical analysis of Low Impact Developments techniques
NASA Astrophysics Data System (ADS)
Brunetti, Giuseppe; Šimůnek, Jirka; Wöhling, Thomas; Piro, Patrizia
2017-04-01
Mechanistic models have proven to be accurate and reliable tools for the numerical analysis of the hydrological behavior of Low Impact Development (LIDs) techniques. However, their widespread adoption is limited by their complexity and computational cost. Recent studies have tried to address this issue by investigating the application of new techniques, such as surrogate-based modeling. However, current results are still limited and fragmented. One of such approaches, the Model Order Reduction (MOR) technique, can represent a valuable tool for reducing the computational complexity of a numerical problems by computing an approximation of the original model. While this technique has been extensively used in water-related problems, no studies have evaluated its use in LIDs modeling. Thus, the main aim of this study is to apply the MOR technique for the development of a reduced order model (ROM) for the numerical analysis of the hydrologic behavior of LIDs, in particular green roofs. The model should be able to correctly reproduce all the hydrological processes of a green roof while reducing the computational cost. The proposed model decouples the subsurface water dynamic of a green roof in a) one-dimensional (1D) vertical flow through a green roof itself and b) one-dimensional saturated lateral flow along the impervious rooftop. The green roof is horizontally discretized in N elements. Each element represents a vertical domain, which can have different properties or boundary conditions. The 1D Richards equation is used to simulate flow in the substrate and drainage layers. Simulated outflow from the vertical domain is used as a recharge term for saturated lateral flow, which is described using the kinematic wave approximation of the Boussinesq equation. The proposed model has been compared with the mechanistic model HYDRUS-2D, which numerically solves the Richards equation for the whole domain. The HYDRUS-1D code has been used for the description of vertical flow, while a Finite Volume Scheme has been adopted for lateral flow. Two scenarios involving flat and steep green roofs were analyzed. Results confirmed the accuracy of the reduced order model, which was able to reproduce both subsurface outflow and the moisture distribution in the green roof, significantly reducing the computational cost.
Steel, Jason C; Cavanagh, Heather M A; Burton, Mark A; Abu-Asab, Mones S; Tsokos, Maria; Morris, John C; Kalle, Wouter H J
2007-04-01
We aimed to increase the efficiency of adenoviral vectors by limiting adenoviral spread from the target site and reducing unwanted host immune responses to the vector. We complexed adenoviral vectors with DDAB-DOPE liposomes to form adenovirus-liposomal (AL) complexes. AL complexes were delivered by intratumoral injection in an immunocompetent subcutaneous rat tumor model and the immunogenicity of the AL complexes and the expression efficiency in the tumor and other organs was examined. Animals treated with the AL complexes had significantly lower levels of beta-galactosidase expression in systemic tissues compared to animals treated with the naked adenovirus (NA) (P<0.05). The tumor to non-tumor ratio of beta-galactosidase marker expression was significantly higher for the AL complex treated animals. NA induced significantly higher titers of adenoviral-specific antibodies compared to the AL complexes (P<0.05). The AL complexes provided protection (immunoshielding) to the adenovirus from neutralizing antibody. Forty-seven percent more beta-galactosidase expression was detected following intratumoral injection with AL complexes compared to the NA in animals pre-immunized with adenovirus. Complexing of adenovirus with liposomes provides a simple method to enhance tumor localization of the vector, decrease the immunogenicity of adenovirus, and provide protection of the virus from pre-existing neutralizing antibodies.
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Improving the Aircraft Design Process Using Web-based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.
2003-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and muitifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR
NASA Astrophysics Data System (ADS)
Ma, Yongjun
The fermentation process is very complex and non-linear, many parameters are not easy to measure directly on line, soft sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft sensor modeling of fed-batch fermentation process. v-SVR is a novel type of learning machine. It can control the accuracy of fitness and prediction error by adjusting the parameter v. An on-line training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR has low error rate and better generalization with appropriate v.
Determining relative error bounds for the CVBEM
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Methods provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of resulting modeling error within a boundary element from the error produced in another boundary element as a function of geometric distance. ?? 1985.
INDIVIDUAL-BASED MODELS: POWERFUL OR POWER STRUGGLE?
Willem, L; Stijven, S; Hens, N; Vladislavleva, E; Broeckhove, J; Beutels, P
2015-01-01
Individual-based models (IBMs) offer endless possibilities to explore various research questions but come with high model complexity and computational burden. Large-scale IBMs have become feasible but the novel hardware architectures require adapted software. The increased model complexity also requires systematic exploration to gain thorough system understanding. We elaborate on the development of IBMs for vaccine-preventable infectious diseases and model exploration with active learning. Investment in IBM simulator code can lead to significant runtime reductions. We found large performance differences due to data locality. Sorting the population once, reduced simulation time by a factor two. Storing person attributes separately instead of using person objects also seemed more efficient. Next, we improved model performance up to 70% by structuring potential contacts based on health status before processing disease transmission. The active learning approach we present is based on iterative surrogate modelling and model-guided experimentation. Symbolic regression is used for nonlinear response surface modelling with automatic feature selection. We illustrate our approach using an IBM for influenza vaccination. After optimizing the parameter spade, we observed an inverse relationship between vaccination coverage and the clinical attack rate reinforced by herd immunity. These insights can be used to focus and optimise research activities, and to reduce both dimensionality and decision uncertainty.
NASA Astrophysics Data System (ADS)
Saidi, Samah; Alharzali, Nissrin; Berriche, Hamid
2017-04-01
The potential energy curves and spectroscopic constants of the ground-state of the Mg-Rg (Rg = He, Ne, Ar, Kr, and Xe) van der Waals complexes are generated by the Tang-Toennies potential model and a set of derived combining rules. The parameters of the model are calculated from the potentials of the homonuclear magnesium and rare-gas dimers. The predicted spectroscopic constants are comparable to other available theoretical and experimental results, except in the case of Mg-He, we note that there are large differences between various determinations. Moreover, in order to reveal relative differences between species more obviously we calculated the reduced potential of these five systems. The curves are clumped closely together, but at intermediate range the Mg-He reduced potential is clearly very different from the others.
Fisher, Rohan; Lassa, Jonatan
2017-04-18
Modelling travel time to services has become a common public health tool for planning service provision but the usefulness of these analyses is constrained by the availability of accurate input data and limitations inherent in the assumptions and parameterisation. This is particularly an issue in the developing world where access to basic data is limited and travel is often complex and multi-modal. Improving the accuracy and relevance in this context requires greater accessibility to, and flexibility in, travel time modelling tools to facilitate the incorporation of local knowledge and the rapid exploration of multiple travel scenarios. The aim of this work was to develop simple open source, adaptable, interactive travel time modelling tools to allow greater access to and participation in service access analysis. Described are three interconnected applications designed to reduce some of the barriers to the more wide-spread use of GIS analysis of service access and allow for complex spatial and temporal variations in service availability. These applications are an open source GIS tool-kit and two geo-simulation models. The development of these tools was guided by health service issues from a developing world context but they present a general approach to enabling greater access to and flexibility in health access modelling. The tools demonstrate a method that substantially simplifies the process for conducting travel time assessments and demonstrate a dynamic, interactive approach in an open source GIS format. In addition this paper provides examples from empirical experience where these tools have informed better policy and planning. Travel and health service access is complex and cannot be reduced to a few static modeled outputs. The approaches described in this paper use a unique set of tools to explore this complexity, promote discussion and build understanding with the goal of producing better planning outcomes. The accessible, flexible, interactive and responsive nature of the applications described has the potential to allow complex environmental social and political considerations to be incorporated and visualised. Through supporting evidence-based planning the innovative modelling practices described have the potential to help local health and emergency response planning in the developing world.
Reduced Complexity Modelling of Urban Floodplain Inundation
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Brasington, J.; Mihir, M.
2004-12-01
Significant recent advances in floodplain inundation modelling have been achieved by directly coupling 1d channel hydraulic models with a raster storage cell approximation for floodplain flows. The strengths of this reduced-complexity model structure derive from its explicit dependence on a digital elevation model (DEM) to parameterize flows through riparian areas, providing a computationally efficient algorithm to model heterogeneous floodplains. Previous applications of this framework have generally used mid-range grid scales (101-102 m), showing the capacity of the models to simulate long reaches (103-104 m). However, the increasing availability of precision DEMs derived from airborne laser altimetry (LIDAR) enables their use at very high spatial resolutions (100-101 m). This spatial scale offers the opportunity to incorporate the complexity of the built environment directly within the floodplain DEM and simulate urban flooding. This poster describes a series of experiments designed to explore model functionality at these reduced scales. Important questions are considered, raised by this new approach, about the reliability and representation of the floodplain topography and built environment, and the resultant sensitivity of inundation forecasts. The experiments apply a raster floodplain model to reconstruct a 1:100 year flood event on the River Granta in eastern England, which flooded 72 properties in the town of Linton in October 2001. The simulations use a nested-scale model to maintain efficiency. A 2km by 4km urban zone is represented by a high-resolution DEM derived from single-pulse LIDAR data supplied by the UK Environment Agency, together with surveyed data and aerial photography. Novel methods of processing the raw data to provide the individual structure detail required are investigated and compared. This is then embedded within a lower-resolution model application at the reach scale which provides boundary conditions based on recorded flood stage. The high resolution predictions on a scale commensurate with urban structures make possible a multi-criteria validation which combines verification of reach-scale characteristics such as downstream flow and inundation extent with internal validation of flood depth at individual sites.
Master-slave system with force feedback based on dynamics of virtual model
NASA Technical Reports Server (NTRS)
Nojima, Shuji; Hashimoto, Hideki
1994-01-01
A master-slave system can extend manipulating and sensing capabilities of a human operator to a remote environment. But the master-slave system has two serious problems: one is the mechanically large impedance of the system; the other is the mechanical complexity of the slave for complex remote tasks. These two problems reduce the efficiency of the system. If the slave has local intelligence, it can help the human operator by using its good points like fast calculation and large memory. The authors suggest that the slave is a dextrous hand with many degrees of freedom able to manipulate an object of known shape. It is further suggested that the dimensions of the remote work space be shared by the human operator and the slave. The effect of the large impedance of the system can be reduced in a virtual model, a physical model constructed in a computer with physical parameters as if it were in the real world. A method to determine the damping parameter dynamically for the virtual model is proposed. Experimental results show that this virtual model is better than the virtual model with fixed damping.
Validation of a reduced-order jet model for subsonic and underexpanded hydrogen jets
Li, Xuefang; Hecht, Ethan S.; Christopher, David M.
2016-01-01
Much effort has been made to model hydrogen releases from leaks during potential failures of hydrogen storage systems. A reduced-order jet model can be used to quickly characterize these flows, with low computational cost. Notional nozzle models are often used to avoid modeling the complex shock structures produced by the underexpanded jets by determining an “effective” source to produce the observed downstream trends. In our work, the mean hydrogen concentration fields were measured in a series of subsonic and underexpanded jets using a planar laser Rayleigh scattering system. Furthermore, we compared the experimental data to a reduced order jet modelmore » for subsonic flows and a notional nozzle model coupled to the jet model for underexpanded jets. The values of some key model parameters were determined by comparisons with the experimental data. Finally, the coupled model was also validated against hydrogen concentrations measurements for 100 and 200 bar hydrogen jets with the predictions agreeing well with data in the literature.« less
Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani
2015-03-01
In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Square) algorithm used for the identification of the reduced complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the 3rd order SVD-PARAFC-Volterra model provided using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and the cubic kernels respectively of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists on the execution of the classical RGLS algorithm in alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that the algorithm convergence canno׳t be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example by satisfying the determined convergence conditions. To raise the elegies of the proposed algorithm, we proceed to its comparison with the classical Alternating Recursive Least Squares (ARLS) presented in the literature. The comparison has been built on a non-linear satellite channel and a benchmark system CSTR (Continuous Stirred Tank Reactor). Moreover the efficiency of the proposed identification approach is proved on an experimental Communicating Two Tank system (CTTS). Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Liebler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
Entropy, complexity, and Markov diagrams for random walk cancer models
NASA Astrophysics Data System (ADS)
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-01
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Liebler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) technique of optimization based on steepest descent algorithm is known to be inept for its poor performance and does not ensure global convergence. Nonlinear and complex DC resistivity data require efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as Radial basis algorithm (RBA) and Levenberg-Marquardt algorithms (LMA) have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined here the efficiency of trained LMA and RB networks by using 2-D synthetic resistivity data and then finally applied to the actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN reconstruction resistivity results are compared with the result of existing inversion approaches, which are in good agreement. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth model also.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vögele, Martin; Department of Theoretical Biophysics, Max Planck Institute of Biophysics, Frankfurt a. M.; Holm, Christian
2015-12-28
We present simulations of aqueous polyelectrolyte complexes with new MARTINI models for the charged polymers poly(styrene sulfonate) and poly(diallyldimethylammonium). Our coarse-grained polyelectrolyte models allow us to study large length and long time scales with regard to chemical details and thermodynamic properties. The results are compared to the outcomes of previous atomistic molecular dynamics simulations and verify that electrostatic properties are reproduced by our MARTINI coarse-grained approach with reasonable accuracy. Structural similarity between the atomistic and the coarse-grained results is indicated by a comparison between the pair radial distribution functions and the cumulative number of surrounding particles. Our coarse-grained models aremore » able to quantitatively reproduce previous findings like the correct charge compensation mechanism and a reduced dielectric constant of water. These results can be interpreted as the underlying reason for the stability of polyelectrolyte multilayers and complexes and validate the robustness of the proposed models.« less
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
Alvarez, Renae; Ginsburg, Jacob; Grabowski, Jessica; Post, Sharon; Rosenberg, Walter
2016-04-01
The hospital experience is taxing and confusing for patients and their families, particularly those with limited economic and social resources. This complexity often leads to disengagement, poor adherence to the plan of care, and high readmission rates. Novel approaches to addressing the complexities of transitional care are emerging as possible solutions. The Bridge Model is a person-centered, social work-led, interdisciplinary transitional care intervention that helps older adults safely transition from the hospital back to their homes and communities. The Bridge Model combines 3 key components-care coordination, case management, and patient engagement-which provide a seamless transition during this stressful time and improve the overall quality of transitional care for older adults, including reducing hospital readmissions. The post Affordable Care Act (ACA) and managed care environment's emphasis on value and quality support further development and expansion of transitional care strategies, such as the Bridge Model, which offer promising avenues to fulfil the triple aim by improving the quality of individual patient care while also impacting population health and controlling per capita costs.
Computational and Organotypic Modeling of Microcephaly (Teratology Society)
Microcephaly is associated with reduced cortical surface area and ventricular dilations. Many genetic and environmental factors precipitate this malformation, including prenatal alcohol exposure and maternal Zika infection. This complexity motivates the engineering of computation...
Ammonia formation by a thiolate-bridged diiron amide complex as a nitrogenase mimic
NASA Astrophysics Data System (ADS)
Li, Yang; Li, Ying; Wang, Baomin; Luo, Yi; Yang, Dawei; Tong, Peng; Zhao, Jinfeng; Luo, Lun; Zhou, Yuhan; Chen, Si; Cheng, Fang; Qu, Jingping
2013-04-01
Although nitrogenase enzymes routinely convert molecular nitrogen into ammonia under ambient temperature and pressure, this reaction is currently carried out industrially using the Haber-Bosch process, which requires extreme temperatures and pressures to activate dinitrogen. Biological fixation occurs through dinitrogen and reduced NxHy species at multi-iron centres of compounds bearing sulfur ligands, but it is difficult to elucidate the mechanistic details and to obtain stable model intermediate complexes for further investigation. Metal-based synthetic models have been applied to reveal partial details, although most models involve a mononuclear system. Here, we report a diiron complex bridged by a bidentate thiolate ligand that can accommodate HN=NH. Following reductions and protonations, HN=NH is converted to NH3 through pivotal intermediate complexes bridged by N2H3- and NH2- species. Notably, the final ammonia release was effected with water as the proton source. Density functional theory calculations were carried out, and a pathway of biological nitrogen fixation is proposed.
Reducing Spatial Data Complexity for Classification Models
NASA Astrophysics Data System (ADS)
Ruta, Dymitr; Gabrys, Bogdan
2007-11-01
Intelligent data analytics gradually becomes a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power current state-of-the-art predictive models still can not handle massive and noisy corporate data warehouses. What is more adaptive and real-time operational environment requires multiple models to be frequently retrained which further hinders their use. Various data reduction techniques ranging from data sampling up to density retention models attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by the electrostatic field interaction where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data that acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with the soft accumulative class weights allowing for efficient online updates and as shown in a series of experiments if coupled with Parzen Density Classifier (PDC) significantly outperforms competitive data condensation methods in terms of classification performance at the comparable compression levels.
Arnould, V M-R; Hammami, H; Soyeurt, H; Gengler, N
2010-09-01
Random regression test-day models using Legendre polynomials are commonly used for the estimation of genetic parameters and genetic evaluation for test-day milk production traits. However, some researchers have reported that these models present some undesirable properties such as the overestimation of variances at the edges of lactation. Describing genetic variation of saturated fatty acids expressed in milk fat might require the testing of different models. Therefore, 3 different functions were used and compared to take into account the lactation curve: (1) Legendre polynomials with the same order as currently applied for genetic model for production traits; 2) linear splines with 10 knots; and 3) linear splines with the same 10 knots reduced to 3 parameters. The criteria used were Akaike's information and Bayesian information criteria, percentage square biases, and log-likelihood function. These criteria indentified Legendre polynomials and linear splines with 10 knots reduced to 3 parameters models as the most useful. Reducing more complex models using eigenvalues seemed appealing because the resulting models are less time demanding and can reduce convergence difficulties, because convergence properties also seemed to be improved. Finally, the results showed that the reduced spline model was very similar to the Legendre polynomials model. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Zhao, Lei; Gossmann, Toni I; Waxman, David
2016-03-21
The Wright-Fisher model is an important model in evolutionary biology and population genetics. It has been applied in numerous analyses of finite populations with discrete generations. It is recognised that real populations can behave, in some key aspects, as though their size that is not the census size, N, but rather a smaller size, namely the effective population size, Ne. However, in the Wright-Fisher model, there is no distinction between the effective and census population sizes. Equivalently, we can say that in this model, Ne coincides with N. The Wright-Fisher model therefore lacks an important aspect of biological realism. Here, we present a method that allows Ne to be directly incorporated into the Wright-Fisher model. The modified model involves matrices whose size is determined by Ne. Thus apart from increased biological realism, the modified model also has reduced computational complexity, particularly so when Ne⪡N. For complex problems, it may be hard or impossible to numerically analyse the most commonly-used approximation of the Wright-Fisher model that incorporates Ne, namely the diffusion approximation. An alternative approach is simulation. However, the simulations need to be sufficiently detailed that they yield an effective size that is different to the census size. Simulations may also be time consuming and have attendant statistical errors. The method presented in this work may then be the only alternative to simulations, when Ne differs from N. We illustrate the straightforward application of the method to some problems involving allele fixation and the determination of the equilibrium site frequency spectrum. We then apply the method to the problem of fixation when three alleles are segregating in a population. This latter problem is significantly more complex than a two allele problem and since the diffusion equation cannot be numerically solved, the only other way Ne can be incorporated into the analysis is by simulation. We have achieved good accuracy in all cases considered. In summary, the present work extends the realism and tractability of an important model of evolutionary biology and population genetics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Near-optimal experimental design for model selection in systems biology.
Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M
2013-10-15
Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best polynomial-complexity constant approximation factor, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. Toolbox 'NearOED' available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).
Hopkins, Jim
2016-01-01
The main concepts of the free energy (FE) neuroscience developed by Karl Friston and colleagues parallel those of Freud's Project for a Scientific Psychology. In Hobson et al. (2014) these include an innate virtual reality generator that produces the fictive prior beliefs that Freud described as the primary process. This enables Friston's account to encompass a unified treatment—a complexity theory—of the role of virtual reality in both dreaming and mental disorder. In both accounts the brain operates to minimize FE aroused by sensory impingements—including interoceptive impingements that report compliance with biological imperatives—and constructs a representation/model of the causes of impingement that enables this minimization. In Friston's account (variational) FE equals complexity minus accuracy, and is minimized by increasing accuracy and decreasing complexity. Roughly the brain (or model) increases accuracy together with complexity in waking. This is mediated by consciousness-creating active inference—by which it explains sensory impingements in terms of perceptual experiences of their causes. In sleep it reduces complexity by processes that include both synaptic pruning and consciousness/virtual reality/dreaming in REM. The consciousness-creating active inference that effects complexity-reduction in REM dreaming must operate on FE-arousing data distinct from sensory impingement. The most relevant source is remembered arousals of emotion, both recent and remote, as processed in SWS and REM on “active systems” accounts of memory consolidation/reconsolidation. Freud describes these remembered arousals as condensed in the dreamwork for use in the conscious contents of dreams, and similar condensation can be seen in symptoms. Complexity partly reflects emotional conflict and trauma. This indicates that dreams and symptoms are both produced to reduce complexity in the form of potentially adverse (traumatic or conflicting) arousals of amygdala-related emotions. Mental disorder is thus caused by computational complexity together with mechanisms like synaptic pruning that have evolved for complexity-reduction; and important features of disorder can be understood in these terms. Details of the consilience among Freudian, systems consolidation, and complexity-reduction accounts appear clearly in the analysis of a single fragment of a dream, indicating also how complexity reduction proceeds by a process resembling Bayesian model selection. PMID:27471478
Structure, Intent and Conformance Monitoring in ATC
NASA Technical Reports Server (NTRS)
Reynolds, Tom G.; Histon, Jonathan M.; Davison, Hayley J.; Hansman, R. John
2004-01-01
Infield studies of current Air Traffic Control operations it is found that controllers rely on underlying airspace structure to reduce the complexity of the planning and conformance monitoring tasks. The structure appears to influence the controller's working mental model through abstractions that reduce the apparent cognitive complexity. These structure-based abstractions are useful for the controller's key tasks of planning, implementing, monitoring, and evaluating tactical situations. In addition, the structure-based abstractions appear to be important in the maintenance of Situation Awareness. The process of conformance monitoring is analyzed in more detail and an approach to conformance monitoring which utilizes both the structure-based abstractions and intent is presented.
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
An assembly process model based on object-oriented hierarchical time Petri Nets
NASA Astrophysics Data System (ADS)
Wang, Jiapeng; Liu, Shaoli; Liu, Jianhua; Du, Zenghui
2017-04-01
In order to improve the versatility, accuracy and integrity of the assembly process model of complex products, an assembly process model based on object-oriented hierarchical time Petri Nets is presented. A complete assembly process information model including assembly resources, assembly inspection, time, structure and flexible parts is established, and this model describes the static and dynamic data involved in the assembly process. Through the analysis of three-dimensional assembly process information, the assembly information is hierarchically divided from the whole, the local to the details and the subnet model of different levels of object-oriented Petri Nets is established. The communication problem between Petri subnets is solved by using message database, and it reduces the complexity of system modeling effectively. Finally, the modeling process is presented, and a five layer Petri Nets model is established based on the hoisting process of the engine compartment of a wheeled armored vehicle.
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we developmore » a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulted approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated in seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we developmore » a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulted approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated in seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.« less
Nguyen, Hai; Pérez, Alberto; Bermeo, Sherry; Simmerling, Carlos
2016-01-01
The Generalized Born (GB) implicit solvent model has undergone significant improvements in accuracy for modeling of proteins and small molecules. However, GB still remains a less widely explored option for nucleic acid simulations, in part because fast GB models are often unable to maintain stable nucleic acid structures, or they introduce structural bias in proteins, leading to difficulty in application of GB models in simulations of protein-nucleic acid complexes. Recently, GB-neck2 was developed to improve the behavior of protein simulations. In an effort to create a more accurate model for nucleic acids, a similar procedure to the development of GB-neck2 is described here for nucleic acids. The resulting parameter set significantly reduces absolute and relative energy error relative to Poisson Boltzmann for both nucleic acids and nucleic acid-protein complexes, when compared to its predecessor GB-neck model. This improvement in solvation energy calculation translates to increased structural stability for simulations of DNA and RNA duplexes, quadruplexes, and protein-nucleic acid complexes. The GB-neck2 model also enables successful folding of small DNA and RNA hairpins to near native structures as determined from comparison with experiment. The functional form and all required parameters are provided here and also implemented in the AMBER software. PMID:26574454
Recent advances in QM/MM free energy calculations using reference potentials☆
Duarte, Fernanda; Amrein, Beat A.; Blaha-Nelson, David; Kamerlin, Shina C.L.
2015-01-01
Background Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Scope of review Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. Major conclusions The use of physically-based simplifications has shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. General significance As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. PMID:25038480
Casey, F P; Baird, D; Feng, Q; Gutenkunst, R N; Waterfall, J J; Myers, C R; Brown, K S; Cerione, R A; Sethna, J P
2007-05-01
We apply the methods of optimal experimental design to a differential equation model for epidermal growth factor receptor signalling, trafficking and down-regulation. The model incorporates the role of a recently discovered protein complex made up of the E3 ubiquitin ligase, Cbl, the guanine exchange factor (GEF), Cool-1 (beta -Pix) and the Rho family G protein Cdc42. The complex has been suggested to be important in disrupting receptor down-regulation. We demonstrate that the model interactions can accurately reproduce the experimental observations, that they can be used to make predictions with accompanying uncertainties, and that we can apply ideas of optimal experimental design to suggest new experiments that reduce the uncertainty on unmeasurable components of the system.
NASA Astrophysics Data System (ADS)
Rubinstein, Alexander; Sabirianov, Renat
2011-03-01
Using a non-local electrostatic approach that incorporates the short-range structure of the contacting media, we evaluated the electrostatic contribution to the energy of the complex formation of two model proteins. In this study, we have demonstrated that the existence of an low-dielectric interfacial water layer at the protein-solvent interface reduces the charging energy of the proteins in the aqueous solvent, and consequently increases the electrostatic contribution to the protein binding (change in free energy upon the complex formation of two proteins). This is in contrast with the finding of the continuum electrostatic model, which suggests that electrostatic interactions are not strong enough to compensate for the unfavorable desolvation effects.
Wu, Jianlan; Tang, Zhoufei; Gong, Zhihao; Cao, Jianshu; Mukamel, Shaul
2015-04-02
The energy absorbed in a light-harvesting protein complex is often transferred collectively through aggregated chromophore clusters. For population evolution of chromophores, the time-integrated effective rate matrix allows us to construct quantum kinetic clusters quantitatively and determine the reduced cluster-cluster transfer rates systematically, thus defining a minimal model of energy-transfer kinetics. For Fenna-Matthews-Olson (FMO) and light-havrvesting complex II (LCHII) monomers, quantum Markovian kinetics of clusters can accurately reproduce the overall energy-transfer process in the long-time scale. The dominant energy-transfer pathways are identified in the picture of aggregated clusters. The chromophores distributed extensively in various clusters can assist a fast and long-range energy transfer.
NASA Technical Reports Server (NTRS)
Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John
2011-01-01
A method was developed of obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitude of relevance to NASA launcher designs. The base flow data was used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, where cold, non-reacting test data was first used for validation, followed by more complex reacting base flow validation.
2014-01-01
Background The Triatoma brasiliensis complex is a monophyletic group, comprising three species, one of which includes two subspecific taxa, distributed across 12 Brazilian states, in the caatinga and cerrado biomes. Members of the complex are diverse in terms of epidemiological importance, morphology, biology, ecology, and genetics. Triatoma b. brasiliensis is the most disease-relevant member of the complex in terms of epidemiology, extensive distribution, broad feeding preferences, broad ecological distribution, and high rates of infection with Trypanosoma cruzi; consequently, it is considered the principal vector of Chagas disease in northeastern Brazil. Methods We used ecological niche models to estimate potential distributions of all members of the complex, and evaluated the potential for suitable adjacent areas to be colonized; we also present first evaluations of potential for climate change-mediated distributional shifts. Models were developed using the GARP and Maxent algorithms. Results Models for three members of the complex (T. b. brasiliensis, N = 332; T. b. macromelasoma, N = 35; and T. juazeirensis, N = 78) had significant distributional predictivity; however, models for T. sherlocki and T. melanica, both with very small sample sizes (N = 7), did not yield predictions that performed better than random. Model projections onto future-climate scenarios indicated little broad-scale potential for change in the potential distribution of the complex through 2050. Conclusions This study suggests that T. b. brasiliensis is the member of the complex with the greatest distributional potential to colonize new areas: overall; however, the distribution of the complex appears relatively stable. These analyses offer key information to guide proactive monitoring and remediation activities to reduce risk of Chagas disease transmission. PMID:24886587
Guymon, Gary L.; Yen, Chung-Cheng
1990-01-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin is small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are over estimated by the two-point probability method.
NASA Astrophysics Data System (ADS)
Guymon, Gary L.; Yen, Chung-Cheng
1990-07-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin is small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are over estimated by the two-point probability method.
Pollard, Amelia Kate; Craig, Emma Louise; Chakrabarti, Lisa
2016-01-01
Mitochondrial function, in particular complex 1 of the electron transport chain (ETC), has been shown to decrease during normal ageing and in neurodegenerative disease. However, there is some debate concerning which area of the brain has the greatest complex 1 activity. It is important to identify the pattern of activity in order to be able to gauge the effect of age or disease related changes. We determined complex 1 activity spectrophotometrically in the cortex, brainstem and cerebellum of middle aged mice (70-71 weeks), a cerebellar ataxic neurodegeneration model (pcd5J) and young wild type controls. We share our updated protocol on the measurements of complex1 activity and find that mitochondrial fractions isolated from frozen tissues can be measured for robust activity. We show that complex 1 activity is clearly highest in the cortex when compared with brainstem and cerebellum (p<0.003). Cerebellum and brainstem mitochondria exhibit similar levels of complex 1 activity in wild type brains. In the aged brain we see similar levels of complex 1 activity in all three-brain regions. The specific activity of complex 1 measured in the aged cortex is significantly decreased when compared with controls (p<0.0001). Both the cerebellum and brainstem mitochondria also show significantly reduced activity with ageing (p<0.05). The mouse model of ataxia predictably has a lower complex 1 activity in the cerebellum, and although reductions are measured in the cortex and brain stem, the remaining activity is higher than in the aged brains. We present clear evidence that complex 1 activity decreases across the brain with age and much more specifically in the cerebellum of the pcd5j mouse. Mitochondrial impairment can be a region specific phenomenon in disease, but in ageing appears to affect the entire brain, abolishing the pattern of higher activity in cortical regions.
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under the climate and environmental changes. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are highly computationally demanding themselves for generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), is introduced that can efficiently provide a comprehensive assessment of global sensitivity using the Variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden, when applied to systems with extra-large number of input factors ( 100). We use a test function and a hydrological modelling case study to demonstrate the capability of VARS method in reducing problem dimensionality by identifying important vs unimportant input factors.
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
rpe v5: an emulator for reduced floating-point precision in large numerical simulations
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.
2017-06-01
This paper describes the rpe (reduced-precision emulator) library which has the capability to emulate the use of arbitrary reduced floating-point precision within large numerical models written in Fortran. The rpe software allows model developers to test how reduced floating-point precision affects the result of their simulations without having to make extensive code changes or port the model onto specialized hardware. The software can be used to identify parts of a program that are problematic for numerical precision and to guide changes to the program to allow a stronger reduction in precision.The development of rpe was motivated by the strong demand for more computing power. If numerical precision can be reduced for an application under consideration while still achieving results of acceptable quality, computational cost can be reduced, since a reduction in numerical precision may allow an increase in performance or a reduction in power consumption. For simulations with weather and climate models, savings due to a reduction in precision could be reinvested to allow model simulations at higher spatial resolution or complexity, or to increase the number of ensemble members to improve predictions. rpe was developed with a particular focus on the community of weather and climate modelling, but the software could be used with numerical simulations from other domains.
Refined generalized multiscale entropy analysis for physiological signals
NASA Astrophysics Data System (ADS)
Liu, Yunxiao; Lin, Youfang; Wang, Jing; Shang, Pengjian
2018-01-01
Multiscale entropy analysis has become a prevalent complexity measurement and been successfully applied in various fields. However, it only takes into account the information of mean values (first moment) in coarse-graining procedure. Then generalized multiscale entropy (MSEn) considering higher moments to coarse-grain a time series was proposed and MSEσ2 has been implemented. However, the MSEσ2 sometimes may yield an imprecise estimation of entropy or undefined entropy, and reduce statistical reliability of sample entropy estimation as scale factor increases. For this purpose, we developed the refined model, RMSEσ2, to improve MSEσ2. Simulations on both white noise and 1 / f noise show that RMSEσ2 provides higher entropy reliability and reduces the occurrence of undefined entropy, especially suitable for short time series. Besides, we discuss the effect on RMSEσ2 analysis from outliers, data loss and other concepts in signal processing. We apply the proposed model to evaluate the complexity of heartbeat interval time series derived from healthy young and elderly subjects, patients with congestive heart failure and patients with atrial fibrillation respectively, compared to several popular complexity metrics. The results demonstrate that RMSEσ2 measured complexity (a) decreases with aging and diseases, and (b) gives significant discrimination between different physiological/pathological states, which may facilitate clinical application.
A development framework for artificial intelligence based distributed operations support systems
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1990-01-01
Advanced automation is required to reduce costly human operations support requirements for complex space-based and ground control systems. Existing knowledge based technologies have been used successfully to automate individual operations tasks. Considerably less progress has been made in integrating and coordinating multiple operations applications for unified intelligent support systems. To fill this gap, SOCIAL, a tool set for developing Distributed Artificial Intelligence (DAI) systems is being constructed. SOCIAL consists of three primary language based components defining: models of interprocess communication across heterogeneous platforms; models for interprocess coordination, concurrency control, and fault management; and for accessing heterogeneous information resources. DAI applications subsystems, either new or existing, will access these distributed services non-intrusively, via high-level message-based protocols. SOCIAL will reduce the complexity of distributed communications, control, and integration, enabling developers to concentrate on the design and functionality of the target DAI system itself.
Reduced Moment-Based Models for Oxygen Precipitates and Dislocation Loops in Silicon
NASA Astrophysics Data System (ADS)
Trzynadlowski, Bart
The demand for ever smaller, higher-performance integrated circuits and more efficient, cost-effective solar cells continues to push the frontiers of process technology. Fabrication of silicon devices requires extremely precise control of impurities and crystallographic defects. Failure to do so not only reduces performance, efficiency, and yield, it threatens the very survival of commercial enterprises in today's fiercely competitive and price-sensitive global market. The presence of oxygen in silicon is an unavoidable consequence of the Czochralski process, which remains the most popular method for large-scale production of single-crystal silicon. Oxygen precipitates that form during thermal processing cause distortion of the surrounding silicon lattice and can lead to the formation of dislocation loops. Localized deformation caused by both of these defects introduces potential wells that trap diffusing impurities such as metal atoms, which is highly desirable if done far away from sensitive device regions. Unfortunately, dislocations also reduce the mechanical strength of silicon, which can cause wafer warpage and breakage. Engineers must negotiate this and other complex tradeoffs when designing fabrication processes. Accomplishing this in a complex, modern process involving a large number of thermal steps is impossible without the aid of computational models. In this dissertation, new models for oxygen precipitation and dislocation loop evolution are described. An oxygen model using kinetic rate equations to evolve the complete precipitate size distribution was developed first. This was then used to create a reduced model tracking only the moments of the size distribution. The moment-based model was found to run significantly faster than its full counterpart while accurately capturing the evolution of oxygen precipitates. The reduced model was fitted to experimental data and a sensitivity analysis was performed to assess the robustness of the results. Source code for both models is included. A moment-based model for dislocation loop formation from {311} defects in ion-implanted silicon was also developed and validated against experimental data. Ab initio density functional theory calculations of stacking faults and edge dislocations were performed to extract energies and elastic properties. This allowed the effect of applied stress on the evolution of {311} defects and dislocation loops to be investigated.
Ferro, Stefania; De Luca, Laura; Barreca, Maria Letizia; Iraci, Nunzio; De Grazia, Sara; Christ, Frauke; Witvrouw, Myriam; Debyser, Zeger; Chimirri, Alba
2009-01-22
A new model of HIV-1 integrase-Mg-DNA complex that is useful for docking experiments has been built. It was used to study the binding mode of integrase strand transfer inhibitor 1 (CHI-1043) and other fluorine analogues. Molecular modeling results prompted us to synthesize the designed derivatives which showed potent enzymatic inhibition at nanomolar concentration, high antiviral activity, and low toxicity. Microwave assisted organic synthesis (MAOS) was employed in several steps of the synthetic pathway, thus reducing reaction times and improving yields.
NASA Technical Reports Server (NTRS)
Shishir, Pandya; Chaderjian, Neal; Ahmad, Jsaim; Kwak, Dochan (Technical Monitor)
2001-01-01
Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty to accurately model complex flows, and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With successful reduction of computation time, the issue of accuracy, and appropriateness of turbulence models will become more tractable.
Fire and Heat Spreading Model Based on Cellular Automata Theory
NASA Astrophysics Data System (ADS)
Samartsev, A. A.; Rezchikov, A. F.; Kushnikov, V. A.; Ivashchenko, V. A.; Bogomolov, A. S.; Filimonyuk, L. Yu; Dolinina, O. N.; Kushnikov, O. V.; Shulga, T. E.; Tverdokhlebov, V. A.; Fominykh, D. S.
2018-05-01
The distinctive feature of the proposed fire and heat spreading model in premises is the reduction of the computational complexity due to the use of the theory of cellular automata with probability rules of behavior. The possibilities and prospects of using this model in practice are noted. The proposed model has a simple mechanism of integration with agent-based evacuation models. The joint use of these models could improve floor plans and reduce the time of evacuation from premises during fires.
Wang, Yuan; Dong, Jing; Wang, Yi; Wei, Wei; Song, Binbin; Shan, Zhongyan; Teng, Weiping; Chen, Jie
2016-10-01
Iodine is a significant micronutrient. Iodine deficiency (ID)-induced hypothyroxinemia and hypothyroidism during developmental period can cause cerebellar dysfunction. However, mechanisms are still unclear. Therefore, the present research aims to study effects of developmental hypothyroxinemia caused by mild ID and hypothyroidism caused by severe ID or methimazole (MMZ) on parallel fiber-Purkinje cell (PF-PC) synapses in filial cerebellum. Maternal hypothyroxinemia and hypothyroidism models were established in Wistar rats using ID diet and deionized water supplemented with different concentrations of potassium iodide or MMZ water. Birth weight and cerebellum weight were measured. We also examined PF-PC synapses using immunofluorescence, and western blot analysis was conducted to investigate the activity of Neurexin1/cerebellin1 (Cbln1)/glutamate receptor d2 (GluD2) tripartite complex. Our results showed that hypothyroxinemia and hypothyroidism decreased birth weight and cerebellum weight and reduced the PF-PC synapses on postnatal day (PN) 14 and PN21. Accordingly, the mean intensity of vesicular glutamate transporter (VGluT1) and Calbindin immunofluorescence was reduced in mild ID, severe ID, and MMZ groups. Moreover, maternal hypothyroxinemia and hypothyroidism reduced expression of Neurexin1/Cbln1/GluD2 tripartite complex. Our study supports the hypothesis that developmental hypothyroxinemia and hypothyroidism reduce PF-PC synapses, which may be attributed to the downregulation of Neurexin1/Cbln1/GluD2 tripartite complex.
Speedup computation of HD-sEMG signals using a motor unit-specific electrical source model.
Carriou, Vincent; Boudaoud, Sofiane; Laforet, Jeremy
2018-01-23
Nowadays, bio-reliable modeling of muscle contraction is becoming more accurate and complex. This increasing complexity induces a significant increase in computation time which prevents the possibility of using this model in certain applications and studies. Accordingly, the aim of this work is to significantly reduce the computation time of high-density surface electromyogram (HD-sEMG) generation. This will be done through a new model of motor unit (MU)-specific electrical source based on the fibers composing the MU. In order to assess the efficiency of this approach, we computed the normalized root mean square error (NRMSE) between several simulations on single generated MU action potential (MUAP) using the usual fiber electrical sources and the MU-specific electrical source. This NRMSE was computed for five different simulation sets wherein hundreds of MUAPs are generated and summed into HD-sEMG signals. The obtained results display less than 2% error on the generated signals compared to the same signals generated with fiber electrical sources. Moreover, the computation time of the HD-sEMG signal generation model is reduced to about 90% compared to the fiber electrical source model. Using this model with MU electrical sources, we can simulate HD-sEMG signals of a physiological muscle (hundreds of MU) in less than an hour on a classical workstation. Graphical Abstract Overview of the simulation of HD-sEMG signals using the fiber scale and the MU scale. Upscaling the electrical source to the MU scale reduces the computation time by 90% inducing only small deviation of the same simulated HD-sEMG signals.
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models, however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a workflow of fault modeling, which can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale or approximately parallel with the sections, we propose the fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, using the fault cutting algorithm can supplement the available fault points on the location where faults cut each other. Increasing fault points in poor sample areas can not only efficiently construct fault models, but also reduce manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures no matter whether the available geological data are sufficient or not. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
Regional climate models reduce biases of global models and project smaller European summer warming
NASA Astrophysics Data System (ADS)
Soerland, S.; Schar, C.; Lüthi, D.; Kjellstrom, E.
2017-12-01
The assessment of regional climate change and the associated planning of adaptation and response strategies are often based on complex model chains. Typically, these model chains employ global and regional climate models (GCMs and RCMs), as well as one or several impact models. It is a common belief that the errors in such model chains behave approximately additive, thus the uncertainty should increase with each modeling step. If this hypothesis were true, the application of RCMs would not lead to any intrinsic improvement (beyond higher-resolution detail) of the GCM results. Here, we investigate the bias patterns (offset during the historical period against observations) and climate change signals of two RCMs that have downscaled a comprehensive set of GCMs following the EURO-CORDEX framework. The two RCMs reduce the biases of the driving GCMs, reduce the spread and modify the amplitude of the GCM projected climate change signal. The GCM projected summer warming at the end of the century is substantially reduced by both RCMs. These results are important, as the projected summer warming and its likely impact on the water cycle are among the most serious concerns regarding European climate change.
Teaching to Emerge: Toward a Bottom-Up Pedagogy
ERIC Educational Resources Information Center
Brailas, Alexios; Koskinas, Konstantinos; Alexias, George
2017-01-01
This paper focuses on the conceptual model of an academic course inspired by complexity theory. In the proposed conceptual model, the aim of teaching is to form a learning organization: a knowledge community with emergent properties that cannot be reduced to any linear combination of the properties of its parts. In this approach, the learning of…
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
Estimation of parameters of random effects models from samples collected via complex multistage designs is considered. One way to reduce estimation bias due to unequal probabilities of selection is to incorporate sampling weights. Many researchers have been proposed various weighting methods (Korn, & Graubard, 2003; Pfeffermann, Skinner,…
Symmetrical group theory for mathematical complexity reduction of digital holograms
NASA Astrophysics Data System (ADS)
Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.
2017-10-01
This work presents the use of mathematical group theory through an algorithm to reduce the multiplicative computational complexity in the process of creating digital holograms. An object is considered as a set of point sources using mathematical symmetry properties of both the core in the Fresnel integral and the image, where the image is modeled using group theory. This algorithm has multiplicative complexity equal to zero and an additive complexity ( k - 1) × N for the case of sparse matrices and binary images, where k is the number of pixels other than zero and N is the total points in the image.
Development of a Comprehensive Digital Avionics Curriculum for the Aeronautical Engineer
2006-03-01
able to analyze and design aircraft and missile guidance and control systems, including feedback stabilization schemes and stochastic processes, using ...Uncertainty modeling for robust control; Robust closed-loop stability and performance; Robust H- infinity control; Robustness check using mu-analysis...Controlled feedback (reduces noise) 3. Statistical group response (reduce pressure toward conformity) When used as a tool to study a complex problem
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.
2015-10-01
In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
NASA Astrophysics Data System (ADS)
Strassmann, Kuno M.; Joos, Fortunat
2018-05-01
The Bern Simple Climate Model (BernSCM) is a free open-source re-implementation of a reduced-form carbon cycle-climate model which has been used widely in previous scientific work and IPCC assessments. BernSCM represents the carbon cycle and climate system with a small set of equations for the heat and carbon budget, the parametrization of major nonlinearities, and the substitution of complex component systems with impulse response functions (IRFs). The IRF approach allows cost-efficient yet accurate substitution of detailed parent models of climate system components with near-linear behavior. Illustrative simulations of scenarios from previous multimodel studies show that BernSCM is broadly representative of the range of the climate-carbon cycle response simulated by more complex and detailed models. Model code (in Fortran) was written from scratch with transparency and extensibility in mind, and is provided open source. BernSCM makes scientifically sound carbon cycle-climate modeling available for many applications. Supporting up to decadal time steps with high accuracy, it is suitable for studies with high computational load and for coupling with integrated assessment models (IAMs), for example. Further applications include climate risk assessment in a business, public, or educational context and the estimation of CO2 and climate benefits of emission mitigation options.
NASA Astrophysics Data System (ADS)
Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; Slater, Richard D.; Sarmiento, Jorge L.; Dufour, Carolina O.; de Souza, Gregory F.; Bianchi, Daniele; Claret, Mariona; Rodgers, Keith B.; Marvasti, Seyedehsafoura Sedigh
2015-12-01
Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers, that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of-the art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded in the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. These results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate "sub-ecosystem-scale" parameterizations.
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Konstantinos N.; Azuma, Takehiro; Ito, Yuta; Nishimura, Jun; Papadoudis, Stratos Kovalkov
2018-02-01
In recent years the complex Langevin method (CLM) has proven a powerful method in studying statistical systems which suffer from the sign problem. Here we show that it can also be applied to an important problem concerning why we live in four-dimensional spacetime. Our target system is the type IIB matrix model, which is conjectured to be a nonperturbative definition of type IIB superstring theory in ten dimensions. The fermion determinant of the model becomes complex upon Euclideanization, which causes a severe sign problem in its Monte Carlo studies. It is speculated that the phase of the fermion determinant actually induces the spontaneous breaking of the SO(10) rotational symmetry, which has direct consequences on the aforementioned question. In this paper, we apply the CLM to the 6D version of the type IIB matrix model and show clear evidence that the SO(6) symmetry is broken down to SO(3). Our results are consistent with those obtained previously by the Gaussian expansion method.
Zhang, Liang; Zhang, Song; Maezawa, Izumi; Trushin, Sergey; Minhas, Paras; Pinto, Matthew; Jin, Lee-Way; Prasain, Keshar; Nguyen, Thi D.T.; Yamazaki, Yu; Kanekiyo, Takahisa; Bu, Guojun; Gateno, Benjamin; Chang, Kyeong-Ok; Nath, Karl A.; Nemutlu, Emirhan; Dzeja, Petras; Pang, Yuan-Ping; Hua, Duy H.; Trushina, Eugenia
2015-01-01
Development of therapeutic strategies to prevent Alzheimer's disease (AD) is of great importance. We show that mild inhibition of mitochondrial complex I with small molecule CP2 reduces levels of amyloid beta and phospho-Tau and averts cognitive decline in three animal models of familial AD. Low-mass molecular dynamics simulations and biochemical studies confirmed that CP2 competes with flavin mononucleotide for binding to the redox center of complex I leading to elevated AMP/ATP ratio and activation of AMP-activated protein kinase in neurons and mouse brain without inducing oxidative damage or inflammation. Furthermore, modulation of complex I activity augmented mitochondrial bioenergetics increasing coupling efficiency of respiratory chain and neuronal resistance to stress. Concomitant reduction of glycogen synthase kinase 3β activity and restoration of axonal trafficking resulted in elevated levels of neurotrophic factors and synaptic proteins in adult AD mice. Our results suggest that metabolic reprogramming induced by modulation of mitochondrial complex I activity represents promising therapeutic strategy for AD. PMID:26086035
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model; is discussed. Selected results between high and low order models are compared. The LQR control algorithms can be programmed on digital computer. This computer will control the engine simulation over the desired flight envelope.
Entropy, complexity, and Markov diagrams for random walk cancer models
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-01-01
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Liebler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential. PMID:25523357
NASA Technical Reports Server (NTRS)
White, Allan L.; Palumbo, Daniel L.
1991-01-01
Semi-Markov processes have proved to be an effective and convenient tool to construct models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a model and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement and the error bound easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.
Economic and environmental optimization of a multi-site utility network for an industrial complex.
Kim, Sang Hun; Yoon, Sung-Geun; Chae, Song Hwa; Park, Sunwon
2010-01-01
Most chemical companies consume a lot of steam, water and electrical resources in the production process. Given recent record fuel costs, utility networks must be optimized to reduce the overall cost of production. Environmental concerns must also be considered when preparing modifications to satisfy the requirements for industrial utilities, since wastes discharged from the utility networks are restricted by environmental regulations. Construction of Eco-Industrial Parks (EIPs) has drawn attention as a promising approach for retrofitting existing industrial parks to improve energy efficiency. The optimization of the utility network within an industrial complex is one of the most important undertakings to minimize energy consumption and waste loads in the EIP. In this work, a systematic approach to optimize the utility network of an industrial complex is presented. An important issue in the optimization of a utility network is the desire of the companies to achieve high profits while complying with the environmental regulations. Therefore, the proposed optimization was performed with consideration of both economic and environmental factors. The proposed approach consists of unit modeling using thermodynamic principles, mass and energy balances, development of a multi-period Mixed Integer Linear Programming (MILP) model for the integration of utility systems in an industrial complex, and an economic/environmental analysis of the results. This approach is applied to the Yeosu Industrial Complex, considering seasonal utility demands. The results show that both the total utility cost and waste load are reduced by optimizing the utility network of an industrial complex. 2009 Elsevier Ltd. All rights reserved.
Influence of impurities on the high temperature conductivity of SrTiO3
NASA Astrophysics Data System (ADS)
Bowes, Preston C.; Baker, Jonathon N.; Harris, Joshua S.; Behrhorst, Brian D.; Irving, Douglas L.
2018-01-01
In studies of high temperature electrical conductivity (HiTEC) of dielectrics, the impurity in the highest concentration is assumed to form a single defect that controls HiTEC. However, carrier concentrations are typically at or below the level of background impurities, and all impurities may complex with native defects. Canonical defect models ignore complex formation and lump defects from multiple impurities into a single effective defect to reduce the number of associated reactions. To evaluate the importance of background impurities and defect complexes on HiTEC, a grand canonical defect model was developed with input from density functional theory calculations using hybrid exchange correlation functionals. The influence of common background impurities and first nearest neighbor complexes with oxygen vacancies (vO) was studied for three doping cases: nominally undoped, donor doped, and acceptor doped SrTiO3. In each case, conductivity depended on the ensemble of impurity defects simulated with the extent of the dependence governed by the character of the dominant impurity and its tendency to complex with vO. Agreement between simulated and measured conductivity profiles as a function of temperature and oxygen partial pressure improved significantly when background impurities were included in the nominally undoped case. Effects of the impurities simulated were reduced in the Nb and Al doped cases as both elements did not form complexes and were present in concentrations well exceeding all other active impurities. The influence of individual impurities on HiTEC in SrTiO3 was isolated and discussed and motivates further experiments on singly doped SrTiO3.
Balancing model complexity and measurements in hydrology
NASA Astrophysics Data System (ADS)
Van De Giesen, N.; Schoups, G.; Weijs, S. V.
2012-12-01
The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model outcomes, thereby preventing the most obvious results of over-fitting. Thirdly, dependence within and between time series poses an additional analytical problem. Finally, there are arguments to be made that the often discussed "equifinality" in hydrological models is simply a different manifestation of the lack of complexity control. In turn, this points toward a general idea, which is actually quite popular in sciences other than hydrology, that additional data gathering is a good way to increase the information content of our descriptions of hydrological reality.
Reduced-Order Structure-Preserving Model for Parallel-Connected Three-Phase Grid-Tied Inverters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Purba, Victor; Jafarpour, Saber
Next-generation power networks will contain large numbers of grid-connected inverters satisfying a significant fraction of system load. Since each inverter model has a relatively large number of dynamic states, it is impractical to analyze complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, LCL filter at the point of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loopmore » for grid synchronization. We outline a structure-preserving reduced-order inverter model with lumped parameters for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, current controller, and PLL. We show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit and this equivalent model has the same number of dynamical states as any individual inverter in the system. Numerical simulations validate the reduced-order model.« less
NASA Astrophysics Data System (ADS)
Leong, W. K.; Lai, S. H.
2017-06-01
Due to the effects of climate change and the increasing demand on water, sustainable development in term of water resources management has become a major challenge. In this context, the application of simulation models is useful to duel with the uncertainty and complexity of water system by providing stakeholders with the best solution. This paper outlines an integrated management planning network is developed based on Water Evaluation and Planning (WEAP) to evaluate current and future water management system of Langat River Basin, Malaysia under various scenarios. The WEAP model is known as an integrated decision support system investigate major stresses on demand and supply in terms of water availability in catchment scale. In fact, WEAP is applicable to simulate complex systems including various sectors within a single catchment or transboundary river system. To construct the model, by taking account of the Langat catchment and the corresponding demand points, we defined the hydrological model into 10 sub-hydrological catchments and 17 demand points included the export of treated water to the major cities outside the catchment. The model is calibrated and verified by several quantitative statistics (coefficient of determination, R2; Nash-Sutcliffe efficiency, NSE and Percent bias, PBIAS). The trend of supply and demand in the catchment is evaluated under three scenarios to 2050, 1: Population growth rate, 2: Demand side management (DSM) and 3: Combination of DSM and reduce non-revenue water (NRW). Results show that by reducing NRW and proper DSM, unmet demand able to reduce significantly.
Porta, Alberto; Faes, Luca; Bari, Vlasta; Marchi, Andrea; Bassani, Tito; Nollo, Giandomenico; Perseguini, Natália Maria; Milan, Juliana; Minatel, Vinícius; Borghi-Silva, Audrey; Takahashi, Anielle C. M.; Catai, Aparecida M.
2014-01-01
The proposed approach evaluates complexity of the cardiovascular control and causality among cardiovascular regulatory mechanisms from spontaneous variability of heart period (HP), systolic arterial pressure (SAP) and respiration (RESP). It relies on construction of a multivariate embedding space, optimization of the embedding dimension and a procedure allowing the selection of the components most suitable to form the multivariate embedding space. Moreover, it allows the comparison between linear model-based (MB) and nonlinear model-free (MF) techniques and between MF approaches exploiting local predictability (LP) and conditional entropy (CE). The framework was applied to study age-related modifications of complexity and causality in healthy humans in supine resting (REST) and during standing (STAND). We found that: 1) MF approaches are more efficient than the MB method when nonlinear components are present, while the reverse situation holds in presence of high dimensional embedding spaces; 2) the CE method is the least powerful in detecting age-related trends; 3) the association of HP complexity on age suggests an impairment of cardiac regulation and response to STAND; 4) the relation of SAP complexity on age indicates a gradual increase of sympathetic activity and a reduced responsiveness of vasomotor control to STAND; 5) the association from SAP to HP on age during STAND reveals a progressive inefficiency of baroreflex; 6) the reduced connection from HP to SAP with age might be linked to the progressive exploitation of Frank-Starling mechanism at REST and to the progressive increase of peripheral resistances during STAND; 7) at REST the diminished association from RESP to HP with age suggests a vagal withdrawal and a gradual uncoupling between respiratory activity and heart; 8) the weakened connection from RESP to SAP with age might be related to the progressive increase of left ventricular thickness and vascular stiffness and to the gradual decrease of respiratory sinus arrhythmia. PMID:24586796
NASA Technical Reports Server (NTRS)
Hops, J. M.; Sherif, J. S.
1994-01-01
A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of software maintenance expected cost, long before software is delivered to users or customers. It has been estimated that, on the average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point to aid in alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that noe new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
NASA Astrophysics Data System (ADS)
Xu, M.; van Overloop, P. J.; van de Giesen, N. C.
2011-02-01
Model predictive control (MPC) of open channel flow is becoming an important tool in water management. The complexity of the prediction model has a large influence on the MPC application in terms of control effectiveness and computational efficiency. The Saint-Venant equations, called SV model in this paper, and the Integrator Delay (ID) model are either accurate but computationally costly, or simple but restricted to allowed flow changes. In this paper, a reduced Saint-Venant (RSV) model is developed through a model reduction technique, Proper Orthogonal Decomposition (POD), on the SV equations. The RSV model keeps the main flow dynamics and functions over a large flow range but is easier to implement in MPC. In the test case of a modeled canal reach, the number of states and disturbances in the RSV model is about 45 and 16 times less than the SV model, respectively. The computational time of MPC with the RSV model is significantly reduced, while the controller remains effective. Thus, the RSV model is a promising means to balance the control effectiveness and computational efficiency.
2014-01-01
Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on average 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522
Active Learning to Understand Infectious Disease Models and Improve Policy Making
Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel
2014-01-01
Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings. PMID:24743387
Active learning to understand infectious disease models and improve policy making.
Willem, Lander; Stijven, Sean; Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel
2014-04-01
Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings.
Tamaki, Yusuke; Morimoto, Tatsuki; Koike, Kazuhide; Ishitani, Osamu
2012-09-25
Previously undescribed supramolecules constructed with various ratios of two kinds of Ru(II) complexes-a photosensitizer and a catalyst-were synthesized. These complexes can photocatalyze the reduction of CO(2) to formic acid with high selectivity and durability using a wide range of wavelengths of visible light and NADH model compounds as electron donors in a mixed solution of dimethylformamide-triethanolamine. Using a higher ratio of the photosensitizer unit to the catalyst unit led to a higher yield of formic acid. In particular, of the reported photocatalysts, a trinuclear complex with two photosensitizer units and one catalyst unit photocatalyzed CO(2) reduction (Φ(HCOOH) = 0.061, TON(HCOOH) = 671) with the fastest reaction rate (TOF(HCOOH) = 11.6 min(-1)). On the other hand, photocatalyses of a mixed system containing two kinds of model mononuclear Ru(II) complexes, and supramolecules with a higher ratio of the catalyst unit were much less efficient, and black oligomers and polymers were produced from the Ru complexes during photocatalytic reactions, which reduced the yield of formic acid. The photocatalytic formation of formic acid using the supramolecules described herein proceeds via two sequential processes: the photochemical reduction of the photosensitizer unit by NADH model compounds and intramolecular electron transfer to the catalyst unit.
NASA Astrophysics Data System (ADS)
Kim, Jinyong; Luo, Gang; Wang, Chao-Yang
2017-10-01
3D fine-mesh flow-fields recently developed by Toyota Mirai improved water management and mass transport in proton exchange membrane (PEM) fuel cell stacks, suggesting their potential value for robust and high-power PEM fuel cell stack performance. In such complex flow-fields, Forchheimer's inertial effect is dominant at high current density. In this work, a two-phase flow model of 3D complex flow-fields of PEMFCs is developed by accounting for Forchheimer's inertial effect, for the first time, to elucidate the underlying mechanism of liquid water behavior and mass transport inside 3D complex flow-fields and their adjacent gas diffusion layers (GDL). It is found that Forchheimer's inertial effect enhances liquid water removal from flow-fields and adds additional flow resistance around baffles, which improves interfacial liquid water and mass transport. As a result, substantial improvements in high current density cell performance and operational stability are expected in PEMFCs with 3D complex flow-fields, compared to PEMFCs with conventional flow-fields. Higher current density operation required to further reduce PEMFC stack cost per kW in the future will necessitate optimizing complex flow-field designs using the present model, in order to efficiently remove a large amount of product water and hence minimize the mass transport voltage loss.
Simeone, Kristina A; Matthews, Stephanie A; Samson, Kaeli K; Simeone, Timothy A
2014-01-01
Mitochondria actively participate in neurotransmission by providing energy (ATP) and maintaining normative concentrations of reactive oxygen species (ROS) in both presynaptic and postsynaptic elements. In human and animal epilepsies, ATP-producing respiratory rates driven by mitochondrial respiratory complex (MRC) I are reduced, antioxidant systems are attenuated and oxidative damage is increased. We report that MRCI-driven respiration and functional uncoupling (an inducible antioxidant mechanism) are reduced and levels of H2O2 are elevated in mitochondria isolated from KO mice. Experimental impairment of MRCI in WT hippocampal slices via rotenone reduces paired-pulse ratios (PPRs) at mossy fiber-CA3 synapses (resembling KO PPRs), and exacerbates seizure-like events in vitro. Daily treatment with AATP [a combination therapy composed of ascorbic acid (AA), alpha-tocopherol (T), sodium pyruvate (P) designed to synergistically target mitochondrial impairments] improved mitochondrial functions, mossy fiber PPRs, and reduced seizure burden index (SBI) scores and seizure incidence in KO mice. AATP pretreatment reduced severity of KA-induced seizures resulting in 100% protection from the severe tonic-clonic seizures in WT mice. These data suggest that restoration of bioenergetic homeostasis in the brain may represent a viable anti-seizure target for temporal lobe epilepsy. Copyright © 2013 Elsevier Inc. All rights reserved.
Langford-Smith, Kia J; Sandiford, Zara; Langford-Smith, Alex; Wilkinson, Fiona L; Jones, Simon A; Wraith, J Ed; Wynn, Robert F; Bigger, Brian W
2013-01-01
Non-myeloablative allogeneic haematopoietic stem cell transplantation (HSCT) is rarely achievable clinically, except where donor cells have selective advantages. Murine non-myeloablative conditioning regimens have limited clinical success, partly through use of clinically unachievable cell doses or strain combinations permitting allograft acceptance using immunosuppression alone. We found that reducing busulfan conditioning in murine syngeneic HSCT, increases bone marrow (BM):blood SDF-1 ratio and total donor cells homing to BM, but reduces the proportion of donor cells engrafting. Despite this, syngeneic engraftment is achievable with non-myeloablative busulfan (25 mg/kg) and higher cell doses induce increased chimerism. Therefore we investigated regimens promoting initial donor cell engraftment in the major histocompatibility complex barrier mismatched CBA to C57BL/6 allo-transplant model. This requires full myeloablation and immunosuppression with non-depleting anti-CD4/CD8 blocking antibodies to achieve engraftment of low cell doses, and rejects with reduced intensity conditioning (≤75 mg/kg busulfan). We compared increased antibody treatment, G-CSF, niche disruption and high cell dose, using reduced intensity busulfan and CD4/8 blockade in this model. Most treatments increased initial donor engraftment, but only addition of co-stimulatory blockade permitted long-term engraftment with reduced intensity or non-myeloablative conditioning, suggesting that signal 1 and 2 T-cell blockade is more important than early BM niche engraftment for transplant success.
Sterner, Eric; Masuko, Sayaka; Li, Guoyun; Li, Lingyun; Green, Dixy E.; Otto, Nigel J.; Xu, Yongmei; DeAngelis, Paul L.; Liu, Jian; Dordick, Jonathan S.; Linhardt, Robert J.
2014-01-01
Four well-defined heparan sulfate (HS) block copolymers containing S-domains (high sulfo group content) placed adjacent to N-domains (low sulfo group content) were chemoenzymatically synthesized and characterized. The domain lengths in these HS block co-polymers were ∼40 saccharide units. Microtiter 96-well and three-dimensional cell-based microarray assays utilizing murine immortalized bone marrow (BaF3) cells were developed to evaluate the activity of these HS block co-polymers. Each recombinant BaF3 cell line expresses only a single type of fibroblast growth factor receptor (FGFR) but produces neither HS nor fibroblast growth factors (FGFs). In the presence of different FGFs, BaF3 cell proliferation showed clear differences for the four HS block co-polymers examined. These data were used to examine the two proposed signaling models, the symmetric FGF2-HS2-FGFR2 ternary complex model and the asymmetric FGF2-HS1-FGFR2 ternary complex model. In the symmetric FGF2-HS2-FGFR2 model, two acidic HS chains bind in a basic canyon located on the top face of the FGF2-FGFR2 protein complex. In this model the S-domains at the non-reducing ends of the two HS proteoglycan chains are proposed to interact with the FGF2-FGFR2 protein complex. In contrast, in the asymmetric FGF2-HS1-FGFR2 model, a single HS chain interacts with the FGF2-FGFR2 protein complex through a single S-domain that can be located at any position within an HS chain. Our data comparing a series of synthetically prepared HS block copolymers support a preference for the symmetric FGF2-HS2-FGFR2 ternary complex model. PMID:24563485
Reduced-Order Modeling for Optimization and Control of Complex Flows
2010-11-30
Statistics Colloquium, Auburn, AL, (January 2009). 16. University of Pittsburgh, Mathematics Colloquium, Pittsburgh, PA, (February 2009). 17. Goethe ...Center for Scientific Computing, Goethe University Frankfurt am Main, Ger- many, (June 2009). 18. Air Force Institute of Technology, Wright-Patterson
Condition-based diagnosis of mechatronic systems using a fractional calculus approach
NASA Astrophysics Data System (ADS)
Gutiérrez-Carvajal, Ricardo Enrique; Flávio de Melo, Leonimer; Maurício Rosário, João; Tenreiro Machado, J. A.
2016-07-01
While fractional calculus (FC) is as old as integer calculus, its application has been mainly restricted to mathematics. However, many real systems are better described using FC equations than with integer models. FC is a suitable tool for describing systems characterised by their fractal nature, long-term memory and chaotic behaviour. It is a promising methodology for failure analysis and modelling, since the behaviour of a failing system depends on factors that increase the model's complexity. This paper explores the proficiency of FC in modelling complex behaviour by tuning only a few parameters. This work proposes a novel two-step strategy for diagnosis, first modelling common failure conditions and, second, by comparing these models with real machine signals and using the difference to feed a computational classifier. Our proposal is validated using an electrical motor coupled with a mechanical gear reducer.
2.5D complex resistivity modeling and inversion using unstructured grids
NASA Astrophysics Data System (ADS)
Xu, Kaijun; Sun, Jie
2016-04-01
The characteristic of complex resistivity on rock and ore has been recognized by people for a long time. Generally we have used the Cole-Cole Model(CCM) to describe complex resistivity. It has been proved that the electrical anomaly of geologic body can be quantitative estimated by CCM parameters such as direct resistivity(ρ0), chargeability(m), time constant(τ) and frequency dependence(c). Thus it is very important to obtain the complex parameters of geologic body. It is difficult to approximate complex structures and terrain using traditional rectangular grid. In order to enhance the numerical accuracy and rationality of modeling and inversion, we use an adaptive finite-element algorithm for forward modeling of the frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm in the inversion of 2.5D complex resistivity. An adaptive finite element method is applied for solving the 2.5D complex resistivity forward modeling of horizontal electric dipole source. First of all, the CCM is introduced into the Maxwell's equations to calculate the complex resistivity electromagnetic fields. Next, the pseudo delta function is used to distribute electric dipole source. Then the electromagnetic fields can be expressed in terms of the primary fields caused by layered structure and the secondary fields caused by inhomogeneities anomalous conductivity. At last, we calculated the electromagnetic fields response of complex geoelectric structures such as anticline, syncline, fault. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity invertion is implemented based the conjugate gradient algorithm.The conjugate gradient algorithm doesn't need to compute the sensitivity matrix but directly computes the sensitivity matrix or its transpose multiplying vector. In addition, the inversion target zones are segmented with fine grids and the background zones are segmented with big grid, the method can reduce the grid amounts of inversion, it is very helpful to improve the computational efficiency. The inversion results verify the validity and stability of conjugate gradient inversion algorithm. The results of theoretical calculation indicate that the modeling and inversion of 2.5D complex resistivity using unstructured grids are feasible. Using unstructured grids can improve the accuracy of modeling, but the large number of grids inversion is extremely time-consuming, so the parallel computation for the inversion is necessary. Acknowledgments: We thank to the support of the National Natural Science Foundation of China(41304094).
CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated together to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can signicantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
Proton beam therapy and accountable care: the challenges ahead.
Elnahal, Shereef M; Kerstiens, John; Helsper, Richard S; Zietman, Anthony L; Johnstone, Peter A S
2013-03-15
Proton beam therapy (PBT) centers have drawn increasing public scrutiny for their high cost. The behavior of such facilities is likely to change under the Affordable Care Act. We modeled how accountable care reform may affect the financial standing of PBT centers and their incentives to treat complex patient cases. We used operational data and publicly listed Medicare rates to model the relationship between financial metrics for PBT center performance and case mix (defined as the percentage of complex cases, such as pediatric central nervous system tumors). Financial metrics included total daily revenues and debt coverage (daily revenues - daily debt payments). Fee-for-service (FFS) and accountable care (ACO) reimbursement scenarios were modeled. Sensitivity analyses were performed around the room time required to treat noncomplex cases: simple (30 minutes), prostate (24 minutes), and short prostate (15 minutes). Sensitivity analyses were also performed for total machine operating time (14, 16, and 18 h/d). Reimbursement under ACOs could reduce daily revenues in PBT centers by up to 32%. The incremental revenue gained by replacing 1 complex case with noncomplex cases was lowest for simple cases and highest for short prostate cases. ACO rates reduced this incremental incentive by 53.2% for simple cases and 41.7% for short prostate cases. To cover daily debt payments after ACO rates were imposed, 26% fewer complex patients were allowable at varying capital costs and interest rates. Only facilities with total machine operating times of 18 hours per day would cover debt payments in all scenarios. Debt-financed PBT centers will face steep challenges to remain financially viable after ACO implementation. Paradoxically, reduced reimbursement for noncomplex cases will require PBT centers to treat more such cases over cases for which PBT has demonstrated superior outcomes. Relative losses will be highest for those facilities focused primarily on treating noncomplex cases. Copyright © 2013 Elsevier Inc. All rights reserved.
Proton Beam Therapy and Accountable Care: The Challenges Ahead
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elnahal, Shereef M., E-mail: selnahal@partners.org; Kerstiens, John; Helsper, Richard S.
2013-03-15
Purpose: Proton beam therapy (PBT) centers have drawn increasing public scrutiny for their high cost. The behavior of such facilities is likely to change under the Affordable Care Act. We modeled how accountable care reform may affect the financial standing of PBT centers and their incentives to treat complex patient cases. Methods and Materials: We used operational data and publicly listed Medicare rates to model the relationship between financial metrics for PBT center performance and case mix (defined as the percentage of complex cases, such as pediatric central nervous system tumors). Financial metrics included total daily revenues and debt coveragemore » (daily revenues − daily debt payments). Fee-for-service (FFS) and accountable care (ACO) reimbursement scenarios were modeled. Sensitivity analyses were performed around the room time required to treat noncomplex cases: simple (30 minutes), prostate (24 minutes), and short prostate (15 minutes). Sensitivity analyses were also performed for total machine operating time (14, 16, and 18 h/d). Results: Reimbursement under ACOs could reduce daily revenues in PBT centers by up to 32%. The incremental revenue gained by replacing 1 complex case with noncomplex cases was lowest for simple cases and highest for short prostate cases. ACO rates reduced this incremental incentive by 53.2% for simple cases and 41.7% for short prostate cases. To cover daily debt payments after ACO rates were imposed, 26% fewer complex patients were allowable at varying capital costs and interest rates. Only facilities with total machine operating times of 18 hours per day would cover debt payments in all scenarios. Conclusions: Debt-financed PBT centers will face steep challenges to remain financially viable after ACO implementation. Paradoxically, reduced reimbursement for noncomplex cases will require PBT centers to treat more such cases over cases for which PBT has demonstrated superior outcomes. Relative losses will be highest for those facilities focused primarily on treating noncomplex cases.« less
Quaternionic Kähler Detour Complexes and {mathcal{N} = 2} Supersymmetric Black Holes
NASA Astrophysics Data System (ADS)
Cherney, D.; Latini, E.; Waldron, A.
2011-03-01
We study a class of supersymmetric spinning particle models derived from the radial quantization of stationary, spherically symmetric black holes of four dimensional {{mathcal N} = 2} supergravities. By virtue of the c-map, these spinning particles move in quaternionic Kähler manifolds. Their spinning degrees of freedom describe mini-superspace-reduced supergravity fermions. We quantize these models using BRST detour complex technology. The construction of a nilpotent BRST charge is achieved by using local (worldline) supersymmetry ghosts to generate special holonomy transformations. (An interesting byproduct of the construction is a novel Dirac operator on the superghost extended Hilbert space.) The resulting quantized models are gauge invariant field theories with fields equaling sections of special quaternionic vector bundles. They underly and generalize the quaternionic version of Dolbeault cohomology discovered by Baston. In fact, Baston’s complex is related to the BPS sector of the models we write down. Our results rely on a calculus of operators on quaternionic Kähler manifolds that follows from BRST machinery, and although directly motivated by black hole physics, can be broadly applied to any model relying on quaternionic geometry.
FTM-West : fuel treatment market model for U.S. West
Peter J. Ince; Andrew Kramp; Henry Spelter; Ken Skog; Dennis Dykstra
2006-01-01
This paper presents FTMâWest, a partial market equilibrium model designed to project future wood market impacts of significantly expanded fuel treatment programs that could remove trees to reduce fire hazard on forestlands in the U.S. West. FTMâWest was designed to account for structural complexities in marketing and utilization that arise from unconventional size...
Feinstein, Daniel T.; Thomas, Mary Ann
2009-01-01
This report describes a modeling approach for studying how redox conditions evolve under the influence of a complex ground-water flow field. The distribution of redox conditions within a flow system is of interest because of the intrinsic susceptibility of an aquifer to redox-sensitive, naturally occurring contaminants - such as arsenic - as well as anthropogenic contaminants - such as chlorinated solvents. The MODFLOW-MT3D-RT3D suite of code was applied to a glacial valley-fill aquifer to demonstrate a method for testing the interaction of flow patterns, sources of reactive organic carbon, and availability of electron acceptors in controlling redox conditions. Modeling results show how three hypothetical distributions of organic carbon influence the development of redox conditions in a water-supply aquifer. The distribution of strongly reduced water depends on the balance between the rate of redox reactions and the capability of different parts of the flow system to transmit oxygenated water. The method can take account of changes in the flow system induced by pumping that result in a new distribution of reduced water.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Heng; Chen, Xingyuan; Ye, Ming
Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level ofmore » the hierarchy using geostatistical tools. The contribution of uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport in model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.« less
Coding response to a case-mix measurement system based on multiple diagnoses.
Preyra, Colin
2004-08-01
To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post.
NASA Technical Reports Server (NTRS)
Hyland, D. C.; Bernstein, D. S.
1987-01-01
The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.
Cyclosporine A at reperfusion fails to reduce infarct size in the in vivo rat heart.
De Paulis, Damien; Chiari, Pascal; Teixeira, Geoffrey; Couture-Lepetit, Elisabeth; Abrial, Maryline; Argaud, Laurent; Gharib, Abdallah; Ovize, Michel
2013-09-01
We examined the effects on infarct size and mitochondrial function of ischemic (Isch), cyclosporine A (CsA) and isoflurane (Iso) preconditioning and postconditioning in the in vivo rat model. Anesthetized open-chest rats underwent 30 min of ischemia followed by either 120 min (protocol 1: infarct size assessment) or 15 min of reperfusion (protocol 2: assessment of mitochondrial function). All treatments administered before the 30-min ischemia (Pre-Isch, Pre-CsA, Pre-Iso) significantly reduced infarct as compared to control. In contrast, only Post-Iso significantly reduced infarct size, while Post-Isch and Post-CsA had no significant protective effect. As for the postconditioning-like interventions, the mitochondrial calcium retention capacity significantly increased only in the Post-Iso group (+58 % vs control) after succinate activation. Only Post-Iso increased state 3 (+177 and +62 %, for G/M and succinate, respectively) when compared to control. Also, Post-Iso reduced the hydrogen peroxide (H2O2) production (-46 % vs control) after complex I activation. This study suggests that isoflurane, but not cyclosporine A, can prevent lethal reperfusion injury in this in vivo rat model. This might be related to the need for a combined effect on cyclophilin D and complex I during the first minutes of reperfusion.
NASA Astrophysics Data System (ADS)
Kissinger, Alexander; Noack, Vera; Knopf, Stefan; Konrad, Wilfried; Scheer, Dirk; Class, Holger
2017-06-01
Saltwater intrusion into potential drinking water aquifers due to the injection of CO2 into deep saline aquifers is one of the hazards associated with the geological storage of CO2. Thus, in a site-specific risk assessment, models for predicting the fate of the displaced brine are required. Practical simulation of brine displacement involves decisions regarding the complexity of the model. The choice of an appropriate level of model complexity depends on multiple criteria: the target variable of interest, the relevant physical processes, the computational demand, the availability of data, and the data uncertainty. In this study, we set up a regional-scale geological model for a realistic (but not real) onshore site in the North German Basin with characteristic geological features for that region. A major aim of this work is to identify the relevant parameters controlling saltwater intrusion in a complex structural setting and to test the applicability of different model simplifications. The model that is used to identify relevant parameters fully couples flow in shallow freshwater aquifers and deep saline aquifers. This model also includes variable-density transport of salt and realistically incorporates surface boundary conditions with groundwater recharge. The complexity of this model is then reduced in several steps, by neglecting physical processes (two-phase flow near the injection well, variable-density flow) and by simplifying the complex geometry of the geological model. The results indicate that the initial salt distribution prior to the injection of CO2 is one of the key parameters controlling shallow aquifer salinization. However, determining the initial salt distribution involves large uncertainties in the regional-scale hydrogeological parameterization and requires complex and computationally demanding models (regional-scale variable-density salt transport). In order to evaluate strategies for minimizing leakage into shallow aquifers, other target variables can be considered, such as the volumetric leakage rate into shallow aquifers or the pressure buildup in the injection horizon. Our results show that simplified models, which neglect variable-density salt transport, can reach an acceptable agreement with more complex models.
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
NASA Astrophysics Data System (ADS)
Pascu, Nicoleta Elisabeta; CǎruÅ£aşu, Nicoleta LuminiÅ£a.; Geambaşu, Gabriel George; Adîr, Victor Gabriel; Arion, Aurel Florin; Ivaşcu, Laura
2018-02-01
Aerial vehicles have become indispensable. There are in this field UAV (Unconventional Aerial vehicle) and transportation airplanes and other aerospace vehicles for spatial tourism. Today, the research and development activity in aerospace industry is focused to obtain a good and efficient design for airplanes, to solve the problem of high pollution and to reduce the noise. For these goals are necessary to realize light and resistant components. The aerospace industry products are, generally, very complex concerning geometric shapes and the costs are high, usually. Due to the progress in this field (products obtained using FDM) was possible to reduce the number of used tools, welding belts, and, of course, to eliminate a lot of machine tools. In addition, the complex shapes are easier product using this high technology, the cost is more attractive and the time is lower. This paper allows to present a few aspects about FDM technology and the obtained structures using it, as follows: computer geometric modeling (different designing softs) to design and redesign complex structures using 3D printing, for this kind of vehicles; finite element analysis to identify what is the influence of design for different structures; testing the structures.
Roberts, Shauna R; Crigler, Jane; Ramirez, Cristina; Sisco, Deborah; Early, Gerald L
2015-01-01
The care coordination program described here evolved from 5 years of trial and learning related to how to best serve our high-cost, high-utilizing, chronically ill, urban core patient population. In addition to medical complexity, they have daily challenges characteristic of persons served by Safety-Net health systems. Many have unstable health insurance status. Others have insecure housing. A number of patients have a history of substance use and mental illness. Many have fractured social supports. Although some of the best-known care transition models have been successful in reducing rehospitalizations and cost among patients studied, these models were developed for a relatively high functioning patient population with social support. We describe a successful approach targeted at working with patients who require a more intense and lengthy care coordination intervention to self-manage and reduce the cost of caring for their medical conditions. Using a diverse team and a set of replicable processes, we have demonstrated statistically significant reduction in the use of hospital and emergency services. Our intervention leverages the strengths and resilience of patients, focuses on trust and self-management, and targets heterogeneous "high-utilizer" patients with medical and social complexity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xuefang; Hecht, Ethan S.; Christopher, David M.
Much effort has been made to model hydrogen releases from leaks during potential failures of hydrogen storage systems. A reduced-order jet model can be used to quickly characterize these flows, with low computational cost. Notional nozzle models are often used to avoid modeling the complex shock structures produced by the underexpanded jets by determining an “effective” source to produce the observed downstream trends. In our work, the mean hydrogen concentration fields were measured in a series of subsonic and underexpanded jets using a planar laser Rayleigh scattering system. Furthermore, we compared the experimental data to a reduced order jet modelmore » for subsonic flows and a notional nozzle model coupled to the jet model for underexpanded jets. The values of some key model parameters were determined by comparisons with the experimental data. Finally, the coupled model was also validated against hydrogen concentrations measurements for 100 and 200 bar hydrogen jets with the predictions agreeing well with data in the literature.« less
Assimilation of glider and mooring data into a coastal ocean model
NASA Astrophysics Data System (ADS)
Jones, Emlyn M.; Oke, Peter R.; Rizwi, Farhan; Murray, Lawrence M.
We have applied an ensemble optimal interpolation (EnOI) data assimilation system to a high resolution coastal ocean model of south-east Tasmania, Australia. The region is characterised by a complex coastline with water masses influenced by riverine input and the interaction between two offshore current systems. Using a large static ensemble to estimate the systems background error covariance, data from a coastal observing network of fixed moorings and a Slocum glider are assimilated into the model at daily intervals. We demonstrate that the EnOI algorithm can successfully correct a biased high resolution coastal model. In areas with dense observations, the assimilation scheme reduces the RMS difference between the model and independent GHRSST observations by 90%, while the domain-wide RMS difference is reduced by a more modest 40%. Our findings show that errors introduced by surface forcing and boundary conditions can be identified and reduced by a relatively sparse observing array using an inexpensive ensemble-based data assimilation system.
Process-Improvement Cost Model for the Emergency Department.
Dyas, Sheila R; Greenfield, Eric; Messimer, Sherri; Thotakura, Swati; Gholston, Sampson; Doughty, Tracy; Hays, Mary; Ivey, Richard; Spalding, Joseph; Phillips, Robin
2015-01-01
The objective of this report is to present a simplified, activity-based costing approach for hospital emergency departments (EDs) to use with Lean Six Sigma cost-benefit analyses. The cost model complexity is reduced by removing diagnostic and condition-specific costs, thereby revealing the underlying process activities' cost inefficiencies. Examples are provided for evaluating the cost savings from reducing discharge delays and the cost impact of keeping patients in the ED (boarding) after the decision to admit has been made. The process-improvement cost model provides a needed tool in selecting, prioritizing, and validating Lean process-improvement projects in the ED and other areas of patient care that involve multiple dissimilar diagnoses.
Investigation of approximate models of experimental temperature characteristics of machines
NASA Astrophysics Data System (ADS)
Parfenov, I. V.; Polyakov, A. N.
2018-05-01
This work is devoted to the investigation of various approaches to the approximation of experimental data and the creation of simulation mathematical models of thermal processes in machines with the aim of finding ways to reduce the time of their field tests and reducing the temperature error of the treatments. The main methods of research which the authors used in this work are: the full-scale thermal testing of machines; realization of various approaches at approximation of experimental temperature characteristics of machine tools by polynomial models; analysis and evaluation of modelling results (model quality) of the temperature characteristics of machines and their derivatives up to the third order in time. As a result of the performed researches, rational methods, type, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.
Walcott, Brian P; Reinshagen, Clemens; Stapleton, Christopher J; Choudhri, Omar; Rayz, Vitaliy; Saloner, David; Lawton, Michael T
2016-06-01
Cerebral aneurysms are weakened blood vessel dilatations that can result in spontaneous, devastating hemorrhage events. Aneurysm treatment aims to reduce hemorrhage events, and strategies for complex aneurysms often require surgical bypass or endovascular stenting for blood flow diversion. Interventions that divert blood flow from their normal circulation patterns have the potential to result in unintentional ischemia. Recent developments in computational modeling and in vivo assessment of hemodynamics for cerebral aneurysm treatment have entered into clinical practice. Herein, we review how these techniques are currently utilized to improve risk stratification and treatment planning. © The Author(s) 2016.
A developmental approach to mentalizing communities: I. A model for social change.
Twemlow, Stuart W; Fonagy, Peter; Sacco, Frank C
2005-01-01
A developmental model is proposed applying attachment theory to complex social systems to promote social change. The idea of mentalizing communities is outlined with a proposal for three projects testing the model: ways to reduce bullying and create a peaceful climate in schools, projects to promote compassion in cities by a focus of end-of-life care, and a mentalization-based intervention into parenting style of borderline and substance abusing parents.
The Nature of Arsenic-Phytochelatin Complexes in Holcus lanatus and Pteris cretica1
Raab, Andrea; Feldmann, Jörg; Meharg, Andrew A.
2004-01-01
We have developed a method to extract and separate phytochelatins (PCs)—metal(loid) complexes using parallel metal(loid)-specific (inductively coupled plasma-mass spectrometry) and organic-specific (electrospray ionization-mass spectrometry) detection systems—and use it here to ascertain the nature of arsenic (As)-PC complexes in plant extracts. This study is the first unequivocal report, to our knowledge, of PC complex coordination chemistry in plant extracts for any metal or metalloid ion. The As-tolerant grass Holcus lanatus and the As hyperaccumulator Pteris cretica were used as model plants. In an in vitro experiment using a mixture of reduced glutathione (GS), PC2, and PC3, As preferred the formation of the arsenite [As(III)]-PC3 complex over GS-As(III)-PC2, As(III)-(GS)3, As(III)-PC2, or As(III)-(PC2)2 (GS: glutathione bound to arsenic via sulphur of cysteine). In H. lanatus, the As(III)-PC3 complex was the dominant complex, although reduced glutathione, PC2, and PC3 were found in the extract. P. cretica only synthesizes PC2 and forms dominantly the GS-As(III)-PC2 complex. This is the first evidence, to our knowledge, for the existence of mixed glutathione-PC-metal(loid) complexes in plant tissues or in vitro. In both plant species, As is dominantly in non-bound inorganic forms, with 13% being present in PC complexes for H. lanatus and 1% in P. cretica. PMID:15001701
Ensemble-Based Parameter Estimation in a Coupled General Circulation Model
Liu, Y.; Liu, Z.; Zhang, S.; ...
2014-09-10
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parametermore » estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.« less
Model-order reduction of lumped parameter systems via fractional calculus
NASA Astrophysics Data System (ADS)
Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio
2018-04-01
This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.
2016 International Land Model Benchmarking (ILAMB) Workshop Report
NASA Technical Reports Server (NTRS)
Hoffman, Forrest M.; Koven, Charles D.; Keppel-Aleks, Gretchen; Lawrence, David M.; Riley, William J.; Randerson, James T.; Ahlstrom, Anders; Abramowitz, Gabriel; Baldocchi, Dennis D.; Best, Martin J.;
2016-01-01
As earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of terrestrial biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistryclimate feedbacks and ecosystem processes in these models are essential for reducing the acknowledged substantial uncertainties in 21st century climate change projections.
2016 International Land Model Benchmarking (ILAMB) Workshop Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forrest M.; Koven, Charles D.; Keppel-Aleks, Gretchen
As Earth system models become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing uncertainties associated with projections of climate change during the remainder of the 21st century.
NASA Astrophysics Data System (ADS)
Randers, Jorgen; Golüke, Ulrich; Wenstøp, Fred; Wenstøp, Søren
2016-11-01
We have made a simple system dynamics model, ESCIMO (Earth System Climate Interpretable Model), which runs on a desktop computer in seconds and is able to reproduce the main output from more complex climate models. ESCIMO represents the main causal mechanisms at work in the Earth system and is able to reproduce the broad outline of climate history from 1850 to 2015. We have run many simulations with ESCIMO to 2100 and beyond. In this paper we present the effects of introducing in 2015 six possible global policy interventions that cost around USD 1000 billion per year - around 1 % of world GDP. We tentatively conclude (a) that these policy interventions can at most reduce the global mean surface temperature - GMST - by up to 0.5 °C in 2050 and up to 1.0 °C in 2100 relative to no intervention. The exception is injection of aerosols into the stratosphere, which can reduce the GMST by more than 1.0 °C in a decade but creates other serious problems. We also conclude (b) that relatively cheap human intervention can keep global warming in this century below +2 °C relative to preindustrial times. Finally, we conclude (c) that run-away warming is unlikely to occur in this century but is likely to occur in the longer run. The ensuing warming is slow, however. In ESCIMO, it takes several hundred years to lift the GMST to +3 °C above preindustrial times through gradual self-reinforcing melting of the permafrost. We call for research to test whether more complex climate models support our tentative conclusions from ESCIMO.
Galbraith, Eric D.; Dunne, John P.; Gnanadesikan, Anand; ...
2015-12-21
Earth System Models increasingly include ocean biogeochemistry models in order to predict changes in ocean carbon storage, hypoxia, and biological productivity under climate change. However, state-of-the-art ocean biogeochemical models include many advected tracers, that significantly increase the computational resources required, forcing a trade-off with spatial resolution. Here, we compare a state-of the art model with 30 prognostic tracers (TOPAZ) with two reduced-tracer models, one with 6 tracers (BLING), and the other with 3 tracers (miniBLING). The reduced-tracer models employ parameterized, implicit biological functions, which nonetheless capture many of the most important processes resolved by TOPAZ. All three are embedded inmore » the same coupled climate model. Despite the large difference in tracer number, the absence of tracers for living organic matter is shown to have a minimal impact on the transport of nutrient elements, and the three models produce similar mean annual preindustrial distributions of macronutrients, oxygen, and carbon. Significant differences do exist among the models, in particular the seasonal cycle of biomass and export production, but it does not appear that these are necessary consequences of the reduced tracer number. With increasing CO2, changes in dissolved oxygen and anthropogenic carbon uptake are very similar across the different models. Thus, while the reduced-tracer models do not explicitly resolve the diversity and internal dynamics of marine ecosystems, we demonstrate that such models are applicable to a broad suite of major biogeochemical concerns, including anthropogenic change. Lastly, these results are very promising for the further development and application of reduced-tracer biogeochemical models that incorporate ‘‘sub-ecosystem-scale’’ parameterizations.« less
Reduced order modeling and active flow control of an inlet duct
NASA Astrophysics Data System (ADS)
Ge, Xiaoqing
Many aerodynamic applications require the modeling of compressible flows in or around a body, e.g., the design of aircraft, inlet or exhaust duct, wind turbines, or tall buildings. Traditional methods use wind tunnel experiments and computational fluid dynamics (CFD) to investigate the spatial and temporal distribution of the flows. Although they provide a great deal of insight into the essential characteristics of the flow field, they are not suitable for control analysis and design due to the high physical/computational cost. Many model reduction methods have been studied to reduce the complexity of the flow model. There are two main approaches: linearization based input/output modeling and proper orthogonal decomposition (POD) based model reduction. The former captures mostly the local behavior near a steady state, which is suitable to model laminar flow dynamics. The latter obtains a reduced order model by projecting the governing equation onto an "optimal" subspace and is able to model complex nonlinear flow phenomena. In this research we investigate various model reduction approaches and compare them in flow modeling and control design. We propose an integrated model-based control methodology and apply it to the reduced order modeling and active flow control of compressible flows within a very aggressive (length to exit diameter ratio, L/D, of 1.5) inlet duct and its upstream contraction section. The approach systematically applies reduced order modeling, estimator design, sensor placement and control design to improve the aerodynamic performance. The main contribution of this work is the development of a hybrid model reduction approach that attempts to combine the best features of input/output model identification and POD method. We first identify a linear input/output model by using a subspace algorithm. We next project the difference between CFD response and the identified model response onto a set of POD basis. This trajectory is fit to a nonlinear dynamical model to augment the linear input/output model. Thus, the full system is decomposed into a dominant linear subsystem and a low order nonlinear subsystem. The hybrid model is then used for control design and compared with other modeling methods in CFD simulations. Numerical results indicate that the hybrid model accurately predicts the nonlinear behavior of the flow for a 2D diffuser contraction section model. It also performs best in terms of feedback control design and learning control. Since some outputs of interest (e.g., the AIP pressure recovery) are not observable during normal operations, static and dynamic estimators are designed to recreate the information from available sensor measurements. The latter also provides a state estimation for feedback controller. Based on the reduced order models and estimators, different controllers are designed to improve the aerodynamic performance of the contraction section and inlet duct. The integrated control methodology is evaluated with CFD simulations. Numerical results demonstrate the feasibility and efficacy of the active flow control based on reduced order models. Our reduced order models not only generate a good approximation of the nonlinear flow dynamics over a wide input range, but also help to design controllers that significantly improve the flow response. The tools developed for model reduction, estimator and control design can also be applied to wind tunnel experiment.
Simplified paraboloid phase model-based phase tracker for demodulation of a single complex fringe.
He, A; Deepan, B; Quan, C
2017-09-01
A regularized phase tracker (RPT) is an effective method for demodulation of single closed-fringe patterns. However, lengthy calculation time, specially designed scanning strategy, and sign-ambiguity problems caused by noise and saddle points reduce its effectiveness, especially for demodulating large and complex fringe patterns. In this paper, a simplified paraboloid phase model-based regularized phase tracker (SPRPT) is proposed. In SPRPT, first and second phase derivatives are pre-determined by the density-direction-combined method and discrete higher-order demodulation algorithm, respectively. Hence, cost function is effectively simplified to reduce the computation time significantly. Moreover, pre-determined phase derivatives improve the robustness of the demodulation of closed, complex fringe patterns. Thus, no specifically designed scanning strategy is needed; nevertheless, it is robust against the sign-ambiguity problem. The paraboloid phase model also assures better accuracy and robustness against noise. Both the simulated and experimental fringe patterns (obtained using electronic speckle pattern interferometry) are used to validate the proposed method, and a comparison of the proposed method with existing RPT methods is carried out. The simulation results show that the proposed method has achieved the highest accuracy with less computational time. The experimental result proves the robustness and the accuracy of the proposed method for demodulation of noisy fringe patterns and its feasibility for static and dynamic applications.
Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn
NASA Astrophysics Data System (ADS)
Gargano, A.; Coraggio, L.; Itaco, N.
2017-09-01
This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which is aimed to reduce the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space, and then, by analyzing the effective single particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given and the merit of the double-step truncation method is illustrated by discussing few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.
Postprocessing of docked protein-ligand complexes using implicit solvation models.
Lindström, Anton; Edvinsson, Lotta; Johansson, Andreas; Andersson, C David; Andersson, Ida E; Raubacher, Florian; Linusson, Anna
2011-02-28
Molecular docking plays an important role in drug discovery as a tool for the structure-based design of small organic ligands for macromolecules. Possible applications of docking are identification of the bioactive conformation of a protein-ligand complex and the ranking of different ligands with respect to their strength of binding to a particular target. We have investigated the effect of implicit water on the postprocessing of binding poses generated by molecular docking using MM-PB/GB-SA (molecular mechanics Poisson-Boltzmann and generalized Born surface area) methodology. The investigation was divided into three parts: geometry optimization, pose selection, and estimation of the relative binding energies of docked protein-ligand complexes. Appropriate geometry optimization afforded more accurate binding poses for 20% of the complexes investigated. The time required for this step was greatly reduced by minimizing the energy of the binding site using GB solvation models rather than minimizing the entire complex using the PB model. By optimizing the geometries of docking poses using the GB(HCT+SA) model then calculating their free energies of binding using the PB implicit solvent model, binding poses similar to those observed in crystal structures were obtained. Rescoring of these poses according to their calculated binding energies resulted in improved correlations with experimental binding data. These correlations could be further improved by applying the postprocessing to several of the most highly ranked poses rather than focusing exclusively on the top-scored pose. The postprocessing protocol was successfully applied to the analysis of a set of Factor Xa inhibitors and a set of glycopeptide ligands for the class II major histocompatibility complex (MHC) A(q) protein. These results indicate that the protocol for the postprocessing of docked protein-ligand complexes developed in this paper may be generally useful for structure-based design in drug discovery.
NASA Astrophysics Data System (ADS)
Tejada, I. G.; Brochard, L.; Stoltz, G.; Legoll, F.; Lelièvre, T.; Cancès, E.
2015-01-01
Molecular dynamics is a simulation technique that can be used to study failure in solids, provided the inter-atomic potential energy is able to account for the complex mechanisms at failure. Reactive potentials fitted on ab initio results or on experimental values have the ability to adapt to any complex atomic arrangement and, therefore, are suited to simulate failure. But the complexity of these potentials, together with the size of the systems considered, make simulations computationally expensive. In order to improve the efficiency of numerical simulations, simpler harmonic potentials can be used instead of complex reactive potentials in the regions where the system is close to its ground state and a harmonic approximation reasonably fits the actual reactive potential. However the validity and precision of such an approach has not been investigated in detail yet. We present here a methodology for constructing a reduced potential and combining it with the reactive one. We also report some important features of crack propagation that may be affected by the coupling of reactive and reduced potentials. As an illustrative case, we model a crystalline two-dimensional material (graphene) with a reactive empirical bond-order potential (REBO) or with harmonic potentials made of bond and angle springs that are designed to reproduce the second order approximation of REBO in the ground state. We analyze the consistency of this approximation by comparing the mechanical behavior and the phonon spectra of systems modeled with these potentials. These tests reveal when the anharmonicity effects appear. As anharmonic effects originate from strain, stress or temperature, the latter quantities are the basis for establishing coupling criteria for on the fly substitution in large simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles
2014-03-01
A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities such as hydrogen generation or release of fission products under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing themore » range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583kg 131kg. Sensitivity analyses revealed expected trends with respected to the parameters of greatest importance, however, considerable scatter in results when plotted against any of the uncertain parameters was observed, with no parameter manifesting dominant effects on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, in order to further reduce predicted hydrogen uncertainty, it would be necessary to reduce all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena, is an inherent aspect of complex systems and cannot be reduced to point value estimates. The probabilistic analyses such as the one demonstrated in this work are important to properly characterize response of complex systems such as severe accident progression in nuclear power plants.« less
A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.
Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming
2015-01-01
Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereafter reduce airspace congestion and flight delays. The conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, the capacity uncertainties due to the dynamics of convective weather may make the deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in SATNFO problem. Computation experiments using real-world air traffic network data associated with simulated weather data show that presented model has far less constraints compared to stochastic model with nonanticipative constraints, which means our proposed model reduces the computation complexity.
Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Boucher, Matthew J.
2017-01-01
Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.
Hong, Taehoon; Koo, Choongwan; Kim, Hyunjoong
2012-12-15
The number of deteriorated multi-family housing complexes in South Korea continues to rise, and consequently their electricity consumption is also increasing. This needs to be addressed as part of the nation's efforts to reduce energy consumption. The objective of this research was to develop a decision support model for determining the need to improve multi-family housing complexes. In this research, 1664 cases located in Seoul were selected for model development. The research team collected the characteristics and electricity energy consumption data of these projects in 2009-2010. The following were carried out in this research: (i) using the Decision Tree, multi-family housing complexes were clustered based on their electricity energy consumption; (ii) using Case-Based Reasoning, similar cases were retrieved from the same cluster; and (iii) using a combination of Multiple Regression Analysis, Artificial Neural Network, and Genetic Algorithm, the prediction performance of the developed model was improved. The results of this research can be used as follows: (i) as basic research data for continuously managing several energy consumption data of multi-family housing complexes; (ii) as advanced research data for predicting energy consumption based on the project characteristics; (iii) as practical research data for selecting the most optimal multi-family housing complex with the most potential in terms of energy savings; and (iv) as consistent and objective criteria for incentives and penalties. Copyright © 2012 Elsevier Ltd. All rights reserved.
Modeling OPC complexity for design for manufacturability
NASA Astrophysics Data System (ADS)
Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong
2005-11-01
Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help in preserving feature fidelity in silicon but increase mask complexity and cost. Data volume increase with rise in mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RET increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax OPC tolerances of layout features to minimize mask cost, without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of impact of OPC correction levels on mask cost and performance of the design. This work introduces mask cost characterization (MCC) that quantifies OPC complexity, measured in terms of fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present a MCC methodology that provides models of fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data from OPC and mask data preparation runs, we build models of FC as function of OPC tolerances and layout parameters.
Understanding the Dynamics of Soil Carbon in CMIP5 Models
NASA Astrophysics Data System (ADS)
Todd-Brown, K. E.; Luo, Y.; Randerson, J. T.; Allison, S. D.; Smith, M. J.
2014-12-01
Soil carbon stocks have the potential to be a strong source or sink for carbon dioxide over the next century, playing a critical role in climate change. These stocks are the result of small differences between much larger primary carbon fluxes: gross primary production, litter fall, autotrophic respiration and heterotrophic respiration. There was little agreement on predicted soil carbon stocks between Earth system models (ESMs) in the most recent Climate Model Intercomparison Project. Predicted present-day stocks ranged from roughly 500 Pg to over 3000 Pg and predicted changes over the 21st century ranged from -70 Pg to +250 Pg). The primary goal of this study was to understand why such large differences exist. We constructed four reduced complexity models to describe the primary carbon fluxes, making different assumptions about how soil carbon fluxes are modelled in ESMs. For each of these reduced complexity models we statistically inferred the most likely model parameters given the gridded ESM simulation outputs. Gross primary production was best explained by incoming short wave radiation, CO2 concentration, and leaf area index (global GPP comparison of simulation vs reduced complexity model of R2>0.9 (p < 1e-4) with slopes between 0.65 and 1.2 and intercepts between -13 and 67 Pg C yr-1). Autotrophic respiration was best explained as a proportion of GPP (R2 > 0.9 (p < 1e-4) with slopes between 0.78 and 1.1 and intercepts between -15 and 14 Pg C yr-1). Flux between the vegetation and soil pools were best explained as a proportion of the vegetation carbon stock (R2 > 0.9 (p < 1e-4) with slopes between 0.9 and 2.1 and intercepts between -65 and 25 Pg C yr-1). Finally heterotrophic respiration was best explained as a function of soil carbon stocks and soil temperature (R2 > 0.9 (p < 1e-4) with slopes between 0.7 and 1.5 and intercepts between -40 and 15 Pg C yr-1). This research suggests three main lines of decomposition model improvement: 1) improve connecting sub-models, 2) data integration to improve parameterization, 3) modification of model structure. The implied variation in RCM parameterization suggests that data integration could constrain model simulation results. However, the similarity in model structure may lead to systematic biases in the simulations without the introduction of new model structures.
Proper Orthogonal Decomposition in Optimal Control of Fluids
NASA Technical Reports Server (NTRS)
Ravindran, S. S.
1999-01-01
In this article, we present a reduced order modeling approach suitable for active control of fluid dynamical systems based on proper orthogonal decomposition (POD). The rationale behind the reduced order modeling is that numerical simulation of Navier-Stokes equations is still too costly for the purpose of optimization and control of unsteady flows. We examine the possibility of obtaining reduced order models that reduce computational complexity associated with the Navier-Stokes equations while capturing the essential dynamics by using the POD. The POD allows extraction of certain optimal set of basis functions, perhaps few, from a computational or experimental data-base through an eigenvalue analysis. The solution is then obtained as a linear combination of these optimal set of basis functions by means of Galerkin projection. This makes it attractive for optimal control and estimation of systems governed by partial differential equations. We here use it in active control of fluid flows governed by the Navier-Stokes equations. We show that the resulting reduced order model can be very efficient for the computations of optimization and control problems in unsteady flows. Finally, implementational issues and numerical experiments are presented for simulations and optimal control of fluid flow through channels.
Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.
2015-12-01
Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.
Computational and experimental study of airflow around a fan powered UVGI lamp
NASA Astrophysics Data System (ADS)
Kaligotla, Srikar; Tavakoli, Behtash; Glauser, Mark; Ahmadi, Goodarz
2011-11-01
The quality of indoor air environment is very important for improving the health of occupants and reducing personal exposure to hazardous pollutants. An effective way of controlling air quality is by eliminating the airborne bacteria and viruses or by reducing their emissions. Ultraviolet Germicidal Irradiation (UVGI) lamps can effectively reduce these bio-contaminants in an indoor environment, but the efficiency of these systems depends on airflow in and around the device. UVGI lamps would not be as effective in stagnant environments as they would be when the moving air brings the bio-contaminant in their irradiation region. Introducing a fan into the UVGI system would augment the efficiency of the system's kill rate. Airflows in ventilated spaces are quite complex due to the vast range of length and velocity scales. The purpose of this research is to study these complex airflows using CFD techniques and validate computational model with airflow measurements around the device using Particle Image Velocimetry measurements. The experimental results including mean velocities, length scales and RMS values of fluctuating velocities are used in the CFD validation. Comparison of these data at different locations around the device with the CFD model predictions are performed and good agreement was observed.
Gravitational lensing by eigenvalue distributions of random matrix models
NASA Astrophysics Data System (ADS)
Martínez Alonso, Luis; Medina, Elena
2018-05-01
We propose to use eigenvalue densities of unitary random matrix ensembles as mass distributions in gravitational lensing. The corresponding lens equations reduce to algebraic equations in the complex plane which can be treated analytically. We prove that these models can be applied to describe lensing by systems of edge-on galaxies. We illustrate our analysis with the Gaussian and the quartic unitary matrix ensembles.
The Idaho dedicated education unit model: cost-effective, high-quality education.
Springer, Pamela J; Johnson, Patricia; Lind, Bonnie; Walker, Eldon; Clavelle, Joanne; Jensen, Nancy
2012-01-01
Faculty face many challenges in delivering clinical education, including faculty availability, the complexity of the faculty role, and limited clinical placements. Dedicated education units (DEUs) are being explored as alternatives to traditional clinical placement models. The authors describe the successful development of a DEU that resulted in positive student outcomes at reduced cost to both the school and the medical center.
NASA Astrophysics Data System (ADS)
Nikolaeva, L. S.; Semenov, A. N.
2018-02-01
The anticoagulant activity of high-molecular-weight heparin is increased by developing a new highly active heparin complex with glutamate using the thermodynamic model of chemical equilibria based on pH-metric data. The anticoagulant activity of the developed complexes is estimated in the pH range of blood plasma according to the drop in the calculated equilibrium Ca2+ concentration associated with the formation of mixed ligand complexes of Ca2+ ions, heparin (Na4hep), and glutamate (H2Glu). A thermodynamic model is calculated by mathematically modelling chemical equilibria in the CaCl2-Na4hep-H2Glu-H2O-NaCl system in the pH range of 2.30 ≤ pH ≤ 10.50 in diluted saline that acts as a background electrolyte (0.154 M NaCl) at 37°C and initial concentrations of the main components of ν × 10-3 M, where n ≤ 4. The thermodynamic model is used to determine the main complex of the monomeric unit of heparin with glutamate (HhepGlu5-) and the most stable mixed ligand complex of Ca2+ with heparin and glutamate (Ca2hepGlu2-) in the pH range of blood plasma (6.80 ≤ pH ≤ 7.40). It is concluded that the Ca2hepGlu2- complex reduces the Ca2+ concentration 107 times more than the Ca2+ complex with pure heparin. The anticoagulant effect of the developed HhepGlu5- complex is confirmed in vitro and in vivo via coagulation tests on the blood plasma of laboratory rats. Additional antithrombotic properties of the developed complex are identified. The new highly active anticoagulant, HhepGlu5- complex with additional antithrombotic properties, is patented.
Peng, Shanli; Xue, Lei; Leng, Xue; Yang, Ruobing; Zhang, Genyi; Hamaker, Bruce R
2015-03-18
The in vivo slow digestion property of octenyl succinic anhydride modified waxy corn starch (OSA-starch) in the presence of tea polyphenols (TPLs) was studied. Using a mouse model, the experimental results showed an extended and moderate postprandial glycemic response with a delayed and significantly decreased blood glucose peak of OSA-starch after cocooking with TPLs (5% starch weight base). Further studies revealed an increased hydrodynamic radius of OSA-starch molecules indicating an interaction between OSA-starch and TPLs. Additionally, decreased gelatinization temperature and enthalpy and reduced viscosity and emulsifiability of OSA-starch support their possible complexation to form a spherical OSA-starch-TPLs (OSAT) complex. The moderate and extended postprandial glycemic response is likely caused by decreased activity of mucosal α-glucosidase, which is noncompetitively inhibited by tea catechins released from the complex during digestion. Meanwhile, a significant decrease of malondialdehyde (MDA) and increased DPPH free radical scavenging activity in small intestine tissue demonstrated the antioxidative functional property of the OSAT complex. Thus, the complex of OSAT, acting as a functional carbohydrate material, not only leads to a flattened and prolonged glycemic response but also reduces the oxidative stress, which might be beneficial to health.
Reeve, Joanne; Cooper, Lucy; Harrington, Sean; Rosbottom, Peter; Watkins, Jane
2016-09-06
Health services face the challenges created by complex problems, and so need complex intervention solutions. However they also experience ongoing difficulties in translating findings from research in this area in to quality improvement changes on the ground. BounceBack was a service development innovation project which sought to examine this issue through the implementation and evaluation in a primary care setting of a novel complex intervention. The project was a collaboration between a local mental health charity, an academic unit, and GP practices. The aim was to translate the charity's model of care into practice-based evidence describing delivery and impact. Normalisation Process Theory (NPT) was used to support the implementation of the new model of primary mental health care into six GP practices. An integrated process evaluation evaluated the process and impact of care. Implementation quickly stalled as we identified problems with the described model of care when applied in a changing and variable primary care context. The team therefore switched to using the NPT framework to support the systematic identification and modification of the components of the complex intervention: including the core components that made it distinct (the consultation approach) and the variable components (organisational issues) that made it work in practice. The extra work significantly reduced the time available for outcome evaluation. However findings demonstrated moderately successful implementation of the model and a suggestion of hypothesised changes in outcomes. The BounceBack project demonstrates the development of a complex intervention from practice. It highlights the use of Normalisation Process Theory to support development, and not just implementation, of a complex intervention; and describes the use of the research process in the generation of practice-based evidence. Implications for future translational complex intervention research supporting practice change through scholarship are discussed.
Ground state atoms confined in a real Rydberg and complex Rydberg-Scarf II potential
NASA Astrophysics Data System (ADS)
Mansoori Kermani, Maryam
2017-12-01
In this work, a system of two ground state atoms confined in a one-dimensional real Rydberg potential was modeled. The atom-atom interaction was considered as a nonlocal separable potential (NLSP) of rank one. This potential was assumed because it leads to an analytical solution of the Lippmann-Schwinger equation. The NLSPs are useful in the few body problems that the many-body potential at each point is replaced by a projective two-body nonlocal potential operator. Analytical expressions for the confined particle resolvent were calculated as a key function in this study. The contributions of the bound and virtual states in the complex energy plane were obtained via the derived transition matrix. Since the low energy quantum scattering problems scattering length is an important quantity, the behavior of this parameter was described versus the reduced energy considering various values of potential parameters. In a one-dimensional model, the total cross section in units of the area is not a meaningful property; however, the reflectance coefficient has a similar role. Therefore the reflectance probability and its behavior were investigated. Then a new confined potential via combining the complex absorbing Scarf II potential with the real Rydberg potential, called the Rydberg-Scarf II potential, was introduced to construct a non-Hermitian Hamiltonian. In order to investigate the effect of the complex potential, the scattering length and reflectance coefficient were calculated. It was concluded that in addition to the competition between the repulsive and attractive parts of both potentials, the imaginary part of the complex potential has an important effect on the properties of the system. The complex potential also reduces the reflectance probability via increasing the absorption probability. For all numerical computations, the parameters of a system including argon gas confined in graphite were considered.
Hydrothermal growth of cross-linked hyperbranched copper dendrites using copper oxalate complex
NASA Astrophysics Data System (ADS)
Truong, Quang Duc; Kakihana, Masato
2012-06-01
A facile and surfactant-free approach has been developed for the synthesis of cross-linked hyperbranched copper dendrites using copper oxalate complex as a precursor and oxalic acid as a reducing and structure-directing agent. The synthesized particles are composed of highly branched nanostructures with unusual cross-linked hierarchical networks. The formation of copper dendrites can be explained in view of both diffusion control and aggregation-based growth model accompanied by the chelation-assisted assembly. Oxalic acid was found to play dual roles as reducing and structure-directing agent based on the investigation results. The understanding on the crystal growth and the roles of oxalic acid provides clear insight into the formation mechanism of hyperbranched metal dendrites.
Modeling Reduced Human Performance as a Complex Adaptive System
2003-09-01
successfully used this design strategy in the domain of military simulation. See ( Arntzen 1998; Bohmann 1999; Le 1999; Schrepf 1999). 89 LCC’s design...THIS PAGE INTENTIONALLY LEFT BLANK 187 LIST OF REFERENCES Arntzen , A. (1998). Software Components for Air Defense Planing. Operations
Lysine desuccinylase SIRT5 binds to cardiolipin and regulates the electron transport chain.
Zhang, Yuxun; Bharathi, Sivakama S; Rardin, Matthew J; Lu, Jie; Maringer, Katherine V; Sims-Lucas, Sunder; Prochownik, Edward V; Gibson, Bradford W; Goetzman, Eric S
2017-06-16
SIRT5 is a lysine desuccinylase known to regulate mitochondrial fatty acid oxidation and the urea cycle. Here, SIRT5 was observed to bind to cardiolipin via an amphipathic helix on its N terminus. In vitro , succinyl-CoA was used to succinylate liver mitochondrial membrane proteins. SIRT5 largely reversed the succinyl-CoA-driven lysine succinylation. Quantitative mass spectrometry of SIRT5-treated membrane proteins pointed to the electron transport chain, particularly Complex I, as being highly targeted for desuccinylation by SIRT5. Correspondingly, SIRT5 -/- HEK293 cells showed defects in both Complex I- and Complex II-driven respiration. In mouse liver, SIRT5 expression was observed to localize strictly to the periportal hepatocytes. However, homogenates prepared from whole SIRT5 -/- liver did show reduced Complex II-driven respiration. The enzymatic activities of Complex II and ATP synthase were also significantly reduced. Three-dimensional modeling of Complex II suggested that several SIRT5-targeted lysine residues lie at the protein-lipid interface of succinate dehydrogenase subunit B. We postulate that succinylation at these sites may disrupt Complex II subunit-subunit interactions and electron transfer. Lastly, SIRT5 -/- mice, like humans with Complex II deficiency, were found to have mild lactic acidosis. Our findings suggest that SIRT5 is targeted to protein complexes on the inner mitochondrial membrane via affinity for cardiolipin to promote respiratory chain function. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
Reduction of N2 by supported tungsten clusters gives a model of the process by nitrogenase
Murakami, Junichi; Yamaguchi, Wataru
2012-01-01
Metalloenzymes catalyze difficult chemical reactions under mild conditions. Mimicking their functions is a challenging task and it has been investigated using homogeneous systems containing metal complexes. The nitrogenase that converts N2 to NH3 under mild conditions is one of such enzymes. Efforts to realize the biological function have continued for more than four decades, which has resulted in several reports of reduction of N2, ligated to metal complexes in solutions, to NH3 by protonation under mild conditions. Here, we show that seemingly distinct supported small tungsten clusters in a dry environment reduce N2 under mild conditions like the nitrogenase. N2 is reduced to NH3 via N2H4 by addition of neutral H atoms, which agrees with the mechanism recently proposed for the N2 reduction on the active site of nitrogenase. The process on the supported clusters gives a model of the biological N2 reduction. PMID:22586517
Modeling competitive substitution in a polyelectrolyte complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, B.; Muthukumar, M., E-mail: muthu@polysci.umass.edu
2015-12-28
We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain, by another longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain is required to be sufficiently longer than the chain being displaced for effecting the substitution. Yet, having the invading chain to be longermore » than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.« less
Spatiotemporal control to eliminate cardiac alternans using isostable reduction
NASA Astrophysics Data System (ADS)
Wilson, Dan; Moehlis, Jeff
2017-03-01
Cardiac alternans, an arrhythmia characterized by a beat-to-beat alternation of cardiac action potential durations, is widely believed to facilitate the transition from normal cardiac function to ventricular fibrillation and sudden cardiac death. Alternans arises due to an instability of a healthy period-1 rhythm, and most dynamical control strategies either require extensive knowledge of the cardiac system, making experimental validation difficult, or are model independent and sacrifice important information about the specific system under study. Isostable reduction provides an alternative approach, in which the response of a system to external perturbations can be used to reduce the complexity of a cardiac system, making it easier to work with from an analytical perspective while retaining many of its important features. Here, we use isostable reduction strategies to reduce the complexity of partial differential equation models of cardiac systems in order to develop energy optimal strategies for the elimination of alternans. Resulting control strategies require significantly less energy to terminate alternans than comparable strategies and do not require continuous state feedback.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Brian B; Purba, Victor; Jafarpour, Saber
Given that next-generation infrastructures will contain large numbers of grid-connected inverters and these interfaces will be satisfying a growing fraction of system load, it is imperative to analyze the impacts of power electronics on such systems. However, since each inverter model has a relatively large number of dynamic states, it would be impractical to execute complex system models where the full dynamics of each inverter are retained. To address this challenge, we derive a reduced-order structure-preserving model for parallel-connected grid-tied three-phase inverters. Here, each inverter in the system is assumed to have a full-bridge topology, LCL filter at the pointmore » of common coupling, and the control architecture for each inverter includes a current controller, a power controller, and a phase-locked loop for grid synchronization. We outline a structure-preserving reduced-order inverter model for the setting where the parallel inverters are each designed such that the filter components and controller gains scale linearly with the power rating. By structure preserving, we mean that the reduced-order three-phase inverter model is also composed of an LCL filter, a power controller, current controller, and PLL. That is, we show that the system of parallel inverters can be modeled exactly as one aggregated inverter unit and this equivalent model has the same number of dynamical states as an individual inverter in the paralleled system. Numerical simulations validate the reduced-order models.« less
Ngu, Lock Hock; Nijtmans, Leo G; Distelmaier, Felix; Venselaar, Hanka; van Emst-de Vries, Sjenet E; van den Brand, Mariël A M; Stoltenborg, Berendien J M; Wintjes, Liesbeth T; Willems, Peter H; van den Heuvel, Lambertus P; Smeitink, Jan A; Rodenburg, Richard J T
2012-02-01
In this study, we investigated the pathogenicity of a homozygous Asp446Asn mutation in the NDUFS2 gene of a patient with a mitochondrial respiratory chain complex I deficiency. The clinical, biochemical, and genetic features of the NDUFS2 patient were compared with those of 4 patients with previously identified NDUFS2 mutations. All 5 patients presented with Leigh syndrome. In addition, 3 out of 5 showed hypertrophic cardiomyopathy. Complex I amounts in the patient carrying the Asp446Asn mutation were normal, while the complex I activity was strongly reduced, showing that the NDUFS2 mutation affects complex I enzymatic function. By contrast, the 4 other NDUFS2 patients showed both a reduced amount and activity of complex I. The enzymatic defect in fibroblasts of the patient carrying the Asp446Asn mutation was rescued by transduction of wild type NDUFS2. A 3-D model of the catalytic core of complex I showed that the mutated amino acid residue resides near the coenzyme Q binding pocket. However, the K(M) of complex I for coenzyme Q analogs of the Asp446Asn mutated complex I was similar to the K(M) observed in other complex I defects and in controls. We propose that the mutation interferes with the reduction of coenzyme Q or with the coupling of coenzyme Q reduction with the conformational changes involved in proton pumping of complex I. Copyright © 2011 Elsevier B.V. All rights reserved.
Stability of actin-lysozyme complexes formed in cystic fibrosis disease.
Mohammadinejad, Sarah; Ghamkhari, Behnoush; Abdolmaleki, Sarah
2016-08-21
Finding the conditions for destabilizing actin-lysozyme complexes is of biomedical importance in preventing infections in cystic fibrosis. In this manuscript, the effects of different charge-mutants of lysozyme and salt concentration on the stability of actin-lysozyme complexes are studied using Langevin dynamics simulation. A coarse-grained model of F-actin is used in which both its twist and bending rigidities are considered. We observe that the attraction between F-actins is stronger in the presence of wild-type lysozymes relative to the mutated lysozymes of lower charges. By calculating the potential of mean force between F-actins, we conclude that the stability of actin-lysozyme complexes is decreased by reducing the charge of lysozyme mutants. The distributions of different lysozyme charge-mutants show that wild-type (+9e) lysozymes are mostly accumulated in the center of triangles formed by three adjacent F-actins, while lysozyme mutants of charges +7e and +5e occupy the bridging regions between F-actins. Low-charge mutants of lysozyme (+3e) distribute uniformly around F-actins. A rough estimate of the electrostatic energy for these different distributions proves that the distribution in which lysozymes reside in the center of triangles leads to more stable complexes. Also our results in the presence of a salt suggest that at physiological salt concentration of airway, F-actin complexes are not formed by charge-reduced mutants of lysozyme. The findings are interesting because if we can design charge-reduced lysozyme mutants with considerable antibacterial activity, they are not sequestered inside F-actin aggregates and can play their role as antibacterial agents against airway infection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, L.K.; Xian, W.; Guaqueta, C.
The aim for deterministic control of the interactions between macroions in aqueous media has motivated widespread experimental and theoretical work. Although it has been well established that like-charged macromolecules can aggregate under the influence of oppositely charged condensing agents, the specific conditions for the stability of such aggregates can only be determined empirically. We examine these conditions, which involve an interplay of electrostatic and osmotic effects, by using a well defined model system composed of F-actin, an anionic rod-like polyelectrolyte, and lysozyme, a cationic globular protein with a charge that can be genetically modified. The structure and stability of actin-lysozymemore » complexes for different lysozyme charge mutants and salt concentrations are examined by using synchrotron x-ray scattering and molecular dynamics simulations. We provide evidence that supports a structural transition from columnar arrangements of F-actin held together by arrays of lysozyme at the threefold interstitial sites of the actin sublattice to marginally stable complexes in which lysozyme resides at twofold bridging sites between actin. The reduced stability arises from strongly reduced partitioning of salt between the complex and the surrounding solution. Changes in the stability of actin-lysozyme complexes are of biomedical interest because their formation has been reported to contribute to the persistence of airway infections in cystic fibrosis by sequestering antimicrobials such as lysozyme. We present x-ray microscopy results that argue for the existence of actin-lysozyme complexes in cystic fibrosis sputum and demonstrate that, for a wide range of salt conditions, charge-reduced lysozyme is not sequestered in ordered complexes while retaining its bacterial killing activity.« less
From Utility to Exploration: Teaching with Data to Develop Complexity Thinking
NASA Astrophysics Data System (ADS)
Lutz, T. M.
2016-12-01
Scientific, social, and economic advances are possible because we impose simplicity and predictability on natural and social systems that are inherently complex and uncertain. But, the work of Edgar Morin, Gregory Bateson and others, suggests that a failure to integrate the simple and the complex in our thinking (worldview) is a root cause of humanity's unsustainable existence. This diagnosis is challenging for scientists because we make the world visible through data: complex earth systems reduced to numbers. What we do with those numbers mirrors our approach to the world. Geoscience students gain much of their experience working with data from courses in statistics, physics, and chemistry as well as courses in their major. They learn to solve problems within a scientific context, and are led to see data analysis as a set of tools needed to make predictions and decisions (e.g., probabilities, regression equations). They learn that there are right ways of doing things and correct answers to be found. We do need such skills - but they reflect a simple and reductionist view. For example, the objective of a regression model may be to reduce a large number of data to a much smaller number of parameters to gain utility in prediction. However, this is the "wrong direction" to approach complexity. The mission of Geometrics, a combined undergraduate & graduate course (ESS 321/521), at West Chester University is to seek ways to meaningfully reveal complexity (within the limitations of the data) and to understand data differently. The aim is to create multiple, possibly divergent, views of data sets to create a sense of richness and depth. This presentation will give examples of heuristic models, exploratory methods (e.g., moving average and kernel modeling; ensemble simulation) and visualizations (data slicing, conditioning, and rotation). Excel programs used in the course are constructed to develop a sense of playfulness and freedom in the students' approach to data, and they open up an often neglected side of scientific methods: abductive reasoning, and the formation of hypotheses that recognize complexity.
Optimizing Automatic Deployment Using Non-functional Requirement Annotations
NASA Astrophysics Data System (ADS)
Kugele, Stefan; Haberl, Wolfgang; Tautschnig, Michael; Wechs, Martin
Model-driven development has become common practice in design of safety-critical real-time systems. High-level modeling constructs help to reduce the overall system complexity apparent to developers. This abstraction caters for fewer implementation errors in the resulting systems. In order to retain correctness of the model down to the software executed on a concrete platform, human faults during implementation must be avoided. This calls for an automatic, unattended deployment process including allocation, scheduling, and platform configuration.
McKenna, James; Kapfhamer, David; Kinchen, Jason M; Wasek, Brandi; Dunworth, Matthew; Murray-Stewart, Tracy; Bottiglieri, Teodoro; Casero, Robert A; Gambello, Michael J
2018-06-15
Tuberous sclerosis complex (TSC) is an autosomal dominant neurodevelopmental disorder and the quintessential disorder of mechanistic Target of Rapamycin Complex 1 (mTORC1) dysregulation. Loss of either causative gene, TSC1 or TSC2, leads to constitutive mTORC1 kinase activation and a pathologically anabolic state of macromolecular biosynthesis. Little is known about the organ-specific metabolic reprogramming that occurs in TSC-affected organs. Using a mouse model of TSC in which Tsc2 is disrupted in radial glial precursors and their neuronal and glial descendants, we performed an unbiased metabolomic analysis of hippocampi to identify Tsc2-dependent metabolic changes. Significant metabolic reprogramming was found in well-established pathways associated with mTORC1 activation, including redox homeostasis, glutamine/tricarboxylic acid cycle, pentose and nucleotide metabolism. Changes in two novel pathways were identified: transmethylation and polyamine metabolism. Changes in transmethylation included reduced methionine, cystathionine, S-adenosylmethionine (SAM-the major methyl donor), reduced SAM/S-adenosylhomocysteine ratio (cellular methylation potential), and elevated betaine, an alternative methyl donor. These changes were associated with alterations in SAM-dependent methylation pathways and expression of the enzymes methionine adenosyltransferase 2A and cystathionine beta synthase. We also found increased levels of the polyamine putrescine due to increased activity of ornithine decarboxylase, the rate-determining enzyme in polyamine synthesis. Treatment of Tsc2+/- mice with the ornithine decarboxylase inhibitor α-difluoromethylornithine, to reduce putrescine synthesis dose-dependently reduced hippocampal astrogliosis. These data establish roles for SAM-dependent methylation reactions and polyamine metabolism in TSC neuropathology. Importantly, both pathways are amenable to nutritional or pharmacologic therapy.
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1985-01-01
The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system is emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis porcedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction technbiques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling gives economical yet accurate analyses of complex structural systems.
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
Reduced-order modeling for hyperthermia: an extended balanced-realization-based approach.
Mattingly, M; Bailey, E A; Dutton, A W; Roemer, R B; Devasia, S
1998-09-01
Accurate thermal models are needed in hyperthermia cancer treatments for such tasks as actuator and sensor placement design, parameter estimation, and feedback temperature control. The complexity of the human body produces full-order models which are too large for effective execution of these tasks, making use of reduced-order models necessary. However, standard balanced-realization (SBR)-based model reduction techniques require a priori knowledge of the particular placement of actuators and sensors for model reduction. Since placement design is intractable (computationally) on the full-order models, SBR techniques must use ad hoc placements. To alleviate this problem, an extended balanced-realization (EBR)-based model-order reduction approach is presented. The new technique allows model order reduction to be performed over all possible placement designs and does not require ad hoc placement designs. It is shown that models obtained using the EBR method are more robust to intratreatment changes in the placement of the applied power field than those models obtained using the SBR method.
Analysis of a dynamic model of guard cell signaling reveals the stability of signal propagation
NASA Astrophysics Data System (ADS)
Gan, Xiao; Albert, RéKa
Analyzing the long-term behaviors (attractors) of dynamic models of biological systems can provide valuable insight into biological phenotypes and their stability. We identified the long-term behaviors of a multi-level, 70-node discrete dynamic model of the stomatal opening process in plants. We reduce the model's huge state space by reducing unregulated nodes and simple mediator nodes, and by simplifying the regulatory functions of selected nodes while keeping the model consistent with experimental observations. We perform attractor analysis on the resulting 32-node reduced model by two methods: 1. converting it into a Boolean model, then applying two attractor-finding algorithms; 2. theoretical analysis of the regulatory functions. We conclude that all nodes except two in the reduced model have a single attractor; and only two nodes can admit oscillations. The multistability or oscillations do not affect the stomatal opening level in any situation. This conclusion applies to the original model as well in all the biologically meaningful cases. We further demonstrate the robustness of signal propagation by showing that a large percentage of single-node knockouts does not affect the stomatal opening level. Thus, we conclude that the complex structure of this signal transduction network provides multiple information propagation pathways while not allowing extensive multistability or oscillations, resulting in robust signal propagation. Our innovative combination of methods offers a promising way to analyze multi-level models.
NASA Astrophysics Data System (ADS)
French, Jon; Payo, Andres; Murray, Brad; Orford, Julian; Eliot, Matt; Cowell, Peter
2016-03-01
Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called 'reduced complexity' models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. However, there is little consensus on exactly what constitutes a reduced complexity model and the term itself is both misleading and, arguably, unhelpful. Accordingly, we synthesise a set of requirements for what might be termed 'appropriate complexity modelling' of quantitative coastal morphological change at scales commensurate with contemporary management and policy-making requirements: 1) The system being studied must be bounded with reference to the time and space scales at which behaviours of interest emerge and/or scientific or management problems arise; 2) model complexity and comprehensiveness must be appropriate to the problem at hand; 3) modellers should seek a priori insights into what kind of behaviours are likely to be evident at the scale of interest and the extent to which the behavioural validity of a model may be constrained by its underlying assumptions and its comprehensiveness; 4) informed by qualitative insights into likely dynamic behaviour, models should then be formulated with a view to resolving critical state changes; and 5) meso-scale modelling of coastal morphological change should reflect critically on the role of modelling and its relation to the observable world.
Active Learning of Classification Models with Likert-Scale Feedback.
Xue, Yanbing; Hauskrecht, Milos
2017-01-01
Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
Active Learning of Classification Models with Likert-Scale Feedback
Xue, Yanbing; Hauskrecht, Milos
2017-01-01
Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone. PMID:28979827
Learning reduced kinetic Monte Carlo models of complex chemistry from molecular dynamics.
Yang, Qian; Sing-Long, Carlos A; Reed, Evan J
2017-08-01
We propose a novel statistical learning framework for automatically and efficiently building reduced kinetic Monte Carlo (KMC) models of large-scale elementary reaction networks from data generated by a single or few molecular dynamics simulations (MD). Existing approaches for identifying species and reactions from molecular dynamics typically use bond length and duration criteria, where bond duration is a fixed parameter motivated by an understanding of bond vibrational frequencies. In contrast, we show that for highly reactive systems, bond duration should be a model parameter that is chosen to maximize the predictive power of the resulting statistical model. We demonstrate our method on a high temperature, high pressure system of reacting liquid methane, and show that the learned KMC model is able to extrapolate more than an order of magnitude in time for key molecules. Additionally, our KMC model of elementary reactions enables us to isolate the most important set of reactions governing the behavior of key molecules found in the MD simulation. We develop a new data-driven algorithm to reduce the chemical reaction network which can be solved either as an integer program or efficiently using L1 regularization, and compare our results with simple count-based reduction. For our liquid methane system, we discover that rare reactions do not play a significant role in the system, and find that less than 7% of the approximately 2000 reactions observed from molecular dynamics are necessary to reproduce the molecular concentration over time of methane. The framework described in this work paves the way towards a genomic approach to studying complex chemical systems, where expensive MD simulation data can be reused to contribute to an increasingly large and accurate genome of elementary reactions and rates.
Learning reduced kinetic Monte Carlo models of complex chemistry from molecular dynamics
Sing-Long, Carlos A.
2017-01-01
We propose a novel statistical learning framework for automatically and efficiently building reduced kinetic Monte Carlo (KMC) models of large-scale elementary reaction networks from data generated by a single or few molecular dynamics simulations (MD). Existing approaches for identifying species and reactions from molecular dynamics typically use bond length and duration criteria, where bond duration is a fixed parameter motivated by an understanding of bond vibrational frequencies. In contrast, we show that for highly reactive systems, bond duration should be a model parameter that is chosen to maximize the predictive power of the resulting statistical model. We demonstrate our method on a high temperature, high pressure system of reacting liquid methane, and show that the learned KMC model is able to extrapolate more than an order of magnitude in time for key molecules. Additionally, our KMC model of elementary reactions enables us to isolate the most important set of reactions governing the behavior of key molecules found in the MD simulation. We develop a new data-driven algorithm to reduce the chemical reaction network which can be solved either as an integer program or efficiently using L1 regularization, and compare our results with simple count-based reduction. For our liquid methane system, we discover that rare reactions do not play a significant role in the system, and find that less than 7% of the approximately 2000 reactions observed from molecular dynamics are necessary to reproduce the molecular concentration over time of methane. The framework described in this work paves the way towards a genomic approach to studying complex chemical systems, where expensive MD simulation data can be reused to contribute to an increasingly large and accurate genome of elementary reactions and rates. PMID:28989618
Learning reduced kinetic Monte Carlo models of complex chemistry from molecular dynamics
Yang, Qian; Sing-Long, Carlos A.; Reed, Evan J.
2017-06-19
Here, we propose a novel statistical learning framework for automatically and efficiently building reduced kinetic Monte Carlo (KMC) models of large-scale elementary reaction networks from data generated by a single or few molecular dynamics simulations (MD). Existing approaches for identifying species and reactions from molecular dynamics typically use bond length and duration criteria, where bond duration is a fixed parameter motivated by an understanding of bond vibrational frequencies. Conversely, we show that for highly reactive systems, bond duration should be a model parameter that is chosen to maximize the predictive power of the resulting statistical model. We demonstrate our methodmore » on a high temperature, high pressure system of reacting liquid methane, and show that the learned KMC model is able to extrapolate more than an order of magnitude in time for key molecules. Additionally, our KMC model of elementary reactions enables us to isolate the most important set of reactions governing the behavior of key molecules found in the MD simulation. We develop a new data-driven algorithm to reduce the chemical reaction network which can be solved either as an integer program or efficiently using L1 regularization, and compare our results with simple count-based reduction. For our liquid methane system, we discover that rare reactions do not play a significant role in the system, and find that less than 7% of the approximately 2000 reactions observed from molecular dynamics are necessary to reproduce the molecular concentration over time of methane. Furthermore, we describe a framework in this work that paves the way towards a genomic approach to studying complex chemical systems, where expensive MD simulation data can be reused to contribute to an increasingly large and accurate genome of elementary reactions and rates.« less
Wakeland, Wayne; Nielsen, Alexandra; Schmidt, Teresa D; McCarty, Dennis; Webster, Lynn R; Fitzgerald, John; Haddox, J David
2013-10-01
Three educational interventions were simulated in a system dynamics model of the medical use, trafficking, and nonmedical use of pharmaceutical opioids. The study relied on secondary data obtained in the literature for the period of 1995 to 2008 as well as expert panel recommendations regarding model parameters and structure. The behavior of the resulting systems-level model was tested for fit against reference behavior data. After the base model was tested, logic to represent three educational interventions was added and the impact of each intervention on simulated overdose deaths was evaluated over a 7-year evaluation period, 2008 to 2015. Principal findings were that a prescriber education intervention not only reduced total overdose deaths in the model but also reduced the total number of persons who receive opioid analgesic therapy, medical user education not only reduced overdose deaths among medical users but also resulted in increased deaths from nonmedical use, and a "popularity" intervention sharply reduced overdose deaths among nonmedical users while having no effect on medical use. System dynamics modeling shows promise for evaluating potential interventions to ameliorate the adverse outcomes associated with the complex system surrounding the use of opioid analgesics to treat pain.
Modeling an alkaline electrolysis cell through reduced-order and loss-estimate approaches
NASA Astrophysics Data System (ADS)
Milewski, Jaroslaw; Guandalini, Giulio; Campanari, Stefano
2014-12-01
The paper presents two approaches to the mathematical modeling of an Alkaline Electrolyzer Cell. The presented models were compared and validated against available experimental results taken from a laboratory test and against literature data. The first modeling approach is based on the analysis of estimated losses due to the different phenomena occurring inside the electrolytic cell, and requires careful calibration of several specific parameters (e.g. those related to the electrochemical behavior of the electrodes) some of which could be hard to define. An alternative approach is based on a reduced-order equivalent circuit, resulting in only two fitting parameters (electrodes specific resistance and parasitic losses) and calculation of the internal electric resistance of the electrolyte. Both models yield satisfactory results with an average error limited below 3% vs. the considered experimental data and show the capability to describe with sufficient accuracy the different operating conditions of the electrolyzer; the reduced-order model could be preferred thanks to its simplicity for implementation within plant simulation tools dealing with complex systems, such as electrolyzers coupled with storage facilities and intermittent renewable energy sources.
Comparison of different models for non-invasive FFR estimation
NASA Astrophysics Data System (ADS)
Mirramezani, Mehran; Shadden, Shawn
2017-11-01
Coronary artery disease is a leading cause of death worldwide. Fractional flow reserve (FFR), derived from invasively measuring the pressure drop across a stenosis, is considered the gold standard to diagnose disease severity and need for treatment. Non-invasive estimation of FFR has gained recent attention for its potential to reduce patient risk and procedural cost versus invasive FFR measurement. Non-invasive FFR can be obtained by using image-based computational fluid dynamics to simulate blood flow and pressure in a patient-specific coronary model. However, 3D simulations require extensive effort for model construction and numerical computation, which limits their routine use. In this study we compare (ordered by increasing computational cost/complexity): reduced-order algebraic models of pressure drop across a stenosis; 1D, 2D (multiring) and 3D CFD models; as well as 3D FSI for the computation of FFR in idealized and patient-specific stenosis geometries. We demonstrate the ability of an appropriate reduced order algebraic model to closely predict FFR when compared to FFR from a full 3D simulation. This work was supported by the NIH, Grant No. R01-HL103419.
Lewis Acid Assisted Nitrate Reduction with Biomimetic Molybdenum Oxotransferase Complex.
Elrod, Lee Taylor; Kim, Eunsuk
2018-03-05
The reduction of nitrate (NO 3 - ) to nitrite (NO 2 - ) is of significant biological and environmental importance. While Mo IV (O) and Mo VI (O) 2 complexes that mimic the active site structure of nitrate reducing enzymes are prevalent, few of these model complexes can reduce nitrate to nitrite through oxygen atom transfer (OAT) chemistry. We present a novel strategy to induce nitrate reduction chemistry of a previously known catalyst Mo IV (O)(SN) 2 (2), where SN = bis(4- tert-butylphenyl)-2-pyridylmethanethiolate, that is otherwise incapable of achieving OAT with nitrate. Addition of nitrate with the Lewis acid Sc(OTf) 3 (OTf = trifluoromethanesulfonate) to 2 results in an immediate and clean conversion of 2 to Mo VI (O) 2 (SN) 2 (1). The Lewis acid additive further reacts with the OAT product, nitrite, to form N 2 O and O 2 . This work highlights the ability of Sc 3+ additives to expand the reactivity scope of an existing Mo IV (O) complex together with which Sc 3+ can convert nitrate to stable gaseous molecules.
Model-Based Thermal System Design Optimization for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-01-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Model-based thermal system design optimization for the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-10-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Program test objectives milestone 3. [Integrated Propulsion Technology Demonstrator
NASA Technical Reports Server (NTRS)
Gaynor, T. L.
1994-01-01
The following conclusions have been developed relative to propulsion system technology adequacy for efficient development and operation of recoverable and expendable launch vehicles (RLV and ELV) and the benefits which the integrated propulsion technology demonstrator will provide for enhancing technology: (1) Technology improvements relative to propulsion system design and operation can reduce program cost. Many features or improvement needs to enhance operability, reduce cost, and improve payload are identified. (2) The Integrated Propulsion Technology Demonstrator (IPTD) Program provides a means of resolving the majority of issues associated with improvement needs. (3) The IPTD will evaluate complex integration of vehicle and facility functions in fluid management and propulsion control systems, and provides an environment for validating improved mechanical and electrical components. (4) The IPTD provides a mechanism for investigating operational issues focusing on reducing manpower and time to perform various functions at the launch site. These efforts include model development, collection of data to validate subject models, and ultimate development of complex time line models. (5) The IPTD provides an engine test bed for tri/bi-propellant engine development firings which is representative of the actual vehicle environment. (6) The IPTD provides for only a limited multiengine configuration integration environment for RLV. Multiengine efforts may be simulated for a number of subsystems and a number of subsystems are relatively independent of the multiengine influences.
Sparsity enabled cluster reduced-order models for control
NASA Astrophysics Data System (ADS)
Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.
2018-01-01
Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.
Gloaguen, Frederic
2016-01-19
Synthetic models of the active site of iron-iron hydrogenases are currently the subjects of numerous studies aimed at developing H2-production catalysts based on cheap and abundant materials. In this context, the present report offers an electrochemist's view of the catalysis of proton reduction by simple binuclear iron(I) thiolate complexes. Although these complexes probably do not follow a biocatalytic pathway, we analyze and discuss the interplay between the reduction potential and basicity and how these antagonist properties impact the mechanisms of proton-coupled electron transfer to the metal centers. This question is central to any consideration of the activity at the molecular level of hydrogenases and related enzymes. In a second part, special attention is paid to iron thiolate complexes holding rigid and unsaturated bridging ligands. The complexes that enjoy mild reduction potentials and stabilized reduced forms are promising iron-based catalysts for the photodriven evolution of H2 in organic solvents and, more importantly, in water.
Ascorbate Efflux as a New Strategy for Iron Reduction and Transport in Plants*
Grillet, Louis; Ouerdane, Laurent; Flis, Paulina; Hoang, Minh Thi Thanh; Isaure, Marie-Pierre; Lobinski, Ryszard; Curie, Catherine; Mari, Stéphane
2014-01-01
Iron (Fe) is essential for virtually all living organisms. The identification of the chemical forms of iron (the speciation) circulating in and between cells is crucial to further understand the mechanisms of iron delivery to its final targets. Here we analyzed how iron is transported to the seeds by the chemical identification of iron complexes that are delivered to embryos, followed by the biochemical characterization of the transport of these complexes by the embryo, using the pea (Pisum sativum) as a model species. We have found that iron circulates as ferric complexes with citrate and malate (Fe(III)3Cit2Mal2, Fe(III)3Cit3Mal1, Fe(III)Cit2). Because dicotyledonous plants only transport ferrous iron, we checked whether embryos were capable of reducing iron of these complexes. Indeed, embryos did express a constitutively high ferric reduction activity. Surprisingly, iron(III) reduction is not catalyzed by the expected membrane-bound ferric reductase. Instead, embryos efflux high amounts of ascorbate that chemically reduce iron(III) from citrate-malate complexes. In vitro transport experiments on isolated embryos using radiolabeled 55Fe demonstrated that this ascorbate-mediated reduction is an obligatory step for the uptake of iron(II). Moreover, the ascorbate efflux activity was also measured in Arabidopsis embryos, suggesting that this new iron transport system may be generic to dicotyledonous plants. Finally, in embryos of the ascorbate-deficient mutants vtc2-4, vtc5-1, and vtc5-2, the reducing activity and the iron concentration were reduced significantly. Taken together, our results identified a new iron transport mechanism in plants that could play a major role to control iron loading in seeds. PMID:24347170
Ascorbate efflux as a new strategy for iron reduction and transport in plants.
Grillet, Louis; Ouerdane, Laurent; Flis, Paulina; Hoang, Minh Thi Thanh; Isaure, Marie-Pierre; Lobinski, Ryszard; Curie, Catherine; Mari, Stéphane
2014-01-31
Iron (Fe) is essential for virtually all living organisms. The identification of the chemical forms of iron (the speciation) circulating in and between cells is crucial to further understand the mechanisms of iron delivery to its final targets. Here we analyzed how iron is transported to the seeds by the chemical identification of iron complexes that are delivered to embryos, followed by the biochemical characterization of the transport of these complexes by the embryo, using the pea (Pisum sativum) as a model species. We have found that iron circulates as ferric complexes with citrate and malate (Fe(III)3Cit2Mal2, Fe(III)3Cit3Mal1, Fe(III)Cit2). Because dicotyledonous plants only transport ferrous iron, we checked whether embryos were capable of reducing iron of these complexes. Indeed, embryos did express a constitutively high ferric reduction activity. Surprisingly, iron(III) reduction is not catalyzed by the expected membrane-bound ferric reductase. Instead, embryos efflux high amounts of ascorbate that chemically reduce iron(III) from citrate-malate complexes. In vitro transport experiments on isolated embryos using radiolabeled (55)Fe demonstrated that this ascorbate-mediated reduction is an obligatory step for the uptake of iron(II). Moreover, the ascorbate efflux activity was also measured in Arabidopsis embryos, suggesting that this new iron transport system may be generic to dicotyledonous plants. Finally, in embryos of the ascorbate-deficient mutants vtc2-4, vtc5-1, and vtc5-2, the reducing activity and the iron concentration were reduced significantly. Taken together, our results identified a new iron transport mechanism in plants that could play a major role to control iron loading in seeds.
The formation and study of titanium, zirconium, and hafnium complexes
NASA Technical Reports Server (NTRS)
Wilson, Bobby; Sarin, Sam; Smith, Laverne; Wilson, Melanie
1989-01-01
Research involves the preparation and characterization of a series of Ti, Zr, Hf, TiO, and HfO complexes using the poly(pyrazole) borates as ligands. The study will provide increased understanding of the decomposition of these coordination compounds which may lead to the production of molecular oxygen on the Moon from lunar materials such as ilmenite and rutile. The model compounds are investigated under reducing conditions of molecular hydrogen by use of a high temperature/pressure stainless steel autoclave reactor and by thermogravimetric analysis.
A Novel DEM Approach to Simulate Block Propagation on Forested Slopes
NASA Astrophysics Data System (ADS)
Toe, David; Bourrier, Franck; Dorren, Luuk; Berger, Frédéric
2018-03-01
In order to model rockfall on forested slopes, we developed a trajectory rockfall model based on the discrete element method (DEM). This model is able to take the complex mechanical processes at work during an impact into account (large deformations, complex contact conditions) and can explicitly simulate block/soil, block/tree contacts as well as contacts between neighbouring trees. In this paper, we describe the DEM model developed and we use it to assess the protective effect of different types of forest. In addition, we compared it with a more classical rockfall simulation model. The results highlight that forests can significantly reduce rockfall hazard and that the spatial structure of coppice forests has to be taken into account in rockfall simulations in order to avoid overestimating the protective role of these forest structures against rockfall hazard. In addition, the protective role of the forests is mainly influenced by the basal area. Finally, the advantages and limitations of the DEM model were compared with classical rockfall modelling approaches.
Modelling the influence of sensory dynamics on linear and nonlinear driver steering control
NASA Astrophysics Data System (ADS)
Nash, C. J.; Cole, D. J.
2018-05-01
A recent review of the literature has indicated that sensory dynamics play an important role in the driver-vehicle steering task, motivating the design of a new driver model incorporating human sensory systems. This paper presents a full derivation of the linear driver model developed in previous work, and extends the model to control a vehicle with nonlinear tyres. Various nonlinear controllers and state estimators are compared with different approximations of the true system dynamics. The model simulation time is found to increase significantly with the complexity of the controller and state estimator. In general the more complex controllers perform best, although with certain vehicle and tyre models linearised controllers perform as well as a full nonlinear optimisation. Various extended Kalman filters give similar results, although the driver's sensory dynamics reduce control performance compared with full state feedback. The new model could be used to design vehicle systems which interact more naturally and safely with a human driver.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Liu, Z.; Zhang, S.
Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parametermore » estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.« less
Carlo, Michael A; Riddell, Eric A; Levy, Ofir; Sears, Michael W
2018-01-01
The capacity to tolerate climate change often varies across ontogeny in organisms with complex life cycles. Recently developed species distribution models incorporate traits across life stages; however, these life-cycle models primarily evaluate effects of lethal change. Here, we examine impacts of recurrent sublethal warming on development and survival in ecological projections of climate change. We reared lizard embryos in the laboratory under temperature cycles that simulated contemporary conditions and warming scenarios. We also artificially warmed natural nests to mimic laboratory treatments. In both cases, recurrent sublethal warming decreased embryonic survival and hatchling sizes. Incorporating survivorship results into a mechanistic species distribution model reduced annual survival by up to 24% compared to models that did not incorporate sublethal warming. Contrary to models without sublethal effects, our model suggests that modest increases in developmental temperatures influence species ranges due to effects on survivorship. © 2017 John Wiley & Sons Ltd/CNRS.
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling the medicine inventory cost. Due to uncertain demand it is difficult to make accurate decision on procurement volume. As to the biomedicine sensitive to time and season demand, the uncertain demand fitted by the fuzzy mathematics method is obviously better than general random distribution functions. To establish a novel medical information management and decision model for uncertain demand optimization. A novel optimal management and decision model under uncertain demand has been presented based on fuzzy mathematics and a new comprehensive improved particle swarm algorithm. The optimal management and decision model can effectively reduce the medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm to improve the Fuzzy interference and hence effectively reduce the calculation complexity of the optimal management and decision model. Therefore the new model can be used for accurate decision on procurement volume under uncertain demand.
Modeling Composite Laminate Crushing for Crash Analysis
NASA Technical Reports Server (NTRS)
Fleming, David C.; Jones, Lisa (Technical Monitor)
2002-01-01
Crash modeling of composite structures remains limited in application and has not been effectively demonstrated as a predictive tool. While the global response of composite structures may be well modeled, when composite structures act as energy-absorbing members through direct laminate crushing the modeling accuracy is greatly reduced. The most efficient composite energy absorbing structures, in terms of energy absorbed per unit mass, are those that absorb energy through a complex progressive crushing response in which fiber and matrix fractures on a small scale dominate the behavior. Such failure modes simultaneously include delamination of plies, failure of the matrix to produce fiber bundles, and subsequent failure of fiber bundles either in bending or in shear. In addition, the response may include the significant action of friction, both internally (between delaminated plies or fiber bundles) or externally (between the laminate and the crushing surface). A figure shows the crushing damage observed in a fiberglass composite tube specimen, illustrating the complexity of the response. To achieve a finite element model of such complex behavior is an extremely challenging problem. A practical crushing model based on detailed modeling of the physical mechanisms of crushing behavior is not expected in the foreseeable future. The present research describes attempts to model composite crushing behavior using a novel hybrid modeling procedure. Experimental testing is done is support of the modeling efforts, and a test specimen is developed to provide data for validating laminate crushing models.
NASA Astrophysics Data System (ADS)
McCormack, Kimberly A.; Hesse, Marc A.
2018-04-01
We model the subsurface hydrologic response to the 7.6 Mw subduction zone earthquake that occurred on the plate interface beneath the Nicoya peninsula in Costa Rica on September 5, 2012. The regional-scale poroelastic model of the overlying plate integrates seismologic, geodetic and hydrologic data sets to predict the post-seismic poroelastic response. A representative two-dimensional model shows that thrust earthquakes with a slip width less than a third of their depth produce complex multi-lobed pressure perturbations in the shallow subsurface. This leads to multiple poroelastic relaxation timescales that may overlap with the longer viscoelastic timescales. In the three-dimensional model, the complex slip distribution of 2012 Nicoya event and its small width to depth ratio lead to a pore pressure distribution comprising multiple trench parallel ridges of high and low pressure. This leads to complex groundwater flow patterns, non-monotonic variations in predicted well water levels, and poroelastic relaxation on multiple time scales. The model also predicts significant tectonically driven submarine groundwater discharge off-shore. In the weeks following the earthquake, the predicted net submarine groundwater discharge in the study area increases, creating a 100 fold increase in net discharge relative to topography-driven flow over the first 30 days. Our model suggests the hydrological response on land is more complex than typically acknowledged in tectonic studies. This may complicate the interpretation of transient post-seismic surface deformations. Combined tectonic-hydrological observation networks have the potential to reduce such ambiguities.
Tamaki, Yusuke; Morimoto, Tatsuki; Koike, Kazuhide; Ishitani, Osamu
2012-01-01
Previously undescribed supramolecules constructed with various ratios of two kinds of Ru(II) complexes—a photosensitizer and a catalyst—were synthesized. These complexes can photocatalyze the reduction of CO2 to formic acid with high selectivity and durability using a wide range of wavelengths of visible light and NADH model compounds as electron donors in a mixed solution of dimethylformamide–triethanolamine. Using a higher ratio of the photosensitizer unit to the catalyst unit led to a higher yield of formic acid. In particular, of the reported photocatalysts, a trinuclear complex with two photosensitizer units and one catalyst unit photocatalyzed CO2 reduction (ΦHCOOH = 0.061, TONHCOOH = 671) with the fastest reaction rate (TOFHCOOH = 11.6 min-1). On the other hand, photocatalyses of a mixed system containing two kinds of model mononuclear Ru(II) complexes, and supramolecules with a higher ratio of the catalyst unit were much less efficient, and black oligomers and polymers were produced from the Ru complexes during photocatalytic reactions, which reduced the yield of formic acid. The photocatalytic formation of formic acid using the supramolecules described herein proceeds via two sequential processes: the photochemical reduction of the photosensitizer unit by NADH model compounds and intramolecular electron transfer to the catalyst unit. PMID:22908243
NASA Astrophysics Data System (ADS)
Zhang, Qi; Bodony, Daniel
2014-11-01
Commercial jet aircraft generate undesirable noise from several sources, with the engines being the most dominant sources at take-off and major contributors at all other stages of flight. Acoustic liners, which are perforated sheets of metal or composite mounted within the engine, have been an effective means of reducing internal engine noise from the fan, compressor, combustor, and turbine but their performance suffers when subjected to a turbulent grazing flow or to high-amplitude incident sound due to poorly understood interactions between the liner orifices and the exterior flow. Through the use of direct numerical simulations, the flow-orifice interaction is examined numerically, quantified, and modeled over a range of conditions that includes current and envisioned uses of acoustic liners and with detail that exceeds experimental capabilities. A new time-domain model of acoustic liners is developed that extends currently-available reduced-order models to more complex flow conditions but is still efficient for use at the design stage.
Network community-based model reduction for vortical flows
NASA Astrophysics Data System (ADS)
Gopalakrishnan Meena, Muralikrishnan; Nair, Aditya G.; Taira, Kunihiko
2018-06-01
A network community-based reduced-order model is developed to capture key interactions among coherent structures in high-dimensional unsteady vortical flows. The present approach is data-inspired and founded on network-theoretic techniques to identify important vortical communities that are comprised of vortical elements that share similar dynamical behavior. The overall interaction-based physics of the high-dimensional flow field is distilled into the vortical community centroids, considerably reducing the system dimension. Taking advantage of these vortical interactions, the proposed methodology is applied to formulate reduced-order models for the inter-community dynamics of vortical flows, and predict lift and drag forces on bodies in wake flows. We demonstrate the capabilities of these models by accurately capturing the macroscopic dynamics of a collection of discrete point vortices, and the complex unsteady aerodynamic forces on a circular cylinder and an airfoil with a Gurney flap. The present formulation is found to be robust against simulated experimental noise and turbulence due to its integrating nature of the system reduction.
A systems-based approach for integrated design of materials, products and design process chains
NASA Astrophysics Data System (ADS)
Panchal, Jitesh H.; Choi, Hae-Jin; Allen, Janet K.; McDowell, David L.; Mistree, Farrokh
2007-12-01
The concurrent design of materials and products provides designers with flexibility to achieve design objectives that were not previously accessible. However, the improved flexibility comes at a cost of increased complexity of the design process chains and the materials simulation models used for executing the design chains. Efforts to reduce the complexity generally result in increased uncertainty. We contend that a systems based approach is essential for managing both the complexity and the uncertainty in design process chains and simulation models in concurrent material and product design. Our approach is based on simplifying the design process chains systematically such that the resulting uncertainty does not significantly affect the overall system performance. Similarly, instead of striving for accurate models for multiscale systems (that are inherently complex), we rely on making design decisions that are robust to uncertainties in the models. Accordingly, we pursue hierarchical modeling in the context of design of multiscale systems. In this paper our focus is on design process chains. We present a systems based approach, premised on the assumption that complex systems can be designed efficiently by managing the complexity of design process chains. The approach relies on (a) the use of reusable interaction patterns to model design process chains, and (b) consideration of design process decisions using value-of-information based metrics. The approach is illustrated using a Multifunctional Energetic Structural Material (MESM) design example. Energetic materials store considerable energy which can be released through shock-induced detonation; conventionally, they are not engineered for strength properties. The design objectives for the MESM in this paper include both sufficient strength and energy release characteristics. The design is carried out by using models at different length and time scales that simulate different aspects of the system. Finally, by applying the method to the MESM design problem, we show that the integrated design of materials and products can be carried out more efficiently by explicitly accounting for design process decisions with the hierarchy of models.
Coding Response to a Case-Mix Measurement System Based on Multiple Diagnoses
Preyra, Colin
2004-01-01
Objective To examine the hospital coding response to a payment model using a case-mix measurement system based on multiple diagnoses and the resulting impact on a hospital cost model. Data Sources Financial, clinical, and supplementary data for all Ontario short stay hospitals from years 1997 to 2002. Study Design Disaggregated trends in hospital case-mix growth are examined for five years following the adoption of an inpatient classification system making extensive use of combinations of secondary diagnoses. Hospital case mix is decomposed into base and complexity components. The longitudinal effects of coding variation on a standard hospital payment model are examined in terms of payment accuracy and impact on adjustment factors. Principal Findings Introduction of the refined case-mix system provided incentives for hospitals to increase reporting of secondary diagnoses and resulted in growth in highest complexity cases that were not matched by increased resource use over time. Despite a pronounced coding response on the part of hospitals, the increase in measured complexity and case mix did not reduce the unexplained variation in hospital unit cost nor did it reduce the reliance on the teaching adjustment factor, a potential proxy for case mix. The main implication was changes in the size and distribution of predicted hospital operating costs. Conclusions Jurisdictions introducing extensive refinements to standard diagnostic related group (DRG)-type payment systems should consider the effects of induced changes to hospital coding practices. Assessing model performance should include analysis of the robustness of classification systems to hospital-level variation in coding practices. Unanticipated coding effects imply that case-mix models hypothesized to perform well ex ante may not meet expectations ex post. PMID:15230940
(Pre-) calibration of a Reduced Complexity Model of the Antarctic Contribution to Sea-level Changes
NASA Astrophysics Data System (ADS)
Ruckert, K. L.; Guan, Y.; Shaffer, G.; Forest, C. E.; Keller, K.
2015-12-01
(Pre-) calibration of a Reduced Complexity Model of the Antarctic Contribution to Sea-level ChangesKelsey L. Ruckert1*, Yawen Guan2, Chris E. Forest1,3,7, Gary Shaffer 4,5,6, and Klaus Keller1,7,81 Department of Geosciences, The Pennsylvania State University, University Park, Pennsylvania, USA 2 Department of Statistics, The Pennsylvania State University, University Park, Pennsylvania, USA 3 Department of Meteorology, The Pennsylvania State University, University Park, Pennsylvania, USA 4 GAIA_Antarctica, University of Magallanes, Punta Arenas, Chile 5 Center for Advanced Studies in Arid Zones, La Serena, Chile 6 Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark 7 Earth and Environmental Systems Institute, The Pennsylvania State University, University Park, Pennsylvania, USA 8 Department of Engineering and Public Policy, Carnegie Mellon University, Pittsburgh, Pennsylvania, USA * Corresponding author. E-mail klr324@psu.eduUnderstanding and projecting future sea-level changes poses nontrivial challenges. Sea-level changes are driven primarily by changes in the density of seawater as well as changes in the size of glaciers and ice sheets. Previous studies have demonstrated that a key source of uncertainties surrounding sea-level projections is the response of the Antarctic ice sheet to warming temperatures. Here we calibrate a previously published and relatively simple model of the Antarctic ice sheet over a hindcast period from the last interglacial period to the present. We apply and compare a range of (pre-) calibration methods, including a Bayesian approach that accounts for heteroskedasticity. We compare the model hindcasts and projections for different levels of model complexity and calibration methods. We compare the projections with the upper bounds from previous studies and find our projections have a narrower range in 2100. Furthermore we discuss the implications for the design of climate risk management strategies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polettini, M., E-mail: matteo.polettini@uni.lu; Wachtel, A., E-mail: artur.wachtel@uni.lu; Esposito, M., E-mail: massimilano.esposito@uni.lu
We study the effect of intrinsic noise on the thermodynamic balance of complex chemical networks subtending cellular metabolism and gene regulation. A topological network property called deficiency, known to determine the possibility of complex behavior such as multistability and oscillations, is shown to also characterize the entropic balance. In particular, when deficiency is zero the average stochastic dissipation rate equals that of the corresponding deterministic model, where correlations are disregarded. In fact, dissipation can be reduced by the effect of noise, as occurs in a toy model of metabolism that we employ to illustrate our findings. This phenomenon highlights thatmore » there is a close interplay between deficiency and the activation of new dissipative pathways at low molecule numbers.« less
The QuakeSim Project: Numerical Simulations for Active Tectonic Processes
NASA Technical Reports Server (NTRS)
Donnellan, Andrea; Parker, Jay; Lyzenga, Greg; Granat, Robert; Fox, Geoffrey; Pierce, Marlon; Rundle, John; McLeod, Dennis; Grant, Lisa; Tullis, Terry
2004-01-01
In order to develop a solid earth science framework for understanding and studying of active tectonic and earthquake processes, this task develops simulation and analysis tools to study the physics of earthquakes using state-of-the art modeling, data manipulation, and pattern recognition technologies. We develop clearly defined accessible data formats and code protocols as inputs to the simulations. these are adapted to high-performance computers because the solid earth system is extremely complex and nonlinear resulting in computationally intensive problems with millions of unknowns. With these tools it will be possible to construct the more complex models and simulations necessary to develop hazard assessment systems critical for reducing future losses from major earthquakes.
CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.
2003-01-01
A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified nnd tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.
Networks consolidation program: Maintenance and Operations (M&O) staffing estimates
NASA Technical Reports Server (NTRS)
Goodwin, J. P.
1981-01-01
The Mark IV-A consolidate deep space and high elliptical Earth orbiter (HEEO) missions tracking and implements centralized control and monitoring at the deep space communications complexes (DSCC). One of the objectives of the network design is to reduce maintenance and operations (M&O) costs. To determine if the system design meets this objective an M&O staffing model for Goldstone was developed which was used to estimate the staffing levels required to support the Mark IV-A configuration. The study was performed for the Goldstone complex and the program office translated these estimates for the overseas complexes to derive the network estimates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boros, Eszter; Srinivas, Raja; Kim, Hee -Kyung
Aqua ligands can undergo rapid internal rotation about the M-O bond. For magnetic resonance contrast agents, this rotation results in diminished relaxivity. Herein, we show that an intramolecular hydrogen bond to the aqua ligand can reduce this internal rotation and increase relaxivity. Molecular modeling was used to design a series of four Gd complexes capable of forming an intramolecular H-bond to the coordinated water ligand, and these complexes had anomalously high relaxivities compared to similar complexes lacking a H-bond acceptor. Molecular dynamics simulations supported the formation of a stable intramolecular H-bond, while alternative hypotheses that could explain the higher relaxivitymore » were systematically ruled out. Finally, intramolecular H-bonding represents a useful strategy to limit internal water rotational motion and increase relaxivity of Gd complexes.« less
Sparse intervertebral fence composition for 3D cervical vertebra segmentation
NASA Astrophysics Data System (ADS)
Liu, Xinxin; Yang, Jian; Song, Shuang; Cong, Weijian; Jiao, Peifeng; Song, Hong; Ai, Danni; Jiang, Yurong; Wang, Yongtian
2018-06-01
Statistical shape models are capable of extracting shape prior information, and are usually utilized to assist the task of segmentation of medical images. However, such models require large training datasets in the case of multi-object structures, and it also is difficult to achieve satisfactory results for complex shapes. This study proposed a novel statistical model for cervical vertebra segmentation, called sparse intervertebral fence composition (SiFC), which can reconstruct the boundary between adjacent vertebrae by modeling intervertebral fences. The complex shape of the cervical spine is replaced by a simple intervertebral fence, which considerably reduces the difficulty of cervical segmentation. The final segmentation results are obtained by using a 3D active contour deformation model without shape constraint, which substantially enhances the recognition capability of the proposed method for objects with complex shapes. The proposed segmentation framework is tested on a dataset with CT images from 20 patients. A quantitative comparison against corresponding reference vertebral segmentation yields an overall mean absolute surface distance of 0.70 mm and a dice similarity index of 95.47% for cervical vertebral segmentation. The experimental results show that the SiFC method achieves competitive cervical vertebral segmentation performances, and completely eliminates inter-process overlap.
NASA Astrophysics Data System (ADS)
Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.
2017-12-01
We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanisms, and where they are not capable, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.
Movement decoupling control for two-axis fast steering mirror
NASA Astrophysics Data System (ADS)
Wang, Rui; Qiao, Yongming; Lv, Tao
2017-02-01
Based on flexure hinge and piezoelectric actuator of two-axis fast steering mirror is a complex system with time varying, uncertain and strong coupling. It is extremely difficult to achieve high precision decoupling control with the traditional PID control method. The feedback error learning method was established an inverse hysteresis model which was based inner product dynamic neural network nonlinear and no-smooth for piezo-ceramic. In order to improve the actuator high precision, a method was proposed, which was based piezo-ceramic inverse model of two dynamic neural network adaptive control. The experiment result indicated that, compared with two neural network adaptive movement decoupling control algorithm, static relative error is reduced from 4.44% to 0.30% and coupling degree is reduced from 12.71% to 0.60%, while dynamic relative error is reduced from 13.92% to 2.85% and coupling degree is reduced from 2.63% to 1.17%.
International Land Model Benchmarking (ILAMB) Workshop Report, Technical Report DOE/SC-0186
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Forrest M.; Koven, Charles D.; Kappel-Aleks, Gretchen
2016-11-01
As Earth system models become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing uncertainties associated with projections of climate change during the remainder of the 21st century.
Charles H. Luce; David G. Tarboton; Erkan Istanbulluoglu; Robert T. Pack
2005-01-01
Rhodes [2005] brings up some excellent points in his comments on the work of Istanbulluoglu et al. [2004]. We appreciate the opportunity to respond because it is likely that other readers will also wonder how they can apply the relatively simple analysis to important policy questions. Models necessarily reduce the complexity of the problem to make it tractable and...
Biodiversity loss decreases parasite diversity: theory and patterns
Lafferty, Kevin D.
2012-01-01
Past models have suggested host–parasite coextinction could lead to linear, or concave down relationships between free-living species richness and parasite richness. I explored several models for the relationship between parasite richness and biodiversity loss. Life cycle complexity, low generality of parasites and sensitivity of hosts reduced the robustness of parasite species to the loss of free-living species diversity. Food-web complexity and the ordering of extinctions altered these relationships in unpredictable ways. Each disassembly of a food web resulted in a unique relationship between parasite richness and the richness of free-living species, because the extinction trajectory of parasites was sensitive to the order of extinctions of free-living species. However, the average of many disassemblies tended to approximate an analytical model. Parasites of specialist hosts and hosts higher on food chains were more likely to go extinct in food-web models. Furthermore, correlated extinctions between hosts and parasites (e.g. if parasites share a host with a specialist predator) led to steeper declines in parasite richness with biodiversity loss. In empirical food webs with random removals of free-living species, the relationship between free-living species richness and parasite richness was, on average, quasi-linear, suggesting biodiversity loss reduces parasite diversity more than previously thought.
Jasper, Micah N; Martin, Sheppard A; Oshiro, Wendy M; Ford, Jermaine; Bushnell, Philip J; El-Masri, Hisham
2016-03-15
People are often exposed to complex mixtures of environmental chemicals such as gasoline, tobacco smoke, water contaminants, or food additives. We developed an approach that applies chemical lumping methods to complex mixtures, in this case gasoline, based on biologically relevant parameters used in physiologically based pharmacokinetic (PBPK) modeling. Inhalation exposures were performed with rats to evaluate the performance of our PBPK model and chemical lumping method. There were 109 chemicals identified and quantified in the vapor in the chamber. The time-course toxicokinetic profiles of 10 target chemicals were also determined from blood samples collected during and following the in vivo experiments. A general PBPK model was used to compare the experimental data to the simulated values of blood concentration for 10 target chemicals with various numbers of lumps, iteratively increasing from 0 to 99. Large reductions in simulation error were gained by incorporating enzymatic chemical interactions, in comparison to simulating the individual chemicals separately. The error was further reduced by lumping the 99 nontarget chemicals. The same biologically based lumping approach can be used to simplify any complex mixture with tens, hundreds, or thousands of constituents.
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Hall, Joel; Karelse, Robert N.
2017-11-01
Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues related to real-world groundwater models such as computational expense, existing observation data, high-parameter dimension, etc. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion, and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. And, a heuristic methodology, based on the concept of the greedy algorithm, is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
A dynamic subgrid scale model for Large Eddy Simulations based on the Mori-Zwanzig formalism
NASA Astrophysics Data System (ADS)
Parish, Eric J.; Duraisamy, Karthik
2017-11-01
The development of reduced models for complex multiscale problems remains one of the principal challenges in computational physics. The optimal prediction framework of Chorin et al. [1], which is a reformulation of the Mori-Zwanzig (M-Z) formalism of non-equilibrium statistical mechanics, provides a framework for the development of mathematically-derived reduced models of dynamical systems. Several promising models have emerged from the optimal prediction community and have found application in molecular dynamics and turbulent flows. In this work, a new M-Z-based closure model that addresses some of the deficiencies of existing methods is developed. The model is constructed by exploiting similarities between two levels of coarse-graining via the Germano identity of fluid mechanics and by assuming that memory effects have a finite temporal support. The appeal of the proposed model, which will be referred to as the 'dynamic-MZ-τ' model, is that it is parameter-free and has a structural form imposed by the mathematics of the coarse-graining process (rather than the phenomenological assumptions made by the modeler, such as in classical subgrid scale models). To promote the applicability of M-Z models in general, two procedures are presented to compute the resulting model form, helping to bypass the tedious error-prone algebra that has proven to be a hindrance to the construction of M-Z-based models for complex dynamical systems. While the new formulation is applicable to the solution of general partial differential equations, demonstrations are presented in the context of Large Eddy Simulation closures for the Burgers equation, decaying homogeneous turbulence, and turbulent channel flow. The performance of the model and validity of the underlying assumptions are investigated in detail.
The kinetics of thermal generation of flavour.
Parker, Jane K
2013-01-01
Control and optimisation of flavour is the ultimate challenge for the food and flavour industry. The major route to flavour formation during thermal processing is the Maillard reaction, which is a complex cascade of interdependent reactions initiated by the reaction between a reducing sugar and an amino compound. The complexity of the reaction means that researchers turn to kinetic modelling in order to understand the control points of the reaction and to manipulate the flavour profile. Studies of the kinetics of flavour formation have developed over the past 30 years from single- response empirical models of binary aqueous systems to sophisticated multi-response models in food matrices, based on the underlying chemistry, with the power to predict the formation of some key aroma compounds. This paper discusses in detail the development of kinetic models of thermal generation of flavour and looks at the challenges involved in predicting flavour. Copyright © 2012 Society of Chemical Industry.
2009-01-01
Current care guidelines recommend glucose control (GC) in critically ill patients. To achieve GC, many ICUs have implemented a (nurse-based) protocol on paper. However, such protocols are often complex, time-consuming, and can cause iatrogenic hypoglycemia. Computerized glucose regulation protocols may improve patient safety, efficiency, and nurse compliance. Such computerized clinical decision support systems (Cuss) use more complex logic to provide an insulin infusion rate based on previous blood glucose levels and other parameters. A computerized CDSS for glucose control has the potential to reduce overall workload, reduce the chance of human cognitive failure, and improve glucose control. Several computer-assisted glucose regulation programs have been published recently. In order of increasing complexity, the three main types of algorithms used are computerized flowcharts, Proportional-Integral-Derivative (PID), and Model Predictive Control (MPC). PID is essentially a closed-loop feedback system, whereas MPC models the behavior of glucose and insulin in ICU patients. Although the best approach has not yet been determined, it should be noted that PID controllers are generally thought to be more robust than MPC systems. The computerized Cuss that are most likely to emerge are those that are fully a part of the routine workflow, use patient-specific characteristics and apply variable sampling intervals. PMID:19849827
Goel, Honey; Tiwary, Ashok K; Rana, Vikas
2011-01-01
The objective of the present work was to optimize the formulation of fast disintegrating tablets (FDTs) of ondansetron HCl containing novel superdisintegrants, possessing sufficient mechanical strength and disintegration time comparable to those containing crospovidone or croscarmellose sodium. The FDTs were formulated using a novel superdisintegrant (chitosan-alginate (1:1) interpolymer complex and chitin) to achieve a sweet tasting disintegrating system. The results revealed that chitin (5-20%) increased the porosity and decreased the DT of tablets. At higher concentrations chitin maintained tablet porosity even at 5.5 kg crushing strength. Ondansetron HCl was found to antagonize the wicking action of glycine. Further, evaluation of the mechanism of disintegration revealed that glycine transported the aqueous medium to different parts of the tablets while the chitosan-alginate complex swelled up due to transfer of moisture from glycine. This phenomenon resulted in breakage of the tablet within seconds. For preparing optimized FDTs, the reduced model equations generated from Box-Behnken design (BBD) were solved after substituting the known disintegration time of FDTs containing superdisintegrants in the reduced model equations. The results suggested that excipient system under investigation not only improved the disintegration time but also made it possible to prepare FDTs with higher crushing strength as compared to tablets containing known superdisintegrants.
Lagrue, E; Abert, B; Nadal, L; Tabone, L; Bodard, S; Medja, F; Lombes, A; Chalon, S; Castelnau, P
2009-06-01
The basal ganglia, which are interconnected in the striato-nigral dopaminergic network, are affected in several childhood diseases including Leigh syndrome (LS). LS is the most common mitochondrial disorder affecting children and usually arise from inhibition of the respiratory chain. This vulnerability is attributed to a particular susceptibility to energetic stress, with mitochondrial inhibition as a common pathogenic pathway. In this study we developed a LS model for neuroprotection trials in mice by using the complex I inhibitor MPTP. We first verified that MPTP significantly inhibits the mitochondrial complex I in the brain (p = 0.018). This model also reproduced the biochemical and pathological features of LS: MPTP increased plasmatic lactate levels (p = 0.023) and triggered basal ganglia degeneration, as evaluated through dopamine transporter (DAT) autoradiography, tyrosine hydroxylase (TH) immunohistochemistry, and dopamine dosage. Striatal DAT levels were markedly decreased after MPTP treatment (p = 0.003). TH immunoreactivity was reduced in the striatum and substantia nigra (p = 0.005), and striatal dopamine was significantly reduced (p < 0.01). Taken together, these results confirm that acute MPTP intoxication in young mice provides a reproducible pharmacological paradigm of LS, thus opening new avenues for neuroprotection research.
NASA Astrophysics Data System (ADS)
Squire, O. J.; Archibald, A. T.; Griffiths, P. T.; Jenkin, M. E.; Pyle, J. A.
2014-09-01
Isoprene is a precursor to tropospheric ozone, a key pollutant and greenhouse gas. Anthropogenic activity over the coming century is likely to cause large changes in atmospheric CO2 levels, climate and land use, all of which will alter the global vegetation distribution leading to changes in isoprene emissions. Previous studies have used global chemistry-climate models to assess how possible changes in climate and land use could affect isoprene emissions and hence tropospheric ozone. The chemistry of isoprene oxidation, which can alter the concentration of ozone, is highly complex, therefore it must be parameterised in these models. In this work we compare the effect of four different reduced isoprene chemical mechanisms, all currently used in Earth-system models, on tropospheric ozone. Using a box model we compare ozone in these reduced schemes to that in a more explicit scheme (the MCM) over a range of NOx and isoprene emissions, through the use of O3 isopleths. We find that there is some variability, especially at high isoprene emissions, caused by differences in isoprene-derived NOx reservoir species. A global model is then used to examine how the different reduced schemes respond to potential future changes in climate, isoprene emissions, anthropogenic emissions and land use change. We find that, particularly in isoprene rich regions, the response of the schemes varies considerably. The wide ranging response is due to differences in the types of peroxy radicals produced by isoprene oxidation, and their relative rates of reaction towards NO, leading to ozone formation, or HO2, leading to termination. Also important is the yield of isoprene-nitrates and peroxyacyl nitrate precursors from isoprene oxidation. Those schemes that produce less of these NOx reservoir species, tend to produce more ozone locally and less away from the source region. Additionally, by combining the emissions and O3 data from all of the global model integrations, we are able to construct isopleth plots comparable to those from the box model analysis. We find that the global and box model isopleths show good qualitative agreement, suggesting that comparing chemical mechanisms with a box model in this framework is a useful tool for assessing mechanistic performance in complex global models. We conclude that as the choice of reduced isoprene mechanism may alter both the magnitude and sign of the ozone response, how isoprene chemistry is parameterised in perturbation experiments such as these is a crucially important consideration. More measurements are needed to validate these reduced mechanisms especially in high-VOC, low-NOx environments.
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from an inadequate frequency resolution due to the time segment duration and the non-stationarity characteristics of the signals. Parametric or model-based estimators can give significant improvements in the time-frequency resolution at the expense of a higher computational complexity. This work describes an approach which implements in real-time a parametric spectral estimator method using genetic algorithms (GAs) in order to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by using the simplicity associated to GAs and exploiting its parallel characteristics. This will allow the implementation of higher order filters, increasing the spectrum resolution, and opening a greater scope for using more complex methods.
A cross-validation package driving Netica with python
Fienen, Michael N.; Plant, Nathaniel G.
2014-01-01
Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross validation is a technique to avoid overfitting resulting from overly complex BNs. Overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and read, rebuild, and learn BNs from data. Insights gained from cross-validation and implications on prediction versus description are illustrated with: a data-driven oceanographic application; and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than allowed by supporting data and overfitting incurs computational costs as well as causing a reduction in prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
Chromatic Image Analysis For Quantitative Thermal Mapping
NASA Technical Reports Server (NTRS)
Buck, Gregory M.
1995-01-01
Chromatic image analysis system (CIAS) developed for use in noncontact measurements of temperatures on aerothermodynamic models in hypersonic wind tunnels. Based on concept of temperature coupled to shift in color spectrum for optical measurement. Video camera images fluorescence emitted by phosphor-coated model at two wavelengths. Temperature map of model then computed from relative brightnesses in video images of model at those wavelengths. Eliminates need for intrusive, time-consuming, contact temperature measurements by gauges, making it possible to map temperatures on complex surfaces in timely manner and at reduced cost.
Feary, Simon
2009-01-01
As the development of complex manufacturing models and virtual companies become more prevalent in today's growing global markets, it is increasingly important to support the relationships between manufacturer and supplier. Utilising these relationships will ensure that supply chains operate more effectively and reduce costs, risks and time-to-market time frames, whilst maintaining product quality.
ENVIRONMENTAL CONSEQUENCES OF LAND USE CHANGE: ACCOUNTING FOR COMPLEXITY WITH AGENT-BASED MODELS
The effects of people on ecosystems and the impacts of ecosystem services on human well-being are being viewed increasingly as an integrated system. Demographic and economic pressures change a variety of ecological indicators, which can then result in reduced quality of ecosystem...
Elements of Engagement: A Model of Teacher Interactions via Professional Learning Networks
ERIC Educational Resources Information Center
Krutka, Daniel G.; Carpenter, Jeffrey P.; Trust, Torrey
2016-01-01
In recent years, many educators have turned to participatory online affinity spaces for professional growth with peers who are more accessible because of reduced temporal and spatial constraints. Specifically, professional learning networks (PLNs) are "uniquely personalized, complex systems of interactions consisting of people, resources, and…
Complex food webs prevent competitive exclusion among producer species.
Brose, Ulrich
2008-11-07
Herbivorous top-down forces and bottom-up competition for nutrients determine the coexistence and relative biomass patterns of producer species. Combining models of predator-prey and producer-nutrient interactions with a structural model of complex food webs, I investigated these two aspects in a dynamic food-web model. While competitive exclusion leads to persistence of only one producer species in 99.7% of the simulated simple producer communities without consumers, embedding the same producer communities in complex food webs generally yields producer coexistence. In simple producer communities, the producers with the most efficient nutrient-intake rates increase in biomass until they competitively exclude inferior producers. In food webs, herbivory predominantly reduces the biomass density of those producers that dominated in producer communities, which yields a more even biomass distribution. In contrast to prior analyses of simple modules, this facilitation of producer coexistence by herbivory does not require a trade-off between the nutrient-intake efficiency and the resistance to herbivory. The local network structure of food webs (top-down effects of the number of herbivores and the herbivores' maximum consumption rates) and the nutrient supply (bottom-up effect) interactively determine the relative biomass densities of the producer species. A strong negative feedback loop emerges in food webs: factors that increase producer biomasses also increase herbivory, which reduces producer biomasses. This negative feedback loop regulates the coexistence and biomass patterns of the producers by balancing biomass increases of producers and biomass fluxes to herbivores, which prevents competitive exclusion.
Haselmayer, Philipp; Camps, Montserrat; Muzerelle, Mathilde; El Bawab, Samer; Waltzinger, Caroline; Bruns, Lisa; Abla, Nada; Polokoff, Mark A.; Jond-Necand, Carole; Gaudet, Marilène; Benoit, Audrey; Bertschy Meier, Dominique; Martin, Catherine; Gretener, Denise; Lombardi, Maria Stella; Grenningloh, Roland; Ladel, Christoph; Petersen, Jørgen Søberg; Gaillard, Pascale; Ji, Hong
2014-01-01
SLE is a complex autoimmune inflammatory disease characterized by pathogenic autoantibody production as a consequence of uncontrolled T–B cell activity and immune-complex deposition in various organs, including kidney, leading to tissue damage and function loss. There is a high unmet need for better treatment options other than corticosteroids and immunosuppressants. Phosphoinositol-3 kinase δ (PI3Kδ) is a promising target in this respect as it is essential in mediating B- and T-cell function in mouse and human. We report the identification of selective PI3Kδ inhibitors that blocked B-, T-, and plasmacytoid dendritic cell activities in human peripheral blood and in primary cell co-cultures (BioMAP®) without detecting signs of undesired toxicity. In an IFNα-accelerated mouse SLE model, our PI3Kδ inhibitors blocked nephritis development, whether administered at the onset of autoantibody appearance or the onset of proteinuria. Disease amelioration correlated with normalized immune cell numbers in the spleen, reduced immune-complex deposition as well as reduced inflammation, fibrosis, and tissue damage in the kidney. Improvements were similar to those achieved with a frequently prescribed drug for lupus nephritis, the potent immunosuppressant mycophenolate mofetil. Finally, we established a pharmacodynamics/pharmacokinetic/efficacy model that revealed that a sustained PI3Kδ inhibition of 50% is sufficient to achieve full efficacy in our disease model. These data demonstrate the therapeutic potential of PI3Kδ inhibitors in SLE and lupus nephritis. PMID:24904582
Haselmayer, Philipp; Camps, Montserrat; Muzerelle, Mathilde; El Bawab, Samer; Waltzinger, Caroline; Bruns, Lisa; Abla, Nada; Polokoff, Mark A; Jond-Necand, Carole; Gaudet, Marilène; Benoit, Audrey; Bertschy Meier, Dominique; Martin, Catherine; Gretener, Denise; Lombardi, Maria Stella; Grenningloh, Roland; Ladel, Christoph; Petersen, Jørgen Søberg; Gaillard, Pascale; Ji, Hong
2014-01-01
SLE is a complex autoimmune inflammatory disease characterized by pathogenic autoantibody production as a consequence of uncontrolled T-B cell activity and immune-complex deposition in various organs, including kidney, leading to tissue damage and function loss. There is a high unmet need for better treatment options other than corticosteroids and immunosuppressants. Phosphoinositol-3 kinase δ (PI3Kδ) is a promising target in this respect as it is essential in mediating B- and T-cell function in mouse and human. We report the identification of selective PI3Kδ inhibitors that blocked B-, T-, and plasmacytoid dendritic cell activities in human peripheral blood and in primary cell co-cultures (BioMAP(®)) without detecting signs of undesired toxicity. In an IFNα-accelerated mouse SLE model, our PI3Kδ inhibitors blocked nephritis development, whether administered at the onset of autoantibody appearance or the onset of proteinuria. Disease amelioration correlated with normalized immune cell numbers in the spleen, reduced immune-complex deposition as well as reduced inflammation, fibrosis, and tissue damage in the kidney. Improvements were similar to those achieved with a frequently prescribed drug for lupus nephritis, the potent immunosuppressant mycophenolate mofetil. Finally, we established a pharmacodynamics/pharmacokinetic/efficacy model that revealed that a sustained PI3Kδ inhibition of 50% is sufficient to achieve full efficacy in our disease model. These data demonstrate the therapeutic potential of PI3Kδ inhibitors in SLE and lupus nephritis.
Yu, Bin; Xu, Jia-Meng; Li, Shan; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Zhang, Yan; Wang, Ming-Hui
2017-01-01
Gene regulatory networks (GRNs) research reveals complex life phenomena from the perspective of gene interaction, which is an important research field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. In order to make up for the shortcomings of the above methods, this paper presents a novel hybrid learning method (DBNCS) based on dynamic Bayesian network (DBN) to construct the multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. DBNCS algorithm first uses CMI2NI (conditional mutual inclusive information-based network inference) algorithm for network structure profiles learning, namely the construction of search space. Then the redundant regulations are removed by using the recursive optimization algorithm (RO), thereby reduce the false positive rate. Secondly, the network structure profiles are decomposed into a set of cliques without loss, which can significantly reduce the computational complexity. Finally, DBN model is used to identify the direction of gene regulation within the cliques and search for the optimal network structure. The performance of DBNCS algorithm is evaluated by the benchmark GRN datasets from DREAM challenge as well as the SOS DNA repair network in Escherichia coli, and compared with other state-of-the-art methods. The experimental results show the rationality of the algorithm design and the outstanding performance of the GRNs. PMID:29113310
Yu, Bin; Xu, Jia-Meng; Li, Shan; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Zhang, Yan; Wang, Ming-Hui
2017-10-06
Gene regulatory networks (GRNs) research reveals complex life phenomena from the perspective of gene interaction, which is an important research field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. In order to make up for the shortcomings of the above methods, this paper presents a novel hybrid learning method (DBNCS) based on dynamic Bayesian network (DBN) to construct the multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. DBNCS algorithm first uses CMI2NI (conditional mutual inclusive information-based network inference) algorithm for network structure profiles learning, namely the construction of search space. Then the redundant regulations are removed by using the recursive optimization algorithm (RO), thereby reduce the false positive rate. Secondly, the network structure profiles are decomposed into a set of cliques without loss, which can significantly reduce the computational complexity. Finally, DBN model is used to identify the direction of gene regulation within the cliques and search for the optimal network structure. The performance of DBNCS algorithm is evaluated by the benchmark GRN datasets from DREAM challenge as well as the SOS DNA repair network in Escherichia coli , and compared with other state-of-the-art methods. The experimental results show the rationality of the algorithm design and the outstanding performance of the GRNs.
Mathematical Modeling of Dual Layer Shell Type Recuperation System for Biogas Dehumidification
NASA Astrophysics Data System (ADS)
Gendelis, S.; Timuhins, A.; Laizans, A.; Bandeniece, L.
2015-12-01
The main aim of the current paper is to create a mathematical model for dual layer shell type recuperation system, which allows reducing the heat losses from the biomass digester and water amount in the biogas without any additional mechanical or chemical components. The idea of this system is to reduce the temperature of the outflowing gas by creating two-layered counter-flow heat exchanger around the walls of biogas digester, thus increasing a thermal resistance and the gas temperature, resulting in a condensation on a colder surface. Complex mathematical model, including surface condensation, is developed for this type of biogas dehumidifier and the parameter study is carried out for a wide range of parameters. The model is reduced to 1D case to make numerical calculations faster. It is shown that latent heat of condensation is very important for the total heat balance and the condensation rate is highly dependent on insulation between layers and outside temperature. Modelling results allow finding optimal geometrical parameters for the known gas flow and predicting the condensation rate for different system setups and seasons.
Widger, Leland R.; Jiang, Yunbo; Siegler, Maxime; Kumar, Devesh; Latifi, Reza; de Visser, Sam P.; Jameson, Guy N.L.; Goldberg, David P.
2013-01-01
The known iron(II) complex [FeII(LN3S)(OTf)] (1) was used as starting material to prepare the new biomimetic (N4S(thiolate)) iron(II) complexes [FeII(LN3S)(py)](OTf) (2) and [FeII(LN3S)(DMAP)](OTf) (3), where LN3S is a tetradentate bis(imino)pyridine (BIP) derivative with a covalently tethered phenylthiolate donor. These complexes were characterized by X-ray crystallography, UV-vis, 1H NMR, and Mössbauer spectroscopy, as well as electrochemistry. A nickel(II) analogue, [NiII(LN3S)](BF4) (5), was also synthesized and characterized by structural and spectroscopic methods. Cyclic voltammetric studies showed 1 – 3 and 5 undergo a single reduction process with E1/2 between −0.9 to −1.2 V versus Fc+/Fc. Treatment of 3 with 0.5% Na/Hg amalgam gave the mono-reduced complex [Fe(LN3S)(DMAP)]0 (4), which was characterized by X-ray crystallography, UV-vis, EPR (g = [2.155, 2.057, 2.038]) and Mössbauer (δ = 0.33 mm s−1; ΔEQ = 2.04 mm s−1) spectroscopies. Computational methods (DFT) were employed to model complexes 3 – 5. The combined experimental and computational studies show that 1 – 3 are 5-coordinate, high-spin (S = 2) FeII complexes, whereas 4 is best described as a 5-coordinate, intermediate-spin (S = 1) FeII complex antiferromagnetically coupled to a ligand radical. This unique electronic configuration leads to an overall doublet spin (Stotal = ½) ground state. Complexes 2 and 3 are shown to react with O2 to give S-oxygenated products, as previously reported for 1. In contrast, the mono-reduced 4 appears to react with O2 to give a mixture of S- and Fe-oxygenates. The nickel(II) complex 5 does not react with O2, and even when the mono-reduced nickel complex is produced, it appears to undergo only outer-sphere oxidation with O2. PMID:23992096
Quantum Gauss-Jordan Elimination and Simulation of Accounting Principles on Quantum Computers
NASA Astrophysics Data System (ADS)
Diep, Do Ngoc; Giang, Do Hoang; Van Minh, Nguyen
2017-06-01
The paper is devoted to a version of Quantum Gauss-Jordan Elimination and its applications. In the first part, we construct the Quantum Gauss-Jordan Elimination (QGJE) Algorithm and estimate the complexity of computation of Reduced Row Echelon Form (RREF) of N × N matrices. The main result asserts that QGJE has computation time is of order 2 N/2. The second part is devoted to a new idea of simulation of accounting by quantum computing. We first expose the actual accounting principles in a pure mathematics language. Then, we simulate the accounting principles on quantum computers. We show that, all accounting actions are exhousted by the described basic actions. The main problems of accounting are reduced to some system of linear equations in the economic model of Leontief. In this simulation, we use our constructed Quantum Gauss-Jordan Elimination to solve the problems and the complexity of quantum computing is a square root order faster than the complexity in classical computing.
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models
NASA Astrophysics Data System (ADS)
Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan
2017-04-01
Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters and typically require a huge number of model evaluations (on order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced order modeling tools, which coupling support vector regression (SVR), genetic algorithms (GA) and artificial and convolutional neural network (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models which have improved computational efficiency while capturing the impact of fine-scale effects at the course scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
A four-stage hybrid model for hydrological time series forecasting.
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.
Dynamics of an HBV Model with Drug Resistance Under Intermittent Antiviral Therapy
NASA Astrophysics Data System (ADS)
Zhang, Ben-Gong; Tanaka, Gouhei; Aihara, Kazuyuki; Honda, Masao; Kaneko, Shuichi; Chen, Luonan
2015-06-01
This paper studies the dynamics of the hepatitis B virus (HBV) model and the therapy regimens of HBV disease. First, we propose a new mathematical model of HBV with drug resistance, and then analyze its qualitative and dynamical properties. Combining the clinical data and theoretical analysis, we demonstrate that our model is biologically plausible and also computationally viable. Second, we demonstrate that the intermittent antiviral therapy regimen is one of the possible strategies to treat this kind of complex disease. There are two main advantages of this regimen, i.e. it not only may delay the development of drug resistance, but also may reduce the duration of on-treatment time compared with the long-term continuous medication. Moreover, such an intermittent antiviral therapy can reduce the adverse side effects. Our theoretical model and computational results provide qualitative insight into the progression of HBV, and also a possible new therapy for HBV disease.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2016-12-01
There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales and guide the development of multiple objective water predictive systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort that increases uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold-regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolutions at low spatial variability and fine resolutions as required. Model uncertainty is reduced by lessening the necessary computational elements relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open source libraries and high performance computing paradigms to provide a framework that allows for integrating current state-of-the-art process algorithms. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.
Yang, Xiaoying; Warren, Rachel; He, Yi; Ye, Jinyin; Li, Qiaoling; Wang, Guoqing
2018-02-15
It is increasingly recognized that climate change could affect the quality of water through complex natural and anthropogenic mechanisms. Previous studies on climate change and water quality have mostly focused on assessing its impact on pollutant loads from agricultural runoff. A sub-daily SWAT model was developed to simulate the discharge, transport, and transformation of nitrogen from all known anthropogenic sources including industries, municipal sewage treatment plants, concentrated and scattered feedlot operations, rural households, and crop production in the Upper Huai River Basin. This is a highly polluted basin with total nitrogen (TN) concentrations frequently exceeding Class V of the Chinese Surface Water Quality Standard (GB3838-2002). Climate change projections produced by 16 Global Circulation Models (GCMs) under the RCP 4.5 and RCP 8.5 scenarios in the mid (2040-2060) and late (2070-2090) century were used to drive the SWAT model to evaluate the impacts of climate change on both the TN loads and the effectiveness of three water pollution control measures (reducing fertilizer use, constructing vegetative filter strips, and improving septic tank performance) in the basin. SWAT simulation results have indicated that climate change is likely to cause an increase in both monthly average and extreme TN loads in February, May, and November. The projected impact of climate change on TN loads in August is more varied between GCMs. In addition, climate change is projected to have a negative impact on the effectiveness of septic tanks in reducing TN loads, while its impacts on the other two measures are more uncertain. Despite the uncertainty, reducing fertilizer use remains the most effective measure for reducing TN loads under different climate change scenarios. Meanwhile, improving septic tank performance is relatively more effective in reducing annual TN loads, while constructing vegetative filter strips is more effective in reducing annual maximum monthly TN loads. Copyright © 2017 Elsevier B.V. All rights reserved.
The dynamical analysis of modified two-compartment neuron model and FPGA implementation
NASA Astrophysics Data System (ADS)
Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao
2017-10-01
The complexity of neural models is increasing with the investigation of larger biological neural network, more various ionic channels and more detailed morphologies, and the implementation of biological neural network is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on field programmable gate array (FPGA), to succinctly implement the reduced two-compartment model which retains essential features of more complicated models. The design proposes an approximate neuron model which is composed of a set of piecewise linear equations, and it can reproduce different dynamical behaviors to depict the mechanisms of a single neuron model. The consistency of hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results including varied ion channel characteristics coincide with the biological neuron model with a high accuracy. Hardware synthesis on FPGA demonstrates that the proposed model has reliable performance and lower hardware resource compared with the original two-compartment model. These investigations are conducive to scalability of biological neural network in reconfigurable large-scale neuromorphic system.
High-Performance Signal Detection for Adverse Drug Events using MapReduce Paradigm.
Fan, Kai; Sun, Xingzhi; Tao, Ying; Xu, Linhao; Wang, Chen; Mao, Xianling; Peng, Bo; Pan, Yue
2010-11-13
Post-marketing pharmacovigilance is important for public health, as many Adverse Drug Events (ADEs) are unknown when those drugs were approved for marketing. However, due to the large number of reported drugs and drug combinations, detecting ADE signals by mining these reports is becoming a challenging task in terms of computational complexity. Recently, a parallel programming model, MapReduce has been introduced by Google to support large-scale data intensive applications. In this study, we proposed a MapReduce-based algorithm, for common ADE detection approach, Proportional Reporting Ratio (PRR), and tested it in mining spontaneous ADE reports from FDA. The purpose is to investigate the possibility of using MapReduce principle to speed up biomedical data mining tasks using this pharmacovigilance case as one specific example. The results demonstrated that MapReduce programming model could improve the performance of common signal detection algorithm for pharmacovigilance in a distributed computation environment at approximately liner speedup rates.
Masum, M A; Pickering, M R; Lambert, A J; Scarvell, J M; Smith, P N
2017-09-06
In this paper, a novel multi-slice ultrasound (US) image calibration of an intelligent skin-marker used for soft tissue artefact compensation is proposed to align and orient image slices in an exact H-shaped pattern. Multi-slice calibration is complex, however, in the proposed method, a phantom based visual alignment followed by transform parameters estimation greatly reduces the complexity and provides sufficient accuracy. In this approach, the Hough Transform (HT) is used to further enhance the image features which originate from the image feature enhancing elements integrated into the physical phantom model, thus reducing feature detection uncertainty. In this framework, slice by slice image alignment and calibration are carried out and this provides manual ease and convenience. Copyright © 2016 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biyanto, Totok R.
Fouling in a heat exchanger in Crude Preheat Train (CPT) refinery is an unsolved problem that reduces the plant efficiency, increases fuel consumption and CO{sub 2} emission. The fouling resistance behavior is very complex. It is difficult to develop a model using first principle equation to predict the fouling resistance due to different operating conditions and different crude blends. In this paper, Artificial Neural Networks (ANN) MultiLayer Perceptron (MLP) with input structure using Nonlinear Auto-Regressive with eXogenous (NARX) is utilized to build the fouling resistance model in shell and tube heat exchanger (STHX). The input data of the model aremore » flow rates and temperatures of the streams of the heat exchanger, physical properties of product and crude blend data. This model serves as a predicting tool to optimize operating conditions and preventive maintenance of STHX. The results show that the model can capture the complexity of fouling characteristics in heat exchanger due to thermodynamic conditions and variations in crude oil properties (blends). It was found that the Root Mean Square Error (RMSE) are suitable to capture the nonlinearity and complexity of the STHX fouling resistance during phases of training and validation.« less
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data are essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. And service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew
2015-01-01
Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data are essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing and data intensive in that data analytics requires complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. In this framework techniques are proposed by leveraging cloud computing, MapReduce, and Service Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. And service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo
2015-01-01
Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.
Reduced functional loads alter the physical characteristics of the bone-PDL-cementum complex
Niver, Eric L.; Leong, Narita; Greene, Janelle; Curtis, Donald; Ryder, Mark I.; Ho, Sunita P.
2011-01-01
Background Adaptive properties of the bone-PDL-tooth complex have been identified by changing the magnitude of functional loads using small-scale animal models such as rodents. Reported adaptive responses as a result of lower loads due to softer diet include decreased muscle development, change in structure-function relationship of the cranium, narrowed PDL-space, changes in mineral level of the cortical bone and alveolar jaw bone, and glycosaminoglycans of the alveolar bone. However, the adaptive role of the dynamic bone-PDL-cementum complex due to prolonged reduced loads has not been fully explained to date, especially with regards to concurrent adaptations of bone, PDL and cementum. Hence, the temporal effect of reduced functional loads on physical characteristics such as morphology and mechanical properties, and mineral profiles of the bone-periodontal ligament (PDL)-cementum complex using a rat model was investigated. Materials and Methods Two groups of six-week-old male Sprague-Dawley rats were fed nutritionally identical food with a stiffness range of 127–158N/mm for hard pellet or 0.32–0.47N/mm for soft powder forms. Spatio-temporal adaptation of the bone-PDL-cementum complex was identified by mapping changes in: 1) PDL-collagen orientation and birefringence using polarized light microscopy, bone and cementum adaptation using histochemistry, and bone and cementum morphology using micro X-ray computed tomography, 2) mineral profiles of the PDL-cementum and PDL-bone interfaces by X-ray attenuation, and 3) microhardness of bone and cementum by microindentation of specimens at ages six, eight, twelve, and fifteen weeks. Results Reduced functional loads over prolonged time resulted in 1) altered PDL orientation and decreased PDL collagen birefringence indicating decreased PDL turnover rate and decreased apical cementum resorption; 2) a gradual increase in X-ray attenuation, owing to mineral differences, at the PDL-bone and PDL-cementum interfaces without significant differences in the gradients for either group; 3) significantly (p<0.05) lower microhardness of alveolar bone (0.93±0.16 GPa) and secondary cementum (0.803±0.13 GPa) compared to the higher load group (1.10±0.17 GPa and 0.940±0.15 GPa respectively) at fifteen weeks indicating a temporal effect of loads on local mineralization of bone and cementum. Conclusions Based on the results from this study, the effect of reduced functional loads for a prolonged time could differentially affect morphology and mechanical properties, and mineral variations and of the local load-bearing sites in a bone-PDL-cementum complex. These observed local changes in turn could help explain the overall biomechanical function and adaptations of the tooth-bone joint. From a clinical translation perspective, our study provides an insight into modulation of load on the complex for improved tooth function during periodontal disease, and/or orthodontic and prosthodontic treatments. PMID:21848615
Comparison of alternative designs for reducing complex neurons to equivalent cables.
Burke, R E
2000-01-01
Reduction of the morphological complexity of actual neurons into accurate, computationally efficient surrogate models is an important problem in computational neuroscience. The present work explores the use of two morphoelectrotonic transformations, somatofugal voltage attenuation (AT cables) and signal propagation delay (DL cables), as bases for construction of electrotonically equivalent cable models of neurons. In theory, the AT and DL cables should provide more accurate lumping of membrane regions that have the same transmembrane potential than the familiar equivalent cables that are based only on somatofugal electrotonic distance (LM cables). In practice, AT and DL cables indeed provided more accurate simulations of the somatic transient responses produced by fully branched neuron models than LM cables. This was the case in the presence of a somatic shunt as well as when membrane resistivity was uniform.
A hand tracking algorithm with particle filter and improved GVF snake model
NASA Astrophysics Data System (ADS)
Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe
2017-07-01
To solve the problem that the accurate information of hand cannot be obtained by particle filter, a hand tracking algorithm based on particle filter combined with skin-color adaptive gradient vector flow (GVF) snake model is proposed. Adaptive GVF and skin color adaptive external guidance force are introduced to the traditional GVF snake model, guiding the curve to quickly converge to the deep concave region of hand contour and obtaining the complex hand contour accurately. This algorithm realizes a real-time correction of the particle filter parameters, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm can reduce the root mean square error of the hand tracking by 53%, and improve the accuracy of hand tracking in the case of complex and moving background, even with a large range of occlusion.
Connor, Carol McDonald; Day, Stephanie L.; Phillips, Beth; Sparapani, Nicole; Ingebrand, Sarah W.; McLean, Leigh; Barrus, Angela; Kaschak, Michael P.
2016-01-01
Many assume that cognitive and linguistic processes, such as semantic knowledge (SK) and self-regulation (SR) subserve learned skills like reading. However, complex models of interacting and bootstrapping effects of SK, SR, instruction, and reading hypothesize reciprocal effects. Testing this “lattice” model with children (n = 852) followed from 1st–2nd grade (5.9–10.4 years-of-age), revealed reciprocal effects for reading and SR, and reading and SK, but not SR and SK. More effective literacy instruction reduced reading stability over time. Findings elucidate the synergistic and reciprocal effects of learning to read on other important linguistic, self-regulatory, and cognitive processes, the value of using complex models of development to inform intervention design, and how learned skills may influence development during middle childhood. PMID:27264645
Unbound (bioavailable) IGF1 enhances somatic growth
Elis, Sebastien; Wu, Yingjie; Courtland, Hayden-William; Cannata, Dara; Sun, Hui; Beth-On, Mordechay; Liu, Chengyu; Jasper, Hector; Domené, Horacio; Karabatas, Liliana; Guida, Clara; Basta-Pljakic, Jelena; Cardoso, Luis; Rosen, Clifford J.; Frystyk, Jan; Yakar, Shoshana
2011-01-01
SUMMARY Understanding insulin-like growth factor-1 (IGF1) biology is of particular importance because, apart from its role in mediating growth, it plays key roles in cellular transformation, organ regeneration, immune function, development of the musculoskeletal system and aging. IGF1 bioactivity is modulated by its binding to IGF-binding proteins (IGFBPs) and the acid labile subunit (ALS), which are present in serum and tissues. To determine whether IGF1 binding to IGFBPs is necessary to facilitate normal growth and development, we used a gene-targeting approach and generated two novel knock-in mouse models of mutated IGF1, in which the native Igf1 gene was replaced by Des-Igf1 (KID mice) or R3-Igf1 (KIR mice). The KID and KIR mutant proteins have reduced affinity for the IGFBPs, and therefore present as unbound IGF1, or ‘free IGF1’. We found that both KID and KIR mice have reduced serum IGF1 levels and a concomitant increase in serum growth hormone levels. Ternary complex formation of IGF1 with the IGFBPs and the ALS was markedly reduced in sera from KID and KIR mice compared with wild type. Both mutant mice showed increased body weight, body and bone lengths, and relative lean mass. We found selective organomegaly of the spleen, kidneys and uterus, enhanced mammary gland complexity, and increased skeletal acquisition. The KID and KIR models show unequivocally that IGF1-complex formation with the IGFBPs is fundamental for establishing normal body and organ size, and that uncontrolled IGF bioactivity could lead to pathological conditions. PMID:21628395
Unbound (bioavailable) IGF1 enhances somatic growth.
Elis, Sebastien; Wu, Yingjie; Courtland, Hayden-William; Cannata, Dara; Sun, Hui; Beth-On, Mordechay; Liu, Chengyu; Jasper, Hector; Domené, Horacio; Karabatas, Liliana; Guida, Clara; Basta-Pljakic, Jelena; Cardoso, Luis; Rosen, Clifford J; Frystyk, Jan; Yakar, Shoshana
2011-09-01
Understanding insulin-like growth factor-1 (IGF1) biology is of particular importance because, apart from its role in mediating growth, it plays key roles in cellular transformation, organ regeneration, immune function, development of the musculoskeletal system and aging. IGF1 bioactivity is modulated by its binding to IGF-binding proteins (IGFBPs) and the acid labile subunit (ALS), which are present in serum and tissues. To determine whether IGF1 binding to IGFBPs is necessary to facilitate normal growth and development, we used a gene-targeting approach and generated two novel knock-in mouse models of mutated IGF1, in which the native Igf1 gene was replaced by Des-Igf1 (KID mice) or R3-Igf1 (KIR mice). The KID and KIR mutant proteins have reduced affinity for the IGFBPs, and therefore present as unbound IGF1, or 'free IGF1'. We found that both KID and KIR mice have reduced serum IGF1 levels and a concomitant increase in serum growth hormone levels. Ternary complex formation of IGF1 with the IGFBPs and the ALS was markedly reduced in sera from KID and KIR mice compared with wild type. Both mutant mice showed increased body weight, body and bone lengths, and relative lean mass. We found selective organomegaly of the spleen, kidneys and uterus, enhanced mammary gland complexity, and increased skeletal acquisition. The KID and KIR models show unequivocally that IGF1-complex formation with the IGFBPs is fundamental for establishing normal body and organ size, and that uncontrolled IGF bioactivity could lead to pathological conditions.
2013-09-01
included an oxidant scavenger, (N- Acetylcysteine ), a drug that reduces mitochondrial superoxide production by blocking electron flow through complex I...selection of compounds included an oxidant scavenger, (N- Acetylcysteine ), a drug that reduces mitochondrial superoxide production by blocking...2mM N- acetylcysteine 5 3. 5mM NAC 5 4. 20mM NAC 5 5. 20µM Cytochalasin B 5 6. 10µM Nocodazole 4 7. 2.5mM Amobarbital 5 Table. Dose
Scramjet Combustor Simulations Using Reduced Chemical Kinetics for Practical Fuels
2003-12-01
the aerospace industry in reducing prototype and testing costs and the time needed to bring products to market . Accurate simulation of chemical...JP-8 kinetics and soot models into the UNICORN CFD code (Montgomery et al., 2003a) NSF Phase I and II SBIRs for development of a computer-assisted...divided by diameter QSS quasi-steady state REI Reaction Engineering International UNICORN UNsteady Ignition and COmbustion with ReactioNs VULCAN Viscous Upwind aLgorithm for Complex flow ANalysis
Henrickson, Leslie; McKelvey, Bill
2002-01-01
Since the death of positivism in the 1970s, philosophers have turned their attention to scientific realism, evolutionary epistemology, and the Semantic Conception of Theories. Building on these trends, Campbellian Realism allows social scientists to accept real-world phenomena as criterion variables against which theories may be tested without denying the reality of individual interpretation and social construction. The Semantic Conception reduces the importance of axioms, but reaffirms the role of models and experiments. Philosophers now see models as “autonomous agents” that exert independent influence on the development of a science, in addition to theory and data. The inappropriate molding effects of math models on social behavior modeling are noted. Complexity science offers a “new” normal science epistemology focusing on order creation by self-organizing heterogeneous agents and agent-based models. The more responsible core of postmodernism builds on the idea that agents operate in a constantly changing web of interconnections among other agents. The connectionist agent-based models of complexity science draw on the same conception of social ontology as do postmodernists. These recent developments combine to provide foundations for a “new” social science centered on formal modeling not requiring the mathematical assumptions of agent homogeneity and equilibrium conditions. They give this “new” social science legitimacy in scientific circles that current social science approaches lack. PMID:12011408
Lyle, Karen S; Haas, Jeffrey A; Fox, Brian G
2003-05-20
Stearoyl-ACP Delta9 desaturase (Delta9D) catalyzes the NADPH- and O(2)-dependent insertion of a cis double bond between the C9 and C10 positions of stearoyl-ACP (18:0-ACP) to produce oleoyl-ACP (18:1-ACP). This work revealed the ability of reduced [2Fe-2S] ferredoxin (Fd) to act as a catalytically competent electron donor during the rapid conversion of 18:0-ACP into 18:1-ACP. Experiments on the order of addition for substrate and reduced Fd showed high conversion of 18:0-ACP to 18:1-ACP (approximately 95% per Delta9D active site in a single turnover) when 18:0-ACP was added prior to reduced Fd. Reactions of the prereduced enzyme-substrate complex with O(2) and the oxidized enzyme-substrate complex with reduced Fd were studied by rapid-mix and chemical quench methods. For reaction of the prereduced enzyme-substrate complex, an exponential burst phase (k(burst) = 95 s(-1)) of product formation accounted for approximately 90% of the turnover expected for one subunit in the dimeric protein. This rapid phase was followed by a slower phase (k(linear) = 4.0 s(-1)) of product formation corresponding to the turnover expected from the second subunit. For reaction of the oxidized enzyme-substrate complex with excess reduced Fd, a slower, linear rate (k(obsd) = 3.4 s(-1)) of product formation was observed over approximately 1.5 turnovers per Delta9D active site potentially corresponding to a third phase of reaction. An analysis of the deuterium isotope effect on the two rapid-mix reaction sequences revealed only a modest effect on k(burst) ((D)k(burst) approximately 1.5) and k(linear) (D)k(linear) approximately 1.4), indicating C-H bond cleavage does not contribute significantly to the rate-limiting steps of pre-steady-state catalysis. These results were used to assemble and evaluate a minimal kinetic model for Delta9D catalysis.
Reinoso-Maset, Estela; Worsfold, Paul J; Keith-Roach, Miranda J
2013-05-01
Sorption processes play a key role in controlling radionuclide migration through subsurface environments and can be affected by the presence of anthropogenic organic complexing agents found at contaminated sites. The effect of these complexing agents on radionuclide-solid phase interactions is not well known. Therefore the aim of this study was to examine the processes by which EDTA, NTA and picolinate affect the sorption kinetics and equilibria of Cs(+), Sr(2+) and UO2(2+) onto natural sand. The caesium sorption rate and equilibrium were unaffected by the complexing agents. Strontium however showed greater interaction with EDTA and NTA in the presence of desorbed matrix cations than geochemical modelling predicted, with SrNTA(-) enhancing sorption and SrEDTA(2-) showing lower sorption than Sr(2+). Complexing agents reduced UO2(2+) sorption to silica and enhanced the sorption rate in the natural sand system. Elevated concentrations of picolinate reduced the sorption of Sr(2+) and increased the sorption rate of UO2(2+), demonstrating the potential importance of this complexing agent. These experiments provide a direct comparison of the sorption behaviour of Cs(+), Sr(2+) and UO2(2+)onto natural sand and an assessment of the relative effects of EDTA, NTA and picolinate on the selected elements. Copyright © 2013 Elsevier Ltd. All rights reserved.
Berndt, Nikolaus; Bulik, Sascha; Holzhütter, Hermann-Georg
2012-01-01
Reduced activity of brain α-ketoglutarate dehydrogenase complex (KGDHC) occurs in a number of neurodegenerative diseases like Parkinson's disease and Alzheimer's disease. In order to quantify the relation between diminished KGDHC activity and the mitochondrial ATP generation, redox state, transmembrane potential, and generation of reactive oxygen species (ROS) by the respiratory chain (RC), we developed a detailed kinetic model. Model simulations revealed a threshold-like decline of the ATP production rate at about 60% inhibition of KGDHC accompanied by a significant increase of the mitochondrial membrane potential. By contrast, progressive inhibition of the enzyme aconitase had only little impact on these mitochondrial parameters. As KGDHC is susceptible to ROS-dependent inactivation, we also investigated the reduction state of those sites of the RC proposed to be involved in ROS production. The reduction state of all sites except one decreased with increasing degree of KGDHC inhibition suggesting an ROS-reducing effect of KGDHC inhibition. Our model underpins the important role of reduced KGDHC activity in the energetic breakdown of neuronal cells during development of neurodegenerative diseases. PMID:22719765
Song, Mi-Ryoung; Sun, Yunfu; Bryson, Ami; Gill, Gordon N.; Evans, Sylvia M.; Pfaff, Samuel L.
2009-01-01
Summary LIM transcription factors bind to nuclear LIM interactor (Ldb/NLI/Clim) in specific ratios to form higher-order complexes that regulate gene expression. Here we examined how the dosage of LIM homeodomain proteins Isl1 and Isl2 and LIM-only protein Lmo4 influences the assembly and function of complexes involved in the generation of spinal motor neurons (MNs) and V2a interneurons (INs). Reducing the levels of Islet proteins using a graded series of mutations favored V2a IN differentiation at the expense of MN formation. Although LIM-only proteins (LMOs) are predicted to antagonize the function of Islet proteins, we found that the presence or absence of Lmo4 had little influence on MN or V2a IN specification. We did find, however, that the loss of MNs resulting from reduced Islet levels was rescued by eliminating Lmo4, unmasking a functional interaction between these proteins. Our findings demonstrate that MN and V2a IN fates are specified by distinct complexes that are sensitive to the relative stoichiometries of the constituent factors and we present a model to explain how LIM domain proteins modulate these complexes and, thereby, this binary-cell-fate decision. PMID:19666821
Electric Power Engineering Cost Predicting Model Based on the PCA-GA-BP
NASA Astrophysics Data System (ADS)
Wen, Lei; Yu, Jiake; Zhao, Xin
2017-10-01
In this paper a hybrid prediction algorithm: PCA-GA-BP model is proposed. PCA algorithm is established to reduce the correlation between indicators of original data and decrease difficulty of BP neural network in complex dimensional calculation. The BP neural network is established to estimate the cost of power transmission project. The results show that PCA-GA-BP algorithm can improve result of prediction of electric power engineering cost.
Design Change Model for Effective Scheduling Change Propagation Paths
NASA Astrophysics Data System (ADS)
Zhang, Hai-Zhu; Ding, Guo-Fu; Li, Rong; Qin, Sheng-Feng; Yan, Kai-Yin
2017-09-01
Changes in requirements may result in the increasing of product development project cost and lead time, therefore, it is important to understand how requirement changes propagate in the design of complex product systems and be able to select best options to guide design. Currently, a most approach for design change is lack of take the multi-disciplinary coupling relationships and the number of parameters into account integrally. A new design change model is presented to systematically analyze and search change propagation paths. Firstly, a PDS-Behavior-Structure-based design change model is established to describe requirement changes causing the design change propagation in behavior and structure domains. Secondly, a multi-disciplinary oriented behavior matrix is utilized to support change propagation analysis of complex product systems, and the interaction relationships of the matrix elements are used to obtain an initial set of change paths. Finally, a rough set-based propagation space reducing tool is developed to assist in narrowing change propagation paths by computing the importance of the design change parameters. The proposed new design change model and its associated tools have been demonstrated by the scheduling change propagation paths of high speed train's bogie to show its feasibility and effectiveness. This model is not only supportive to response quickly to diversified market requirements, but also helpful to satisfy customer requirements and reduce product development lead time. The proposed new design change model can be applied in a wide range of engineering systems design with improved efficiency.
Cognitive Virtualization: Combining Cognitive Models and Virtual Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuan Q. Tran; David I. Gertman; Donald D. Dudenhoeffer
2007-08-01
3D manikins are often used in visualizations to model human activity in complex settings. Manikins assist in developing understanding of human actions, movements and routines in a variety of different environments representing new conceptual designs. One such environment is a nuclear power plant control room, here they have the potential to be used to simulate more precise ergonomic assessments of human work stations. Next generation control rooms will pose numerous challenges for system designers. The manikin modeling approach by itself, however, may be insufficient for dealing with the desired technical advancements and challenges of next generation automated systems. Uncertainty regardingmore » effective staffing levels; and the potential for negative human performance consequences in the presence of advanced automated systems (e.g., reduced vigilance, poor situation awareness, mistrust or blind faith in automation, higher information load and increased complexity) call for further research. Baseline assessment of novel control room equipment(s) and configurations needs to be conducted. These design uncertainties can be reduced through complementary analysis that merges ergonomic manikin models with models of higher cognitive functions, such as attention, memory, decision-making, and problem-solving. This paper will discuss recent advancements in merging a theoretical-driven cognitive modeling framework within a 3D visualization modeling tool to evaluate of next generation control room human factors and ergonomic assessment. Though this discussion primary focuses on control room design, the application for such a merger between 3D visualization and cognitive modeling can be extended to various areas of focus such as training and scenario planning.« less
Rumor Spreading Model with Trust Mechanism in Complex Social Networks
NASA Astrophysics Data System (ADS)
Wang, Ya-Qi; Yang, Xiao-Yuan; Han, Yi-Liang; Wang, Xu-An
2013-04-01
In this paper, to study rumor spreading, we propose a novel susceptible-infected-removed (SIR) model by introducing the trust mechanism. We derive mean-field equations that describe the dynamics of the SIR model on homogeneous networks and inhomogeneous networks. Then a steady-state analysis is conducted to investigate the critical threshold and the final size of the rumor spreading. We show that the introduction of trust mechanism reduces the final rumor size and the velocity of rumor spreading, but increases the critical thresholds on both networks. Moreover, the trust mechanism not only greatly reduces the maximum rumor influence, but also postpones the rumor terminal time, which provides us with more time to take measures to control the rumor spreading. The theoretical results are confirmed by sufficient numerical simulations.
Kim, Yong Sun; Choi, Hyeong Ho; Cho, Young Nam; Park, Yong Jae; Lee, Jong B; Yang, King H; King, Albert I
2005-11-01
Although biomechanical studies on the knee-thigh-hip (KTH) complex have been extensive, interactions between the KTH and various vehicular interior design parameters in frontal automotive crashes for newer models have not been reported in the open literature to the best of our knowledge. A 3D finite element (FE) model of a 50(th) percentile male KTH complex, which includes explicit representations of the iliac wing, acetabulum, pubic rami, sacrum, articular cartilage, femoral head, femoral neck, femoral condyles, patella, and patella tendon, has been developed to simulate injuries such as fracture of the patella, femoral neck, acetabulum, and pubic rami of the KTH complex. Model results compared favorably against regional component test data including a three-point bending test of the femur, axial loading of the isolated knee-patella, axial loading of the KTH complex, axial loading of the femoral head, and lateral loading of the isolated pelvis. The model was further integrated into a Wayne State University upper torso model and validated against data obtained from whole body sled tests. The model was validated against these experimental data over a range of impact speeds, impactor masses and boundary conditions. Using Design Of Experiment (DOE) methods based on Taguchi's approach and the developed FE model of the whole body, including the KTH complex, eight vehicular interior design parameters, namely the load limiter force, seat belt elongation, pretensioner inlet amount, knee-knee bolster distance, knee bolster angle, knee bolster stiffness, toe board angle and impact speed, each with either two or three design levels, were simulated to predict their respective effects on the potential of KTH injury in frontal impacts. Simulation results proposed best design levels for vehicular interior design parameters to reduce the injury potential of the KTH complex due to frontal automotive crashes. This study is limited by the fact that prediction of bony fracture was based on an element elimination method available in the LS-DYNA code. No validation study was conducted to determine if this method is suitable when simulating fractures of biological tissues. More work is still needed to further validate the FE model of the KTH complex to increase its reliability in the assessment of various impact loading conditions associated with vehicular crash scenarios.
NASA Technical Reports Server (NTRS)
Schoenwald, Adam J.; Bradley, Damon C.; Mohammed, Priscilla N.; Piepmeier, Jeffrey R.; Wong, Mark
2016-01-01
In the field of microwave radiometry, Radio Frequency Interference (RFI) consistently degrades the value of scientific results. Through the use of digital receivers and signal processing, the effects of RFI on scientific measurements can be reduced depending on certain circumstances. As technology allows us to implement wider band digital receivers for radiometry, the problem of RFI mitigation changes. Our work focuses on finding a detector that outperforms real kurtosis in wide band scenarios. The algorithm implemented is a complex signal kurtosis detector which was modeled and simulated. The performance of both complex and real signal kurtosis is evaluated for continuous wave, pulsed continuous wave, and wide band quadrature phase shift keying (QPSK) modulations. The use of complex signal kurtosis increased the detectability of interference.
NASA Technical Reports Server (NTRS)
Shih, Ann T.; Ancel, Ersin; Jones, Sharon M.
2012-01-01
The concern for reducing aviation safety risk is rising as the National Airspace System in the United States transforms to the Next Generation Air Transportation System (NextGen). The NASA Aviation Safety Program is committed to developing an effective aviation safety technology portfolio to meet the challenges of this transformation and to mitigate relevant safety risks. The paper focuses on the reasoning of selecting Object-Oriented Bayesian Networks (OOBN) as the technique and commercial software for the accident modeling and portfolio assessment. To illustrate the benefits of OOBN in a large and complex aviation accident model, the in-flight Loss-of-Control Accident Framework (LOCAF) constructed as an influence diagram is presented. An OOBN approach not only simplifies construction and maintenance of complex causal networks for the modelers, but also offers a well-organized hierarchical network that is easier for decision makers to exploit the model examining the effectiveness of risk mitigation strategies through technology insertions.
Analytical Micromechanics Modeling Technique Developed for Ceramic Matrix Composites Analysis
NASA Technical Reports Server (NTRS)
Min, James B.
2005-01-01
Ceramic matrix composites (CMCs) promise many advantages for next-generation aerospace propulsion systems. Specifically, carbon-reinforced silicon carbide (C/SiC) CMCs enable higher operational temperatures and provide potential component weight savings by virtue of their high specific strength. These attributes may provide systemwide benefits. Higher operating temperatures lessen or eliminate the need for cooling, thereby reducing both fuel consumption and the complex hardware and plumbing required for heat management. This, in turn, lowers system weight, size, and complexity, while improving efficiency, reliability, and service life, resulting in overall lower operating costs.
Chatterjee, Sumantra; Kapoor, Ashish; Akiyama, Jennifer A.; ...
2016-09-29
Common sequence variants in cis-regulatory elements (CREs) are suspected etiological causes of complex disorders. We previously identified an intronic enhancer variant in the RET gene disrupting SOX10 binding and increasing Hirschsprung disease (HSCR) risk 4-fold. We now show that two other functionally independent CRE variants, one binding Gata2 and the other binding Rarb, also reduce Ret expression and increase risk 2- and 1.7-fold. By studying human and mouse fetal gut tissues and cell lines, we demonstrate that reduced RET expression propagates throughout its gene regulatory network, exerting effects on both its positive and negative feedback components. We also provide evidencemore » that the presence of a combination of CRE variants synergistically reduces RET expression and its effects throughout the GRN. These studies show how the effects of functionally independent non-coding variants in a coordinated gene regulatory network amplify their individually small effects, providing a model for complex disorders.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Sumantra; Kapoor, Ashish; Akiyama, Jennifer A.
Common sequence variants in cis-regulatory elements (CREs) are suspected etiological causes of complex disorders. We previously identified an intronic enhancer variant in the RET gene disrupting SOX10 binding and increasing Hirschsprung disease (HSCR) risk 4-fold. We now show that two other functionally independent CRE variants, one binding Gata2 and the other binding Rarb, also reduce Ret expression and increase risk 2- and 1.7-fold. By studying human and mouse fetal gut tissues and cell lines, we demonstrate that reduced RET expression propagates throughout its gene regulatory network, exerting effects on both its positive and negative feedback components. We also provide evidencemore » that the presence of a combination of CRE variants synergistically reduces RET expression and its effects throughout the GRN. These studies show how the effects of functionally independent non-coding variants in a coordinated gene regulatory network amplify their individually small effects, providing a model for complex disorders.« less
An, Yan; Zou, Zhihong; Li, Ranran
2014-01-01
A large number of parameters are acquired during practical water quality monitoring. If all the parameters are used in water quality assessment, the computational complexity will definitely increase. In order to reduce the input space dimensions, a fuzzy rough set was introduced to perform attribute reduction. Then, an attribute recognition theoretical model and entropy method were combined to assess water quality in the Harbin reach of the Songhuajiang River in China. A dataset consisting of ten parameters was collected from January to October in 2012. Fuzzy rough set was applied to reduce the ten parameters to four parameters: BOD5, NH3-N, TP, and F. coli (Reduct A). Considering that DO is a usual parameter in water quality assessment, another reduct, including DO, BOD5, NH3-N, TP, TN, F, and F. coli (Reduct B), was obtained. The assessment results of Reduct B show a good consistency with those of Reduct A, and this means that DO is not always necessary to assess water quality. The results with attribute reduction are not exactly the same as those without attribute reduction, which can be attributed to the α value decided by subjective experience. The assessment results gained by the fuzzy rough set obviously reduce computational complexity, and are acceptable and reliable. The model proposed in this paper enhances the water quality assessment system. PMID:24675643
A strategy to load balancing for non-connectivity MapReduce job
NASA Astrophysics Data System (ADS)
Zhou, Huaping; Liu, Guangzong; Gui, Haixia
2017-09-01
MapReduce has been widely used in large scale and complex datasets as a kind of distributed programming model. Original Hash partitioning function in MapReduce often results the problem of data skew when data distribution is uneven. To solve the imbalance of data partitioning, we proposes a strategy to change the remaining partitioning index when data is skewed. In Map phase, we count the amount of data which will be distributed to each reducer, then Job Tracker monitor the global partitioning information and dynamically modify the original partitioning function according to the data skew model, so the Partitioner can change the index of these partitioning which will cause data skew to the other reducer that has less load in the next partitioning process, and can eventually balance the load of each node. Finally, we experimentally compare our method with existing methods on both synthetic and real datasets, the experimental results show our strategy can solve the problem of data skew with better stability and efficiency than Hash method and Sampling method for non-connectivity MapReduce task.
Using Interactive 3D PDF for Exploring Complex Biomedical Data: Experiences and Solutions.
Newe, Axel; Becker, Linda
2016-01-01
The Portable Document Format (PDF) is the most commonly used file format for the exchange of electronic documents. A lesser-known feature of PDF is the possibility to embed three-dimensional models and to display these models interactively with a qualified reader. This technology is well suited to present, to explore and to communicate complex biomedical data. This applies in particular for data which would suffer from a loss of information if it was reduced to a static two-dimensional projection. In this article, we present applications of 3D PDF for selected scholarly and clinical use cases in the biomedical domain. Furthermore, we present a sophisticated tool for the generation of respective PDF documents.
Hennig, Maria; Fiedler, Saskia; Jux, Christian; Thierfelder, Ludwig; Drenckhahn, Jörg-Detlef
2017-08-04
Fetal growth impacts cardiovascular health throughout postnatal life in humans. Various animal models of intrauterine growth restriction exhibit reduced heart size at birth, which negatively influences cardiac function in adulthood. The mechanistic target of rapamycin complex 1 (mTORC1) integrates nutrient and growth factor availability with cell growth, thereby regulating organ size. This study aimed at elucidating a possible involvement of mTORC1 in intrauterine growth restriction and prenatal heart growth. We inhibited mTORC1 in fetal mice by rapamycin treatment of pregnant dams in late gestation. Prenatal rapamycin treatment reduces mTORC1 activity in various organs at birth, which is fully restored by postnatal day 3. Rapamycin-treated neonates exhibit a 16% reduction in body weight compared with vehicle-treated controls. Heart weight decreases by 35%, resulting in a significantly reduced heart weight/body weight ratio, smaller left ventricular dimensions, and reduced cardiac output in rapamycin- versus vehicle-treated mice at birth. Although proliferation rates in neonatal rapamycin-treated hearts are unaffected, cardiomyocyte size is reduced, and apoptosis increased compared with vehicle-treated neonates. Rapamycin-treated mice exhibit postnatal catch-up growth, but body weight and left ventricular mass remain reduced in adulthood. Prenatal mTORC1 inhibition causes a reduction in cardiomyocyte number in adult hearts compared with controls, which is partially compensated for by an increased cardiomyocyte volume, resulting in normal cardiac function without maladaptive left ventricular remodeling. Prenatal rapamycin treatment of pregnant dams represents a new mouse model of intrauterine growth restriction and identifies an important role of mTORC1 in perinatal cardiac growth. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
Finite quasiparticle lifetime in disordered superconductors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemlicka, M.; Neilinger, P.; Trgala, M
We investigate the complex conductivity of a highly disordered MoC superconducting film with k(F)l approximate to 1, where k(F) is the Fermi wave number and l is the mean free path, derived from experimental transmission characteristics of coplanar waveguide resonators in a wide temperature range below the superconducting transition temperature T-c. We find that the original Mattis-Bardeen model with a finite quasiparticle lifetime, tau, offers a perfect description of the experimentally observed complex conductivity. We show that iota is appreciably reduced by scattering effects. Characteristics of the scattering centers are independently found by scanning tunneling spectroscopy and agree with thosemore » determined from the complex conductivity.« less
A new method to real-normalize measured complex modes
NASA Technical Reports Server (NTRS)
Wei, Max L.; Allemang, Randall J.; Zhang, Qiang; Brown, David L.
1987-01-01
A time domain subspace iteration technique is presented to compute a set of normal modes from the measured complex modes. By using the proposed method, a large number of physical coordinates are reduced to a smaller number of model or principal coordinates. Subspace free decay time responses are computed using properly scaled complex modal vectors. Companion matrix for the general case of nonproportional damping is then derived in the selected vector subspace. Subspace normal modes are obtained through eigenvalue solution of the (M sub N) sup -1 (K sub N) matrix and transformed back to the physical coordinates to get a set of normal modes. A numerical example is presented to demonstrate the outlined theory.
NASA Astrophysics Data System (ADS)
Izraelevitz, Jacob; Triantafyllou, Michael
2016-11-01
Flapping wings in nature demonstrate a large force actuation envelope, with capabilities beyond the limits of static airfoil section coefficients. Puffins, guillemots, and other auks particularly showcase this mechanism, as they are able to both generate both enough thrust to swim and lift to fly, using the same wing, by changing the wing motion trajectory. The wing trajectory is therefore an additional design criterion to be optimized along with traditional aircraft parameters, and could possibly enable dual aerial/aquatic flight. We showcase finite aspect-ratio flapping wing experiments, dynamic similarity arguments, and reduced-order models for predicting the performance of flapping wings that carry out complex motion trajectories.
Orr, Mark G; Galea, Sandro; Riddle, Matt; Kaplan, George A
2014-08-01
Understanding how to mitigate the present black-white obesity disparity in the United States is a complex issue, stemming from a multitude of intertwined causes. An appropriate but underused approach to guiding policy approaches to this problem is to account for this complexity using simulation modeling. We explored the efficacy of a policy that improved the quality of neighborhood schools in reducing racial disparities in obesity-related behavior and the dependence of this effect on social network influence and norms. We used an empirically grounded agent-based model to generate simulation experiments. We used a 2 × 2 × 2 factorial design that represented the presence or absence of improved neighborhood school quality, the presence or absence of social influence, and the type of social norm (healthy or unhealthy). Analyses focused on time trends in sociodemographic variables and diet quality. First, the quality of schools and social network influence had independent and interactive effects on diet behavior. Second, the black-white disparity in diet behavior was considerably reduced under some conditions, but never completely eliminated. Third, the degree to which the disparity in diet behavior was reduced was a function of the type of social norm that was in place; the reduction was the smallest when the type of social norm was healthy. Improving school quality can reduce, but not eliminate racial disparities in obesity-related behavior, and the degree to which this is true depends partly on social network effects. Copyright © 2014 Elsevier Inc. All rights reserved.
Sood, Abhilasha; Mehrotra, Arpit; Dhawan, Devinder K; Sandhir, Rajat
2018-04-18
Stroke is an increasingly prevalent clinical condition and second leading cause of death globally. The present study evaluated the therapeutic potential of Indian Ginseng, also known as Withania somnifera (WS), supplementation on middle cerebral artery occlusion (MCAO) induced mitochondrial dysfunctions in experimental model of ischemic stroke. Stroke was induced in animals by occluding the middle cerebral artery, followed by reperfusion injury. Ischemia reperfusion injury resulted in increased oxidative stress indicated by increased reactive oxygen species and protein carbonyl levels; compromised antioxidant system; in terms of reduced superoxide dismutase and catalase activity, along with reduction in GSH levels and the redox ratio, impaired mitochondrial functions and enhanced expression of apoptosis markers. Ischemia reperfusion injury induced mitochondrial dysfunctions in terms of (i) reduced activity of the mitochondrial respiratory chain enzymes, (ii) reduced histochemical staining of complex-II and IV, (iii) reduced in-gel activity of mitochondrial complex-I to V, (iv) mitochondrial structural changes in terms of increased mitochondrial swelling, reduced mitochondrial membrane potential and ultrastructural changes. Additionally, an increase in the activity of caspase-3 and caspase-9 was also observed, along with altered expression of apoptotic proteins Bcl-2 and Bax in MCAO animals. MCAO animals also showed significant impairment in cognitive functions assessed using Y maze test. WS pre-supplementation, on the other hand ameliorated MCAO induced oxidative stress, mitochondrial dysfunctions, apoptosis and cognitive impairments. The results show protective effect of WS pre-supplementation in ischemic stroke and are suggestive of its potential application in stroke management.
A service model for delivering care closer to home.
Dodd, Joanna; Taylor, Charlotte Elizabeth; Bunyan, Paul; White, Philippa Mary; Thomas, Siân Myra; Upton, Dominic
2011-04-01
Upton Surgery (Worcestershire) has developed a flexible and responsive service model that facilitates multi-agency support for adult patients with complex care needs experiencing an acute health crisis. The purpose of this service is to provide appropriate interventions that avoid unnecessary hospital admissions or, alternatively, provide support to facilitate early discharge from secondary care. Key aspects of this service are the collaborative and proactive identification of patients at risk, rapid creation and deployment of a reactive multi-agency team and follow-up of patients with an appropriate long-term care plan. A small team of dedicated staff (the Complex Care Team) are pivotal to coordinating and delivering this service. Key skills are sophisticated leadership and project management skills, and these have been used sensitively to challenge some traditional roles and boundaries in the interests of providing effective, holistic care for the patient.This is a practical example of early implementation of the principles underlying the Department of Health's (DH) recent Best Practice Guidance, 'Delivering Care Closer to Home' (DH, July 2008) and may provide useful learning points for other general practice surgeries considering implementing similar models. This integrated case management approach has had enthusiastic endorsement from patients and carers. In addition to the enhanced quality of care and experience for the patient, this approach has delivered value for money. Secondary care costs have been reduced by preventing admissions and also by reducing excess bed-days. The savings achieved have justified the ongoing commitment to the service and the staff employed in the Complex Care Team. The success of this service model has been endorsed recently by the 'Customer Care' award by 'Management in Practice'. The Surgery was also awarded the 'Practice of the Year' award for this and a number of other customer-focussed projects.
Das, Narhari; Abdur Rahman, S. M.
2016-01-01
Purpose. The present study was designed to investigate the antinociceptive, anxiolytic, CNS depressant, and hypoglycemic effects of the naproxen metal complexes. Methods. The antinociceptive activity was evaluated by acetic acid-induced writhing method and radiant heat tail-flick method while anxiolytic activity was evaluated by elevated plus maze model. The CNS depressant activity of naproxen metal complexes was assessed using phenobarbitone-induced sleeping time test and the hypoglycemic test was performed using oral glucose tolerance test. Results. Metal complexes significantly (P < 0.001) reduced the number of abdominal muscle contractions induced by 0.7% acetic acid solution in a dose dependent manner. At the dose of 25 mg/kg body weight p.o. copper, cobalt, and zinc complexes exhibited higher antinociceptive activity having 59.15%, 60.56%, and 57.75% of writhing inhibition, respectively, than the parent ligand naproxen (54.93%). In tail-flick test, at both doses of 25 and 50 mg/kg, the copper, cobalt, silver, and zinc complexes showed higher antinociceptive activity after 90 minutes than the parent drug naproxen. In elevated plus maze (EPM) model the cobalt and zinc complexes of naproxen showed significant anxiolytic effects in dose dependent manner, while the copper, cobalt, and zinc complexes showed significant CNS depressant and hypoglycemic activity. Conclusion. The present study demonstrated that copper, cobalt, and zinc complexes possess higher antinociceptive, anxiolytic, CNS depressant, and hypoglycemic properties than the parent ligand. PMID:27478435
Copper-phospholipid interaction at cell membrane model hydrophobic surfaces.
Mlakar, Marina; Cuculić, Vlado; Frka, Sanja; Gašparović, Blaženka
2018-04-01
Detailed investigation of Cu (II) binding with natural lipid phosphatidylglycerol (PG) in aqueous solution was carried out by voltammetric measurements at the mercury drop electrode, complemented by monolayer studies in a Langmuir trough and electrophoretic measurements, all used as models for hydrophobic cell membranes. Penetration of copper ions into the PG layer was facilitated by the formation of hydrophilic Cu-Phenanthroline (Phen) complex in the subphase, followed by the mixed ligand Cu-Phen-PG complex formation at the hydrophobic interface. Electrophoretic measurements indicated a comparatively low abundance of the formed mixed ligand complex within the PG vesicles, resulting it the zeta potential change of +0.83mV, while monolayer studies confirmed their co-existence at the interface. The Cu-Phen-PG complex was identified in the pH range from 6 to 9. The stoichiometry of the complex ([PhenCuOHPG]), as well as its stability and kinetics of formation, were determined at the mercury drop electrode. Cu-Phen-PG reduces quasireversibly at about -0.7V vs. Ag/AgCl including reactant adsorption, followed by irreversible mixed complex dissociation, indicating a two-electron transfer - chemical reaction (EC mechanism). Consequently, the surface concentration (γ) of the adsorbed [PhenCuOHPG] complex at the hydrophobic electrode surface was calculated to be (3.35±0.67)×10 -11 molcm -2 . Information on the mechanism of Cu (II) - lipid complex formation is a significant contribution to the understanding of complex processes at natural cell membranes. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Singh, Th. David; Sumitra, Ch.; Yaiphaba, N.; Devi, H. Debecca; Devi, M. Indira; Singh, N. Rajmuhon
2005-04-01
The coordination chemistry of glutathione reduced (GSH) is of great importance as it acts as excellent model system for the binding of metal ions. The GSH complexation with metal ions is involved in the toxicology of different metal ions. Its coordination behaviour for soft metal ions and hard metal ions is found different because of the structure of GSH and its different potential binding sites. In our work we have studied two chemically dissimilar metal ions viz. Pr(III), which prefer hard donor site like carboxylic groups and Zn(II) the soft metal ion which prefer peptide-NH and sulphydryl groups. The absorption difference and comparative absorption spectroscopy involving 4f-4f transitions of the heterobimetallic Complexation of GSH with Pr(III) and Zn(II) has been explored in aqueous and aquated organic solvents. The variation in the energy parameters like Slater-Condon ( F K), Racah ( E K) and Lande ( ξ4f), Nephelauxetic parameter ( β) and bonding parameter ( b1/2) are computed to explain the nature of complexation.
Intelligent simulation of aquatic environment economic policy coupled ABM and SD models.
Wang, Huihui; Zhang, Jiarui; Zeng, Weihua
2018-03-15
Rapid urbanization and population growth have resulted in serious water shortage and pollution of the aquatic environment, which are important reasons for the complex increase in environmental deterioration in the region. This study examines the environmental consequences and economic impacts of water resource shortages under variant economic policies; however, this requires complex models that jointly consider variant agents and sectors within a systems perspective. Thus, we propose a complex system model that couples multi-agent based models (ABM) and system dynamics (SD) models to simulate the impact of alternative economic policies on water use and pricing. Moreover, this model took the constraint of the local water resources carrying capacity into consideration. Results show that to achieve the 13th Five Year Plan targets in Dianchi, water prices for local residents and industries should rise to 3.23 and 4.99 CNY/m 3 , respectively. The corresponding sewage treatment fees for residents and industries should rise to 1.50 and 2.25 CNY/m 3 , respectively, assuming comprehensive adjustment of industrial structure and policy. At the same time, the local government should exercise fine-scale economic policy combined with emission fees assessed for those exceeding a standard, and collect fines imposed as punishment for enterprises that exceed emission standards. When fines reach 500,000 CNY, the total number of enterprises that exceed emission standards in the basin can be controlled within 1%. Moreover, it is suggested that the volume of water diversion in Dianchi should be appropriately reduced to 3.06×10 8 m 3 . The reduced expense of water diversion should provide funds to use for the construction of recycled water facilities. Then the local rise in the rate of use of recycled water should reach 33%, and 1.4 CNY/m 3 for the price of recycled water could be provided to ensure the sustainable utilization of local water resources. Copyright © 2017 Elsevier B.V. All rights reserved.
Modeling software systems by domains
NASA Technical Reports Server (NTRS)
Dippolito, Richard; Lee, Kenneth
1992-01-01
The Software Architectures Engineering (SAE) Project at the Software Engineering Institute (SEI) has developed engineering modeling techniques that both reduce the complexity of software for domain-specific computer systems and result in systems that are easier to build and maintain. These techniques allow maximum freedom for system developers to apply their domain expertise to software. We have applied these techniques to several types of applications, including training simulators operating in real time, engineering simulators operating in non-real time, and real-time embedded computer systems. Our modeling techniques result in software that mirrors both the complexity of the application and the domain knowledge requirements. We submit that the proper measure of software complexity reflects neither the number of software component units nor the code count, but the locus of and amount of domain knowledge. As a result of using these techniques, domain knowledge is isolated by fields of engineering expertise and removed from the concern of the software engineer. In this paper, we will describe kinds of domain expertise, describe engineering by domains, and provide relevant examples of software developed for simulator applications using the techniques.
ERIC Educational Resources Information Center
Kumar; Payal; Singhal, Manish
2012-01-01
Implementation of change in an organisation through culture can elicit a wide array of reactions from organisational members, spanning from acceptance to resistance. Drawing on Hatch's cultural dynamics model and on Wegner's social theory of learning, this paper dwells on an underdeveloped area in the extant literature, namely understanding change…
Thermal Indices and Thermophysiological Modeling for Heat Stress.
Havenith, George; Fiala, Dusan
2015-12-15
The assessment of the risk of human exposure to heat is a topic as relevant today as a century ago. The introduction and use of heat stress indices and models to predict and quantify heat stress and heat strain has helped to reduce morbidity and mortality in industrial, military, sports, and leisure activities dramatically. Models used range from simple instruments that attempt to mimic the human-environment heat exchange to complex thermophysiological models that simulate both internal and external heat and mass transfer, including related processes through (protective) clothing. This article discusses the most commonly used indices and models and looks at how these are deployed in the different contexts of industrial, military, and biometeorological applications, with focus on use to predict related thermal sensations, acute risk of heat illness, and epidemiological analysis of morbidity and mortality. A critical assessment is made of tendencies to use simple indices such as WBGT in more complex conditions (e.g., while wearing protective clothing), or when employed in conjunction with inappropriate sensors. Regarding the more complex thermophysiological models, the article discusses more recent developments including model individualization approaches and advanced systems that combine simulation models with (body worn) sensors to provide real-time risk assessment. The models discussed in the article range from historical indices to recent developments in using thermophysiological models in (bio) meteorological applications as an indicator of the combined effect of outdoor weather settings on humans. Copyright © 2015 John Wiley & Sons, Inc.
Climate Change Impacts on Natural Sulfur Production: Ocean Acidification and Community Shifts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menzo, Zachary; Elliott, Scott; Hartin, Corinne
Utilizing the reduced-complexity model Hector, a regional scale analysis was conducted quantifying the possible effects climate change may have on dimethyl sulfide (DMS) emissions within the oceans. The investigation began with a review of the sulfur cycle in modern Earth system models. We then expanded the biogeochemical representation within Hector to include a natural ocean component while accounting for acidification and planktonic community shifts. The report presents results from both a latitudinal and a global perspective. This new approach highlights disparate outcomes which have been inadequately characterized via planetary averages in past publications. Our findings suggest that natural sulfur emissionsmore » (ESN) may exert a forcing up to 4 times that of the CO2 marine feedback, 0.62 and 0.15 Wm-2, respectively, and reverse the radiative forcing sign in low latitudes. Additionally, sensitivity tests were conducted to demonstrate the need for further examination of the DMS loop. Ultimately, the present work attempts to include dynamic ESN within reduced-complexity simulations of the sulfur cycle, illustrating its impact on the global radiative budget« less
Gingras, Guillaume; Guertin, Marie-Hélène; Laprise, Jean-François; Drolet, Mélanie; Brisson, Marc
2016-01-01
Background We conducted a systematic review of mathematical models of transmission dynamic of Clostridium difficile infection (CDI) in healthcare settings, to provide an overview of existing models and their assessment of different CDI control strategies. Methods We searched MEDLINE, EMBASE and Web of Science up to February 3, 2016 for transmission-dynamic models of Clostridium difficile in healthcare settings. The models were compared based on their natural history representation of Clostridium difficile, which could include health states (S-E-A-I-R-D: Susceptible-Exposed-Asymptomatic-Infectious-Resistant-Deceased) and the possibility to include healthcare workers and visitors (vectors of transmission). Effectiveness of interventions was compared using the relative reduction (compared to no intervention or current practice) in outcomes such as incidence of colonization, CDI, CDI recurrence, CDI mortality, and length of stay. Results Nine studies describing six different models met the inclusion criteria. Over time, the models have generally increased in complexity in terms of natural history and transmission dynamics and number/complexity of interventions/bundles of interventions examined. The models were categorized into four groups with respect to their natural history representation: S-A-I-R, S-E-A-I, S-A-I, and S-E-A-I-R-D. Seven studies examined the impact of CDI control strategies. Interventions aimed at controlling the transmission, lowering CDI vulnerability and reducing the risk of recurrence/mortality were predicted to reduce CDI incidence by 3–49%, 5–43% and 5–29%, respectively. Bundles of interventions were predicted to reduce CDI incidence by 14–84%. Conclusions Although CDI is a major public health problem, there are very few published transmission-dynamic models of Clostridium difficile. Published models vary substantially in the interventions examined, the outcome measures used and the representation of the natural history of Clostridium difficile, which make it difficult to synthesize results and provide a clear picture of optimal intervention strategies. Future modeling efforts should pay specific attention to calibration, structural uncertainties, and transparent reporting practices. PMID:27690247
A practical approach to Sasang constitutional diagnosis using vocal features
2013-01-01
Background Sasang constitutional medicine (SCM) is a type of tailored medicine that divides human beings into four Sasang constitutional (SC) types. Diagnosis of SC types is crucial to proper treatment in SCM. Voice characteristics have been used as an essential clue for diagnosing SC types. In the past, many studies tried to extract quantitative vocal features to make diagnosis models; however, these studies were flawed by limited data collected from one or a few sites, long recording time, and low accuracy. We propose a practical diagnosis model having only a few variables, which decreases model complexity. This in turn, makes our model appropriate for clinical applications. Methods A total of 2,341 participants’ voice recordings were used in making a SC classification model and to test the generalization ability of the model. Although the voice data consisted of five vowels and two repeated sentences per participant, we used only the sentence part for our study. A total of 21 features were extracted, and an advanced feature selection method—the least absolute shrinkage and selection operator (LASSO)—was applied to reduce the number of variables for classifier learning. A SC classification model was developed using multinomial logistic regression via LASSO. Results We compared the proposed classification model to the previous study, which used both sentences and five vowels from the same patient’s group. The classification accuracies for the test set were 47.9% and 40.4% for male and female, respectively. Our result showed that the proposed method was superior to the previous study in that it required shorter voice recordings, is more applicable to practical use, and had better generalization performance. Conclusions We proposed a practical SC classification method and showed that our model having fewer variables outperformed the model having many variables in the generalization test. We attempted to reduce the number of variables in two ways: 1) the initial number of candidate features was decreased by considering shorter voice recording, and 2) LASSO was introduced for reducing model complexity. The proposed method is suitable for an actual clinical environment. Moreover, we expect it to yield more stable results because of the model’s simplicity. PMID:24200041
Multi-source micro-friction identification for a class of cable-driven robots with passive backbone
NASA Astrophysics Data System (ADS)
Tjahjowidodo, Tegoeh; Zhu, Ke; Dailey, Wayne; Burdet, Etienne; Campolo, Domenico
2016-12-01
This paper analyses the dynamics of cable-driven robots with a passive backbone and develops techniques for their dynamic identification, which are tested on the H-Man, a planar cabled differential transmission robot for haptic interaction. The mechanism is optimized for human-robot interaction by accounting for the cost-benefit-ratio of the system, specifically by eliminating the necessity of an external force sensor to reduce the overall cost. As a consequence, this requires an effective dynamic model for accurate force feedback applications which include friction behavior in the system. We first consider the significance of friction in both the actuator and backbone spaces. Subsequently, we study the required complexity of the stiction model for the application. Different models representing different levels of complexity are investigated, ranging from the conventional approach of Coulomb to an advanced model which includes hysteresis. The results demonstrate each model's ability to capture the dynamic behavior of the system. In general, it is concluded that there is a trade-off between model accuracy and the model cost.
Critical phenomena at the complex tensor ordering phase transition
NASA Astrophysics Data System (ADS)
Boettcher, Igor; Herbut, Igor F.
2018-02-01
We investigate the critical properties of the phase transition towards complex tensor order that has been proposed to occur in spin-orbit-coupled superconductors. For this purpose, we formulate the bosonic field theory for fluctuations of the complex irreducible second-rank tensor order parameter close to the transition. We then determine the scale dependence of the couplings of the theory by means of the perturbative renormalization group (RG). For the isotropic system, we generically detect a fluctuation-induced first-order phase transition. The initial values for the running couplings are determined by the underlying microscopic model for the tensorial order. As an example, we study three-dimensional Luttinger semimetals with electrons at a quadratic band-touching point. Whereas the strong-coupling transition of the model receives substantial fluctuation corrections, the weak-coupling transition at low temperatures is rendered only weakly first order due to the presence of a fixed point in the vicinity of the RG trajectory. If the number of fluctuating complex components of the order parameter is reduced by cubic anisotropy, the theory maps onto the field theory for frustrated magnetism.
Ibáñez, Juan José; Ortega, David; Campos, Daniel; Khalidi, Lamya; Méndez, Vicenç
2015-01-01
In this paper, we explore the conditions that led to the origins and development of the Near Eastern Neolithic using mathematical modelling of obsidian exchange. The analysis presented expands on previous research, which established that the down-the-line model could not explain long-distance obsidian distribution across the Near East during this period. Drawing from outcomes of new simulations and their comparison with archaeological data, we provide results that illuminate the presence of complex networks of interaction among the earliest farming societies. We explore a network prototype of obsidian exchange with distant links which replicates the long-distance movement of ideas, goods and people during the Early Neolithic. Our results support the idea that during the first (Pre-Pottery Neolithic A) and second (Pre-Pottery Neolithic B) phases of the Early Neolithic, the complexity of obsidian exchange networks gradually increased. We propose then a refined model (the optimized distant link model) whereby long-distance exchange was largely operated by certain interconnected villages, resulting in the appearance of a relatively homogeneous Neolithic cultural sphere. We hypothesize that the appearance of complex interaction and exchange networks reduced risks of isolation caused by restricted mobility as groups settled and argue that these networks partially triggered and were crucial for the success of the Neolithic Revolution. Communities became highly dynamic through the sharing of experiences and objects, while the networks that developed acted as a repository of innovations, limiting the risk of involution. PMID:25948614
Rands, Sean A.
2011-01-01
Functional explanations of behaviour often propose optimal strategies for organisms to follow. These ‘best’ strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or ‘rules-of-thumb’ that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose – particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour. PMID:21765938
Rands, Sean A
2011-01-01
Functional explanations of behaviour often propose optimal strategies for organisms to follow. These 'best' strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or 'rules-of-thumb' that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose - particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour.
Life Prediction of Large Lithium-Ion Battery Packs with Active and Passive Balancing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Ying; Smith, Kandler A; Zane, Regan
Lithium-ion battery packs take a major part of large-scale stationary energy storage systems. One challenge in reducing battery pack cost is to reduce pack size without compromising pack service performance and lifespan. Prognostic life model can be a powerful tool to handle the state of health (SOH) estimate and enable active life balancing strategy to reduce cell imbalance and extend pack life. This work proposed a life model using both empirical and physical-based approaches. The life model described the compounding effect of different degradations on the entire cell with an empirical model. Then its lower-level submodels considered the complex physicalmore » links between testing statistics (state of charge level, C-rate level, duty cycles, etc.) and the degradation reaction rates with respect to specific aging mechanisms. The hybrid approach made the life model generic, robust and stable regardless of battery chemistry and application usage. The model was validated with a custom pack with both passive and active balancing systems implemented, which created four different aging paths in the pack. The life model successfully captured the aging trajectories of all four paths. The life model prediction errors on capacity fade and resistance growth were within +/-3% and +/-5% of the experiment measurements.« less
Green, Dale E; Hamory, Bruce H; Terrell, Grace E; O'Connell, Jasmine
2017-08-01
Over the course of a single year, Cornerstone Health Care, a multispecialty group practice in North Carolina, redesigned the underlying care models for 5 of its highest-risk populations-late-stage congestive heart failure, oncology, Medicare-Medicaid dual eligibles, those with 5 or more chronic conditions, and the most complex patients with multiple late-stage chronic conditions. At the 1-year mark, the results of the program were analyzed. Overall costs for the patients studied were reduced by 12.7% compared to the year before enrollment. All fully implemented programs delivered between 10% and 16% cost savings. The key area for savings factor was hospitalization, which was reduced by 30% across all programs. The greatest area of cost increase was "other," a category that consisted in large part of hospice services. Full implementation was key; 2 primary care sites that reverted to more traditional models failed to show the same pattern of savings.
Adaptive System Modeling for Spacecraft Simulation
NASA Technical Reports Server (NTRS)
Thomas, Justin
2011-01-01
This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: Creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior. Automatically updates/calibrates system models using the latest streaming sensor data. Creates device specific models that capture the exact behavior of devices of the same type. Adapts to evolving systems. Can reduce computational complexity (faster simulations).
Well balancing of the SWE schemes for moving-water steady flows
NASA Astrophysics Data System (ADS)
Caleffi, Valerio; Valiani, Alessandro
2017-08-01
In this work, the exact reproduction of a moving-water steady flow via the numerical solution of the one-dimensional shallow water equations is studied. A new scheme based on a modified version of the HLLEM approximate Riemann solver (Dumbser and Balsara (2016) [18]) that exactly preserves the total head and the discharge in the simulation of smooth steady flows and that correctly dissipates mechanical energy in the presence of hydraulic jumps is presented. This model is compared with a selected set of schemes from the literature, including models that exactly preserve quiescent flows and models that exactly preserve moving-water steady flows. The comparison highlights the strengths and weaknesses of the different approaches. In particular, the results show that the increase in accuracy in the steady state reproduction is counterbalanced by a reduced robustness and numerical efficiency of the models. Some solutions to reduce these drawbacks, at the cost of increased algorithm complexity, are presented.
Low-complexity stochastic modeling of wall-bounded shear flows
NASA Astrophysics Data System (ADS)
Zare, Armin
Turbulent flows are ubiquitous in nature and they appear in many engineering applications. Transition to turbulence, in general, increases skin-friction drag in air/water vehicles compromising their fuel-efficiency and reduces the efficiency and longevity of wind turbines. While traditional flow control techniques combine physical intuition with costly experiments, their effectiveness can be significantly enhanced by control design based on low-complexity models and optimization. In this dissertation, we develop a theoretical and computational framework for the low-complexity stochastic modeling of wall-bounded shear flows. Part I of the dissertation is devoted to the development of a modeling framework which incorporates data-driven techniques to refine physics-based models. We consider the problem of completing partially known sample statistics in a way that is consistent with underlying stochastically driven linear dynamics. Neither the statistics nor the dynamics are precisely known. Thus, our objective is to reconcile the two in a parsimonious manner. To this end, we formulate optimization problems to identify the dynamics and directionality of input excitation in order to explain and complete available covariance data. For problem sizes that general-purpose solvers cannot handle, we develop customized optimization algorithms based on alternating direction methods. The solution to the optimization problem provides information about critical directions that have maximal effect in bringing model and statistics in agreement. In Part II, we employ our modeling framework to account for statistical signatures of turbulent channel flow using low-complexity stochastic dynamical models. We demonstrate that white-in-time stochastic forcing is not sufficient to explain turbulent flow statistics and develop models for colored-in-time forcing of the linearized Navier-Stokes equations. We also examine the efficacy of stochastically forced linearized NS equations and their parabolized equivalents in the receptivity analysis of velocity fluctuations to external sources of excitation as well as capturing the effect of the slowly-varying base flow on streamwise streaks and Tollmien-Schlichting waves. In Part III, we develop a model-based approach to design surface actuation of turbulent channel flow in the form of streamwise traveling waves. This approach is capable of identifying the drag reducing trends of traveling waves in a simulation-free manner. We also use the stochastically forced linearized NS equations to examine the Reynolds number independent effects of spanwise wall oscillations on drag reduction in turbulent channel flows. This allows us to extend the predictive capability of our simulation-free approach to high Reynolds numbers.
Direct-to-digital holography reduction of reference hologram noise and fourier space smearing
Voelkl, Edgar
2006-06-27
Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
Memory-induced nonlinear dynamics of excitation in cardiac diseases.
Landaw, Julian; Qu, Zhilin
2018-04-01
Excitable cells, such as cardiac myocytes, exhibit short-term memory, i.e., the state of the cell depends on its history of excitation. Memory can originate from slow recovery of membrane ion channels or from accumulation of intracellular ion concentrations, such as calcium ion or sodium ion concentration accumulation. Here we examine the effects of memory on excitation dynamics in cardiac myocytes under two diseased conditions, early repolarization and reduced repolarization reserve, each with memory from two different sources: slow recovery of a potassium ion channel and slow accumulation of the intracellular calcium ion concentration. We first carry out computer simulations of action potential models described by differential equations to demonstrate complex excitation dynamics, such as chaos. We then develop iterated map models that incorporate memory, which accurately capture the complex excitation dynamics and bifurcations of the action potential models. Finally, we carry out theoretical analyses of the iterated map models to reveal the underlying mechanisms of memory-induced nonlinear dynamics. Our study demonstrates that the memory effect can be unmasked or greatly exacerbated under certain diseased conditions, which promotes complex excitation dynamics, such as chaos. The iterated map models reveal that memory converts a monotonic iterated map function into a nonmonotonic one to promote the bifurcations leading to high periodicity and chaos.
Memory-induced nonlinear dynamics of excitation in cardiac diseases
NASA Astrophysics Data System (ADS)
Landaw, Julian; Qu, Zhilin
2018-04-01
Excitable cells, such as cardiac myocytes, exhibit short-term memory, i.e., the state of the cell depends on its history of excitation. Memory can originate from slow recovery of membrane ion channels or from accumulation of intracellular ion concentrations, such as calcium ion or sodium ion concentration accumulation. Here we examine the effects of memory on excitation dynamics in cardiac myocytes under two diseased conditions, early repolarization and reduced repolarization reserve, each with memory from two different sources: slow recovery of a potassium ion channel and slow accumulation of the intracellular calcium ion concentration. We first carry out computer simulations of action potential models described by differential equations to demonstrate complex excitation dynamics, such as chaos. We then develop iterated map models that incorporate memory, which accurately capture the complex excitation dynamics and bifurcations of the action potential models. Finally, we carry out theoretical analyses of the iterated map models to reveal the underlying mechanisms of memory-induced nonlinear dynamics. Our study demonstrates that the memory effect can be unmasked or greatly exacerbated under certain diseased conditions, which promotes complex excitation dynamics, such as chaos. The iterated map models reveal that memory converts a monotonic iterated map function into a nonmonotonic one to promote the bifurcations leading to high periodicity and chaos.
NASA Technical Reports Server (NTRS)
Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff
2016-01-01
The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.
Leon, I E; Porro, V; Di Virgilio, A L; Naso, L G; Williams, P A M; Bollati-Fogolín, M; Etcheverry, S B
2014-01-01
Flavonoids are a large family of polyphenolic compounds synthesized by plants. They display interesting biological effects mainly related to their antioxidant properties. On the other hand, vanadium compounds also exhibit different biological and pharmacological effects in cell culture and in animal models. Since coordination of ligands to metals can improve or change the pharmacological properties, we report herein, for the first time, a detailed study of the mechanisms of action of an oxidovanadium(IV) complex with the flavonoid silibinin, Na2[VO(silibinin)2]·6H2O (VOsil), in a model of the human osteosarcoma derived cell line MG-63. The complex inhibited the viability of osteosarcoma cells in a dose-dependent manner with a greater potency than that of silibinin and oxidovanadium(IV) (p < 0.01), demonstrating the benefit of complexation. Cytotoxicity and genotoxicity studies also showed a concentration effect for VOsil. The increase in the levels of reactive oxygen species and the decrease of the ratio of the amount of reduced glutathione to the amount of oxidized glutathione were involved in the deleterious effects of the complex. Besides, the complex caused cell cycle arrest and activated caspase 3, triggering apoptosis as determined by flow cytometry. As a whole, these results show the main mechanisms of the deleterious effects of VOsil in the osteosarcoma cell line, demonstrating that this complex is a promising compound for cancer treatments.
Active vibration control with model correction on a flexible laboratory grid structure
NASA Technical Reports Server (NTRS)
Schamel, George C., II; Haftka, Raphael T.
1991-01-01
This paper presents experimental and computational comparisons of three active damping control laws applied to a complex laboratory structure. Two reduced structural models were used with one model being corrected on the basis of measured mode shapes and frequencies. Three control laws were investigated, a time-invariant linear quadratic regulator with state estimation and two direct rate feedback control laws. Experimental results for all designs were obtained with digital implementation. It was found that model correction improved the agreement between analytical and experimental results. The best agreement was obtained with the simplest direct rate feedback control.
Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization
NASA Astrophysics Data System (ADS)
Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane
2003-01-01
The present global platform for simulation and design of multi-model configurations treat shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods together with incomplete expressions of gradients are used. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach in real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.
Modelling mitigation options to reduce diffuse nitrogen water pollution from agriculture.
Bouraoui, Fayçal; Grizzetti, Bruna
2014-01-15
Agriculture is responsible for large scale water quality degradation and is estimated to contribute around 55% of the nitrogen entering the European Seas. The key policy instrument for protecting inland, transitional and coastal water resources is the Water Framework Directive (WFD). Reducing nutrient losses from agriculture is crucial to the successful implementation of the WFD. There are several mitigation measures that can be implemented to reduce nitrogen losses from agricultural areas to surface and ground waters. For the selection of appropriate measures, models are useful for quantifying the expected impacts and the associated costs. In this article we review some of the models used in Europe to assess the effectiveness of nitrogen mitigation measures, ranging from fertilizer management to the construction of riparian areas and wetlands. We highlight how the complexity of models is correlated with the type of scenarios that can be tested, with conceptual models mostly used to evaluate the impact of reduced fertilizer application, and the physically-based models used to evaluate the timing and location of mitigation options and the response times. We underline the importance of considering the lag time between the implementation of measures and effects on water quality. Models can be effective tools for targeting mitigation measures (identifying critical areas and timing), for evaluating their cost effectiveness, for taking into consideration pollution swapping and considering potential trade-offs in contrasting environmental objectives. Models are also useful for involving stakeholders during the development of catchments mitigation plans, increasing their acceptability. © 2013.
Fernandes, M Marques; Scheinost, A C; Baeyens, B
2016-08-01
The credibility of long-term safety assessments of radioactive waste repositories may be greatly enhanced by a molecular level understanding of the sorption processes onto individual minerals present in the near- and far-fields. In this study we couple macroscopic sorption experiments to surface complexation modelling and spectroscopic investigations, including extended X-ray absorption fine structure (EXAFS) and time-resolved laser fluorescence spectroscopies (TRLFS), to elucidate the uptake mechanism of trivalent lanthanides and actinides (Ln/An(III)) by montmorillonite in the absence and presence of dissolved carbonate. Based on the experimental sorption isotherms for the carbonate-free system, the previously developed 2 site protolysis non electrostatic surface complexation and cation exchange (2SPNE SC/CE) model needed to be complemented with an additional surface complexation reaction onto weak sites. The fitting of sorption isotherms in the presence of carbonate required refinement of the previously published model by reducing the strong site capacity and by adding the formation of Ln/An(III)-carbonato complexes both on strong and weak sites. EXAFS spectra of selected Am samples and TRLFS spectra of selected Cm samples corroborate the model assumptions by showing the existence of different surface complexation sites and evidencing the formation of Ln/An(III) carbonate surface complexes. In the absence of carbonate and at low loadings, Ln/An(III) form strong inner-sphere complexes through binding to three Al(O,OH)6 octahedra, most likely by occupying vacant sites in the octahedral layers of montmorillonite, which are exposed on {010} and {110} edge faces. At higher loadings, Ln/An(III) binds to only one Al octahedron, forming a weaker, edge-sharing surface complex. In the presence of carbonate, we identified a ternary mono- or dicarbonato Ln/An(III) complex binding directly to one Al(O,OH)6 octahedron, revealing that type-A ternary complexes form with the one or two carbonate groups pointing away from the surface into the solution phase. Within the spectroscopically observable concentration range these complexes could only be identified on the weak sites, in line with the small strong site capacity suggested by the refined sorption model. When the solubility of carbonates was exceeded, formation of an Am carbonate hydroxide could be identified. The excellent agreement between the thermodynamic model parameters obtained by fitting the macroscopic data, and the spectroscopically identified mechanisms, demonstrates the mature state of the 2SPNE SC/CE model for predicting and quantifying the retention of Ln/An(III) elements by montmorillonite-rich clay rocks. Copyright © 2016 Elsevier Ltd. All rights reserved.
Dutt, Arun K
2005-09-22
We have investigated the short-wave instability due to Hopf bifurcation in a reaction-diffusion model of glycolytic oscillations. Very low values of the ratio d of the diffusion coefficient of the inhibitor (ATP) and that of the activator (ADP) do help to create short waves, whereas high values of the ratio d and the complexing reaction of the activator ADP reduces drastically the wave-instability domain, generating much longer wavelengths.
Reduced Basis and Stochastic Modeling of Liquid Propellant Rocket Engine as a Complex System
2015-07-02
additions, the approach will be extended to a real- gas system so that it can be used to investigate model multi-element liquid rocket combustors in a...Sirignano (2010). In the following discussion, we examine the various conservation principles for the gas and liquid phases. The hyperbolic nature of the...conservation equations for the gas and liquid phases. Mass conservation of individual chemical species or of individual classes of liquid droplets will
Yin, J.; Haggerty, R.; Stoliker, D.L.; Kent, D.B.; Istok, J.D.; Greskowiak, J.; Zachara, J.M.
2011-01-01
In the 300 Area of a U(VI)-contaminated aquifer at Hanford, Washington, USA, inorganic carbon and major cations, which have large impacts on U(VI) transport, change on an hourly and seasonal basis near the Columbia River. Batch and column experiments were conducted to investigate the factors controlling U(VI) adsorption/desorption by changing chemical conditions over time. Low alkalinity and low Ca concentrations (Columbia River water) enhanced adsorption and reduced aqueous concentrations. Conversely, high alkalinity and high Ca concentrations (Hanford groundwater) reduced adsorption and increased aqueous concentrations of U(VI). An equilibrium surface complexation model calibrated using laboratory batch experiments accounted for the decrease in U(VI) adsorption observed with increasing (bi)carbonate concentrations and other aqueous chemical conditions. In the column experiment, alternating pulses of river and groundwater caused swings in aqueous U(VI) concentration. A multispecies multirate surface complexation reactive transport model simulated most of the major U(VI) changes in two column experiments. The modeling results also indicated that U(VI) transport in the studied sediment could be simulated by using a single kinetic rate without loss of accuracy in the simulations. Moreover, the capability of the model to predict U(VI) transport in Hanford groundwater under transient chemical conditions depends significantly on the knowledge of real-time change of local groundwater chemistry. Copyright 2011 by the American Geophysical Union.
Yin, Jun; Haggerty, Roy; Stoliker, Deborah L.; Kent, Douglas B.; Istok, Jonathan D.; Greskowiak, Janek; Zachara, John M.
2011-01-01
In the 300 Area of a U(VI)-contaminated aquifer at Hanford, Washington, USA, inorganic carbon and major cations, which have large impacts on U(VI) transport, change on an hourly and seasonal basis near the Columbia River. Batch and column experiments were conducted to investigate the factors controlling U(VI) adsorption/desorption by changing chemical conditions over time. Low alkalinity and low Ca concentrations (Columbia River water) enhanced adsorption and reduced aqueous concentrations. Conversely, high alkalinity and high Ca concentrations (Hanford groundwater) reduced adsorption and increased aqueous concentrations of U(VI). An equilibrium surface complexation model calibrated using laboratory batch experiments accounted for the decrease in U(VI) adsorption observed with increasing (bi)carbonate concentrations and other aqueous chemical conditions. In the column experiment, alternating pulses of river and groundwater caused swings in aqueous U(VI) concentration. A multispecies multirate surface complexation reactive transport model simulated most of the major U(VI) changes in two column experiments. The modeling results also indicated that U(VI) transport in the studied sediment could be simulated by using a single kinetic rate without loss of accuracy in the simulations. Moreover, the capability of the model to predict U(VI) transport in Hanford groundwater under transient chemical conditions depends significantly on the knowledge of real-time change of local groundwater chemistry.
Schizophrenia: an integrative approach to modelling a complex disorder
Robertson, George S.; Hori, Sarah E.; Powell, Kelly J.
2006-01-01
The discovery of candidate susceptibility genes for schizophrenia and the generation of mice lacking proteins that reproduce biochemical processes that are disrupted in this mental illness offer unprecedented opportunities for improved modelling of this complex disorder. Several lines of evidence indicate that obstetrical complications, as well as fetal or neonatal exposure to viral infection, are predisposing events for some forms of schizophrenia. These environmental events can be modelled in animals, resulting in some of the characteristic features of schizophrenia; however, animal models have yet to be developed that encompass both environmental and genetic aspects of this mental illness. A large number of candidate schizophrenia susceptibility genes have been identified that encode proteins implicated in the regulation of synaptic plasticity, neurotransmission, neuronal migration, cell adherence, signal transduction, energy metabolism and neurite outgrowth. In support of the importance of these processes in schizophrenia, mice that have reduced levels or completely lack proteins that control glutamatergic neurotransmission, neuronal migration, cell adherence, signal transduction, neurite outgrowth and synaptic plasticity display many features reminiscent of schizophrenia. In the present review, we discuss strategies for modelling schizophrenia that involve treating mice that bear these mutations in a variety of ways to better model both environmental and genetic factors responsible for this complex mental illness according to a “two-hit hypothesis.” Because rodents are able to perform complex cognitive tasks using odour but not visual or auditory cues, we hypothesize that olfactory-based tests of cognitive performance should be used to search for novel therapeutics that ameliorate the cognitive deficits that are a feature of this devastating mental disorder. PMID:16699601
McOmish, Caitlin E; Burrows, Emma L; Hannan, Anthony J
2014-10-01
Psychiatric disorders affect a substantial proportion of the population worldwide. This high prevalence, combined with the chronicity of the disorders and the major social and economic impacts, creates a significant burden. As a result, an important priority is the development of novel and effective interventional strategies for reducing incidence rates and improving outcomes. This review explores the progress that has been made to date in establishing valid animal models of psychiatric disorders, while beginning to unravel the complex factors that may be contributing to the limitations of current methodological approaches. We propose some approaches for optimizing the validity of animal models and developing effective interventions. We use schizophrenia and autism spectrum disorders as examples of disorders for which development of valid preclinical models, and fully effective therapeutics, have proven particularly challenging. However, the conclusions have relevance to various other psychiatric conditions, including depression, anxiety and bipolar disorders. We address the key aspects of construct, face and predictive validity in animal models, incorporating genetic and environmental factors. Our understanding of psychiatric disorders is accelerating exponentially, revealing extraordinary levels of genetic complexity, heterogeneity and pleiotropy. The environmental factors contributing to individual, and multiple, disorders also exhibit breathtaking complexity, requiring systematic analysis to experimentally explore the environmental mediators and modulators which constitute the 'envirome' of each psychiatric disorder. Ultimately, genetic and environmental factors need to be integrated via animal models incorporating the spatiotemporal complexity of gene-environment interactions and experience-dependent plasticity, thus better recapitulating the dynamic nature of brain development, function and dysfunction. © 2014 The British Pharmacological Society.
Gene-environment interactions and construct validity in preclinical models of psychiatric disorders.
Burrows, Emma L; McOmish, Caitlin E; Hannan, Anthony J
2011-08-01
The contributions of genetic risk factors to susceptibility for brain disorders are often so closely intertwined with environmental factors that studying genes in isolation cannot provide the full picture of pathogenesis. With recent advances in our understanding of psychiatric genetics and environmental modifiers we are now in a position to develop more accurate animal models of psychiatric disorders which exemplify the complex interaction of genes and environment. Here, we consider some of the insights that have emerged from studying the relationship between defined genetic alterations and environmental factors in rodent models. A key issue in such animal models is the optimization of construct validity, at both genetic and environmental levels. Standard housing of laboratory mice and rats generally includes ad libitum food access and limited opportunity for physical exercise, leading to metabolic dysfunction under control conditions, and thus reducing validity of animal models with respect to clinical populations. A related issue, of specific relevance to neuroscientists, is that most standard-housed rodents have limited opportunity for sensory and cognitive stimulation, which in turn provides reduced incentive for complex motor activity. Decades of research using environmental enrichment has demonstrated beneficial effects on brain and behavior in both wild-type and genetically modified rodent models, relative to standard-housed littermate controls. One interpretation of such studies is that environmentally enriched animals more closely approximate average human levels of cognitive and sensorimotor stimulation, whereas the standard housing currently used in most laboratories models a more sedentary state of reduced mental and physical activity and abnormal stress levels. The use of such standard housing as a single environmental variable may limit the capacity for preclinical models to translate into successful clinical trials. Therefore, there is a need to optimize 'environmental construct validity' in animal models, while maintaining comparability between laboratories, so as to ensure optimal scientific and medical outcomes. Utilizing more sophisticated models to elucidate the relative contributions of genetic and environmental factors will allow for improved construct, face and predictive validity, thus facilitating the identification of novel therapeutic targets. Copyright © 2010 Elsevier Inc. All rights reserved.
Thermoelectric Properties of Complex Zintl Phases
NASA Astrophysics Data System (ADS)
Snyder, G. Jeffrey
2008-03-01
Complex Zintl phases make ideal thermoelectric materials because they can exhibit the necessary ``electron-crystal, phonon-glass'' properties required for high thermoelectric efficiency. Complex crystal structures can lead to high thermoelectric figure of merit (zT) by having extraordinarily low lattice thermal conductivity. A recent example is the discovery that Yb14MnSb11, a complex Zintl compound, has twice the zT as the SiGe based material currently in use at NASA. The high temperature (300K - 1300K) electronic properties of Yb14MnSb11 can be understood using models for heavily doped semiconductors. The free hole concentration, confirmed by Hall effect measurements, is set by the electron counting rules of Zintl and the valence of the transition metal (Mn^+2). Substitution of nonmagnetic Zn^+2 for the magnetic Mn^+2 reduces the spin-disorder scattering and leads to increased zT (10%). The reduction of spin-disorder scattering is consistent with the picture of Yb14MnSb11 as an underscreened Kondo lattice as derived from low temperature measurements. The hole concentration can be reduced by the substitution of Al^+3 for Mn^+2, which leads to an increase in the Seebeck coefficient and electrical resistivity consistent with models for degenerate semiconductors. This leads to further improvements (about 25%) in zT and a reduction in the temperature where the zT peaks. The peak in zT is due to the onset of minority carrier conduction and can be correlated with reduction in Seebeck coefficient, increase in electrical conductivity and increase in thermal conductivity due to bipolar thermal conduction.
Urban warming trumps natural enemy regulation of herbivorous pests.
Dale, Adam G; Frank, Steven D
Trees provide ecosystem services that counter negative effects of urban habitats on human and environmental health. Unfortunately, herbivorous arthropod pests are often more abundant on urban than rural trees, reducing tree growth, survival, and ecosystem services. Previous research where vegetation complexity was reduced has attributed elevated urban pest abundance to decreased regulation by natural enemies. However, reducing vegetation complexity, particularly the density of overstory trees, also makes cities hotter than natural habitats. We ask how urban habitat characteristics influence an abiotic factor, temperature, and a biotic factor, natural enemy abundance, in regulating the abundance of an urban forest pest, the gloomy scale, (Melanaspis tenebricosa). We used a map of surface temperature to select red maple trees (Acer rubrum) at warmer and cooler sites in Raleigh, North Carolina, USA. We quantified habitat complexity by measuring impervious surface cover, local vegetation structural complexity, and landscape scale vegetation cover around each tree. Using path analysis, we determined that impervious surface (the most important habitat variable) increased scale insect abundance by increasing tree canopy temperature, rather than by reducing natural enemy abundance or percent parasitism. As a mechanism for this response, we found that increasing temperature significantly increases scale insect fecundity and contributes to greater population increase. Specifically, adult female M. tenebricosa egg sets increased by approximately 14 eggs for every 1°C increase in temperature. Climate change models predict that the global climate will increase by 2–3°C in the next 50–100 years, which we found would increase scale insect abundance by three orders of magnitude. This result supports predictions that urban and natural forests will face greater herbivory in the future, and suggests that a primary cause could be direct, positive effects of warming on herbivore fitness rather than altered trophic interactions.
Anticonvulsant activity of PNU-151774E in the amygdala kindled model of complex partial seizures.
Maj, R; Fariello, R G; Pevarello, P; Varasi, M; McArthur, R A; Salvati, P
1999-11-01
PNU-151774E [(S)-(+)-2-(4-(3-fluorobenzyloxy) benzylamino) propanamide, methanesulfonate] is a novel antiepileptic drug (AED) with a broad spectrum of activity in a variety of chemically and mechanically induced seizures. The objective of this study was to evaluate the activity of PNU-151774E in the amygdala fully kindled rat model of complex partial seizures, and to compare its effects with those of carbamazepine (CBZ), phenytoin (PHT), lamotrigine (LTG), and gabapentin (GBP), drugs used to treat this disease state. Male Wistar rats were stimulated daily through electrodes implanted in the amygdala with a threshold current until fully generalized seizures developed. The rats were then treated with various doses of a single compound. Control values for each rat and drug dose were determined after vehicle administration followed by electrical stimulation 1 day before drug treatment. PNU-151774E (1, 10, 30 mg/kg; i.p.) reduced the duration of behavioral seizures significantly and dose-dependently at doses starting from 1 mg/kg. Higher doses significantly reduced seizure severity and afterdischarge duration. In contrast, no dose-related effects were noted after administration of PHT, whereas after CBZ treatment, a plateau of activity was noted from the intermediate to higher doses. The effects of PNU-151774E were comparable to those of LTG and GBP. The activity shown by PNU-151774E at doses similar to those that are active in models of generalized seizures indicates that PNU-151774E would also have potential efficacy in the treatment of complex partial seizures.
Alternative mitochondrial electron transfer as a novel strategy for neuroprotection.
Wen, Yi; Li, Wenjun; Poteet, Ethan C; Xie, Luokun; Tan, Cong; Yan, Liang-Jun; Ju, Xiaohua; Liu, Ran; Qian, Hai; Marvin, Marian A; Goldberg, Matthew S; She, Hua; Mao, Zixu; Simpkins, James W; Yang, Shao-Hua
2011-05-06
Neuroprotective strategies, including free radical scavengers, ion channel modulators, and anti-inflammatory agents, have been extensively explored in the last 2 decades for the treatment of neurological diseases. Unfortunately, none of the neuroprotectants has been proved effective in clinical trails. In the current study, we demonstrated that methylene blue (MB) functions as an alternative electron carrier, which accepts electrons from NADH and transfers them to cytochrome c and bypasses complex I/III blockage. A de novo synthesized MB derivative, with the redox center disabled by N-acetylation, had no effect on mitochondrial complex activities. MB increases cellular oxygen consumption rates and reduces anaerobic glycolysis in cultured neuronal cells. MB is protective against various insults in vitro at low nanomolar concentrations. Our data indicate that MB has a unique mechanism and is fundamentally different from traditional antioxidants. We examined the effects of MB in two animal models of neurological diseases. MB dramatically attenuates behavioral, neurochemical, and neuropathological impairment in a Parkinson disease model. Rotenone caused severe dopamine depletion in the striatum, which was almost completely rescued by MB. MB rescued the effects of rotenone on mitochondrial complex I-III inhibition and free radical overproduction. Rotenone induced a severe loss of nigral dopaminergic neurons, which was dramatically attenuated by MB. In addition, MB significantly reduced cerebral ischemia reperfusion damage in a transient focal cerebral ischemia model. The present study indicates that rerouting mitochondrial electron transfer by MB or similar molecules provides a novel strategy for neuroprotection against both chronic and acute neurological diseases involving mitochondrial dysfunction.
Characterization of craniofacial sutures using the finite element method.
Maloul, Asmaa; Fialkov, Jeffrey; Wagner, Diane; Whyne, Cari M
2014-01-03
Characterizing the biomechanical behavior of sutures in the human craniofacial skeleton (CFS) is essential to understand the global impact of these articulations on load transmission, but is challenging due to the complexity of their interdigitated morphology, the multidirectional loading they are exposed to and the lack of well-defined suture material properties. This study aimed to quantify the impact of morphological features, direction of loading and suture material properties on the mechanical behavior of sutures and surrounding bone in the CFS. Thirty-six idealized finite element (FE) models were developed. One additional specimen-specific FE model was developed based on the morphology obtained from a µCT scan to represent the morphological complexity inherent in CFS sutures. Outcome variables of strain energy (SE) and von Mises stress (σvm) were evaluated to characterize the sutures' biomechanical behavior. Loading direction was found to impact the relationship between SE and interdigitation index and yielded varied patterns of σvm in both the suture and surrounding bone. Adding bone connectivity reduced suture strain energy and altered the σvm distribution. Incorporating transversely isotropic material properties was found to reduce SE, but had little impact on stress patterns. High-resolution µCT scanning of the suture revealed a complex morphology with areas of high and low interdigitations. The specimen specific suture model results were reflective of SE absorption and σvm distribution patterns consistent with the simplified FE results. Suture mechanical behavior is impacted by morphologic factors (interdigitation and connectivity), which may be optimized for regional loading within the CFS. © 2013 Elsevier Ltd. All rights reserved.
van Gestel, Aukje; Severens, Johan L; Webers, Carroll A B; Beckers, Henny J M; Jansonius, Nomdo M; Schouten, Jan S A G
2010-01-01
Discrete event simulation (DES) modeling has several advantages over simpler modeling techniques in health economics, such as increased flexibility and the ability to model complex systems. Nevertheless, these benefits may come at the cost of reduced transparency, which may compromise the model's face validity and credibility. We aimed to produce a transparent report on the construction and validation of a DES model using a recently developed model of ocular hypertension and glaucoma. Current evidence of associations between prognostic factors and disease progression in ocular hypertension and glaucoma was translated into DES model elements. The model was extended to simulate treatment decisions and effects. Utility and costs were linked to disease status and treatment, and clinical and health economic outcomes were defined. The model was validated at several levels. The soundness of design and the plausibility of input estimates were evaluated in interdisciplinary meetings (face validity). Individual patients were traced throughout the simulation under a multitude of model settings to debug the model, and the model was run with a variety of extreme scenarios to compare the outcomes with prior expectations (internal validity). Finally, several intermediate (clinical) outcomes of the model were compared with those observed in experimental or observational studies (external validity) and the feasibility of evaluating hypothetical treatment strategies was tested. The model performed well in all validity tests. Analyses of hypothetical treatment strategies took about 30 minutes per cohort and lead to plausible health-economic outcomes. There is added value of DES models in complex treatment strategies such as glaucoma. Achieving transparency in model structure and outcomes may require some effort in reporting and validating the model, but it is feasible.
Connor, Carol McDonald; Day, Stephanie L; Phillips, Beth; Sparapani, Nicole; Ingebrand, Sarah W; McLean, Leigh; Barrus, Angela; Kaschak, Michael P
2016-11-01
Many assume that cognitive and linguistic processes, such as semantic knowledge (SK) and self-regulation (SR), subserve learned skills like reading. However, complex models of interacting and bootstrapping effects of SK, SR, instruction, and reading hypothesize reciprocal effects. Testing this "lattice" model with children (n = 852) followed from first to second grade (5.9-10.4 years of age) revealed reciprocal effects for reading and SR, and reading and SK, but not SR and SK. More effective literacy instruction reduced reading stability over time. Findings elucidate the synergistic and reciprocal effects of learning to read on other important linguistic, self-regulatory, and cognitive processes; the value of using complex models of development to inform intervention design; and how learned skills may influence development during middle childhood. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
Linear-algebraic bath transformation for simulating complex open quantum systems
Huh, Joonsuk; Mostame, Sarah; Fujita, Takatoshi; ...
2014-12-02
In studying open quantum systems, the environment is often approximated as a collection of non-interacting harmonic oscillators, a configuration also known as the star-bath model. It is also well known that the star-bath can be transformed into a nearest-neighbor interacting chain of oscillators. The chain-bath model has been widely used in renormalization group approaches. The transformation can be obtained by recursion relations or orthogonal polynomials. Based on a simple linear algebraic approach, we propose a bath partition strategy to reduce the system-bath coupling strength. As a result, the non-interacting star-bath is transformed into a set of weakly coupled multiple parallelmore » chains. Furthermore, the transformed bath model allows complex problems to be practically implemented on quantum simulators, and it can also be employed in various numerical simulations of open quantum dynamics.« less
Ludwig, T; Kern, P; Bongards, M; Wolf, C
2011-01-01
The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.
Darré, Leonardo; Machado, Matías Rodrigo; Brandner, Astrid Febe; González, Humberto Carlos; Ferreira, Sebastián; Pantano, Sergio
2015-02-10
Modeling of macromolecular structures and interactions represents an important challenge for computational biology, involving different time and length scales. However, this task can be facilitated through the use of coarse-grained (CG) models, which reduce the number of degrees of freedom and allow efficient exploration of complex conformational spaces. This article presents a new CG protein model named SIRAH, developed to work with explicit solvent and to capture sequence, temperature, and ionic strength effects in a topologically unbiased manner. SIRAH is implemented in GROMACS, and interactions are calculated using a standard pairwise Hamiltonian for classical molecular dynamics simulations. We present a set of simulations that test the capability of SIRAH to produce a qualitatively correct solvation on different amino acids, hydrophilic/hydrophobic interactions, and long-range electrostatic recognition leading to spontaneous association of unstructured peptides and stable structures of single polypeptides and protein-protein complexes.
Wilkes, Daniel R; Duncan, Alec J
2015-04-01
This paper presents a numerical model for the acoustic coupled fluid-structure interaction (FSI) of a submerged finite elastic body using the fast multipole boundary element method (FMBEM). The Helmholtz and elastodynamic boundary integral equations (BIEs) are, respectively, employed to model the exterior fluid and interior solid domains, and the pressure and displacement unknowns are coupled between conforming meshes at the shared boundary interface to achieve the acoustic FSI. The low frequency FMBEM is applied to both BIEs to reduce the algorithmic complexity of the iterative solution from O(N(2)) to O(N(1.5)) operations per matrix-vector product for N boundary unknowns. Numerical examples are presented to demonstrate the algorithmic and memory complexity of the method, which are shown to be in good agreement with the theoretical estimates, while the solution accuracy is comparable to that achieved by a conventional finite element-boundary element FSI model.
Pricing strategy in a dual-channel and remanufacturing supply chain system
NASA Astrophysics Data System (ADS)
Jiang, Chengzhi; Xu, Feng; Sheng, Zhaohan
2010-07-01
This article addresses the pricing strategy problems in a supply chain system where the manufacturer sells original products and remanufactured products via indirect retailer channels and direct Internet channels. Due to the complexity of that system, agent technologies that provide a new way for analysing complex systems are used for modelling. Meanwhile, in order to reduce the computational load of searching procedure for optimal prices and profits, a learning search algorithm is designed and implemented within the multi-agent supply chain model. The simulation results show that the proposed model can find out optimal prices of original products and remanufactured products in both channels, which lead to optimal profits of the manufacturer and the retailer. It is also found that the optimal profits are increased by introducing direct channel and remanufacturing. Furthermore, the effect of customer preference, direct channel cost and remanufactured unit cost on optimal prices and profits are examined.
NASA Technical Reports Server (NTRS)
Kline, S. J. (Editor); Cantwell, B. J. (Editor); Lilley, G. M.
1982-01-01
Computational techniques for simulating turbulent flows were explored, together with the results of experimental investigations. Particular attention was devoted to the possibility of defining a universal closure model, applicable for all turbulence situations; however, conclusions were drawn that zonal models, describing localized structures, were the most promising techniques to date. The taxonomy of turbulent flows was summarized, as were algebraic, differential, integral, and partial differential methods for numerical depiction of turbulent flows. Numerous comparisons of theoretically predicted and experimentally obtained data for wall pressure distributions, velocity profiles, turbulent kinetic energy profiles, Reynolds shear stress profiles, and flows around transonic airfoils were presented. Simplifying techniques for reducing the necessary computational time for modeling complex flowfields were surveyed, together with the industrial requirements and applications of computational fluid dynamics techniques.
NASA Astrophysics Data System (ADS)
Schulze, Jan; Shibl, Mohamed F.; Al-Marri, Mohammed J.; Kühn, Oliver
2016-05-01
The coupled quantum dynamics of excitonic and vibrational degrees of freedom is investigated for high-dimensional models of the Fenna-Matthews-Olson complex. This includes a seven- and an eight-site model with 518 and 592 harmonic vibrational modes, respectively. The coupling between local electronic transitions and vibrations is described within the Huang-Rhys model using parameters that are obtained by discretization of an experimental spectral density. Different pathways of excitation energy flow are analyzed in terms of the reduced one-exciton density matrix, focussing on the role of vibrational and vibronic excitations. Distinct features due to both competing time scales of vibrational and exciton motion and vibronically assisted transfer are observed. The question of the effect of initial state preparation is addressed by comparing the case of an instantaneous Franck-Condon excitation at a single site with that of a laser field excitation.
[Model aeroplanes: a not to be ignored source of complex injuries].
Laback, C; Vasilyeva, A; Rappl, T; Lumenta, D; Giunta, R E; Kamolz, L
2013-12-01
With the incidence of work-related injuries decreasing, we continue to observe an unchanged trend in leisure-related accidents. As in any other hobby, model flying devices bear the risk for accidents among builders and flyers ranging from skin lacerations to complicated and even life-threatening injuries. The fast-moving razor-sharp propeller blades predominantly cause trauma to the hands and fingers resulting in typical multiple parallel skin injuries also affecting structures deep to the dermis (e. g., tendons, vessels and nerves). The resultant clinical management involves complex reconstructive surgical procedures and prolonged rehabilitative follow-up. Improving the legal framework (e. g., warnings by the manufacturer) on the one hand, providing informative action and sensitising those affected on the other, should form a basis for an altered prevention strategy to reduce model flying device-related injuries in the future. © Georg Thieme Verlag KG Stuttgart · New York.
Leherte, Laurence; Vercauteren, Daniel P
2017-10-26
We investigate the influence of various solvent models on the structural stability and protein-water interface of three ubiquitin complexes (PDB access codes: 1Q0W , 2MBB , 2G3Q ) modeled using the Amber99sb force field (FF) and two different point charge distributions. A previously developed reduced point charge model (RPCM), wherein each amino acid residue is described by a limited number of point charges, is tested and compared to its all-atom (AA) version. The complexes are solvated in TIP4P-Ew or TIP3P type water molecules, involving either the scaling of the Lennard-Jones protein-O water interaction parameters, or the coarse-grain (CG) SIRAH water description. The best agreements between the RPCM and AA models were obtained for structural, protein-water, and ligand-ubiquitin properties when using the TIP4P-Ew water FF with a scaling factor γ of 0.7. At the RPCM level, a decrease in γ, or the inclusion of SIRAH particles, allows weakening of the protein-water interactions. It results in a slight collapse of the protein structure and a less compact hydration shell and, thus, in a decrease in the number of protein-water and water-water H-bonds. The dynamics of the surface protein atoms and of the water shell molecules are also slightly refrained, which allow the generation of stable RPCM trajectories.
Children with medical complexity: a scoping review of interventions to support caregiver stress.
Edelstein, H; Schippke, J; Sheffe, S; Kingsnorth, S
2017-05-01
Caring for children with chronic and complex medical needs places extraordinary stress on parents and other family members. A scoping review was undertaken to identify and describe the full range of current interventions for reducing caregiver stress. Applying a broad definition of caregiver stress, a systematic search of three scientific databases (CINAHL, Embase and Ovid Medline), a general internet search and hand searching of key peer-reviewed articles were conducted. Inclusion criteria were as follows: (i) published in English between 2004-2016; (ii) focused on familial caregivers, defined as parents, siblings or extended family; (iii) targeted children/youth with medical complexity between the ages of 1-24 years; and (iv) described an intervention and impact on caregiver stress. Data on type of intervention, study design and methods, measures and overall findings were extracted. Forty-nine studies were included from a list of 22 339 unique titles. Six domains of interventions were found: care coordination models (n = 23); respite care (n = 8); telemedicine (n = 5); peer and emotional support (n = 6); insurance and employment benefits (n = 4); and health and related supports (n = 3). Across studies, there was a wide variety of designs, outcomes and measures used. Positive findings of reductions in caregiver stress were noted within an emerging body of evidence on effective interventions for families of children with medical complexity. A commonality across domains was a significant focus on streamlining services and reducing the burden of care related to varied pressures experienced, including time, finances, care needs and service access, among others. There was non-conclusive evidence however around which of the six identified intervention domains or combination thereof are most effective for reducing stress. These promising findings demonstrate that stress reduction is possible with the right support and that multiple interventions may be effective in reducing burdens of care experienced by families of children with medical complexity. © 2016 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Saletti, M.; Molnar, P.; Hassan, M. A.
2017-12-01
Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and those steps are inherently more stable because of additional force chains in the transversal direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, where sediment is organized in patches and entrainment, transport and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with 2 grain sizes: fine grains, that can be mobilized both my large and moderate flows, and coarse grains, mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only the longitudinal profiles of numerical channels display step sequences similar to those observed in real step-pool streams, but also the values of step density are very similar when all the processes mentioned before are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to system more frequently perturbed by floods, compared to system having a lower flood frequency. Our results highlight the important interactions between external hydrological forcing and internal geomorphic adjustment (e.g. jamming) on the response of step-pool streams, showing the potential of reduced-complexity models in fluvial geomorphology.
Metabolic flexibility of mitochondrial respiratory chain disorders predicted by computer modelling.
Zieliński, Łukasz P; Smith, Anthony C; Smith, Alexander G; Robinson, Alan J
2016-11-01
Mitochondrial respiratory chain dysfunction causes a variety of life-threatening diseases affecting about 1 in 4300 adults. These diseases are genetically heterogeneous, but have the same outcome; reduced activity of mitochondrial respiratory chain complexes causing decreased ATP production and potentially toxic accumulation of metabolites. Severity and tissue specificity of these effects varies between patients by unknown mechanisms and treatment options are limited. So far most research has focused on the complexes themselves, and the impact on overall cellular metabolism is largely unclear. To illustrate how computer modelling can be used to better understand the potential impact of these disorders and inspire new research directions and treatments, we simulated them using a computer model of human cardiomyocyte mitochondrial metabolism containing over 300 characterised reactions and transport steps with experimental parameters taken from the literature. Overall, simulations were consistent with patient symptoms, supporting their biological and medical significance. These simulations predicted: complex I deficiencies could be compensated using multiple pathways; complex II deficiencies had less metabolic flexibility due to impacting both the TCA cycle and the respiratory chain; and complex III and IV deficiencies caused greatest decreases in ATP production with metabolic consequences that parallel hypoxia. Our study demonstrates how results from computer models can be compared to a clinical phenotype and used as a tool for hypothesis generation for subsequent experimental testing. These simulations can enhance understanding of dysfunctional mitochondrial metabolism and suggest new avenues for research into treatment of mitochondrial disease and other areas of mitochondrial dysfunction. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Sancho, Matias I; Andujar, Sebastian; Porasso, Rodolfo D; Enriz, Ricardo D
2016-03-31
The inclusion complexes formed by chalcone and 2',4'-dihydroxychalcone with β-cyclodextrin have been studied combining experimental (phase solubility diagrams, Fourier transform infrared spectroscopy) and molecular modeling (molecular dynamics, quantum mechanics/molecular mechanics calculations) techniques. The formation constants of the complexes were determined at different temperatures, and the thermodynamic parameters of the process were obtained. The inclusion of chalcone in β-cyclodextrin is an exothermic process, while the inclusion of 2',4'-dihydroxychalcone is endothermic. Free energy profiles, derived from umbrella sampling using molecular dynamics simulations, were constructed to analyze the binding affinity and the complexation reaction at a molecular level. Hybrid QM/MM calculations were also employed to obtain a better description of the energetic and structural aspects of the complexes. The intermolecular interactions that stabilize both inclusion complexes were characterized by means of quantum atoms in molecules theory and reduce density gradient method. The calculated interactions were experimentally observed using FTIR.
Numerical simulations of a reduced model for blood coagulation
NASA Astrophysics Data System (ADS)
Pavlova, Jevgenija; Fasano, Antonio; Sequeira, Adélia
2016-04-01
In this work, the three-dimensional numerical resolution of a complex mathematical model for the blood coagulation process is presented. The model was illustrated in Fasano et al. (Clin Hemorheol Microcirc 51:1-14, 2012), Pavlova et al. (Theor Biol 380:367-379, 2015). It incorporates the action of the biochemical and cellular components of blood as well as the effects of the flow. The model is characterized by a reduction in the biochemical network and considers the impact of the blood slip at the vessel wall. Numerical results showing the capacity of the model to predict different perturbations in the hemostatic system are discussed.
Using Agent-Based Modeling to Enhance System-Level Real-time Control of Urban Stormwater Systems
NASA Astrophysics Data System (ADS)
Rimer, S.; Mullapudi, A. M.; Kerkez, B.
2017-12-01
The ability to reduce combined-sewer overflow (CSO) events is an issue that challenges over 800 U.S. municipalities. When the volume of a combined sewer system or wastewater treatment plant is exceeded, untreated wastewater then overflows (a CSO event) into nearby streams, rivers, or other water bodies causing localized urban flooding and pollution. The likelihood and impact of CSO events has only exacerbated due to urbanization, population growth, climate change, aging infrastructure, and system complexity. Thus, there is an urgent need for urban areas to manage CSO events. Traditionally, mitigating CSO events has been carried out via time-intensive and expensive structural interventions such as retention basins or sewer separation, which are able to reduce CSO events, but are costly, arduous, and only provide a fixed solution to a dynamic problem. Real-time control (RTC) of urban drainage systems using sensor and actuator networks has served as an inexpensive and versatile alternative to traditional CSO intervention. In particular, retrofitting individual stormwater elements for sensing and automated active distributed control has been shown to significantly reduce the volume of discharge during CSO events, with some RTC models demonstrating a reduction upwards of 90% when compared to traditional passive systems. As more stormwater elements become retrofitted for RTC, system-level RTC across complete watersheds is an attainable possibility. However, when considering the diverse set of control needs of each of these individual stormwater elements, such system-level RTC becomes a far more complex problem. To address such diverse control needs, agent-based modeling is employed such that each individual stormwater element is treated as an autonomous agent with a diverse decision making capabilities. We present preliminary results and limitations of utilizing the agent-based modeling computational framework for the system-level control of diverse, interacting stormwater elements.
Antecedents and consequences of intra-group conflict among nurses.
Almost, Joan; Doran, Diane M; McGillis Hall, Linda; Spence Laschinger, Heather K
2010-11-01
To test a theoretical model linking selected antecedent variables to intra-group conflict among nurses, and subsequently conflict management style, job stress and job satisfaction. A contributing factor to the nursing shortage is job dissatisfaction as a result of conflict among nurses. To develop strategies to reduce conflict, research is needed to understand the causes and outcomes of conflict in nursing work environments. A predictive, non-experimental design was used in a random sample of 277 acute care nurses. Structural equation modelling was used to analyse the hypothesised model. Nurses' core self-evaluations, complexity of care and relationships with managers and nursing colleagues influenced their perceived level of conflict. Conflict management style partially mediated the relationship between conflict and job satisfaction. Job stress had a direct effect on job satisfaction and core self-evaluation had a direct effect on job stress. Conflict and its associated outcomes is a complex process, affected by dispositional, contextual and interpersonal factors. How nurses manage conflict may not prevent the negative effects of conflict, however, learning to manage conflict using collaboration and accommodation may help nurses experience greater job satisfaction. Strategies to manage and reduce conflict include building interactional justice practices and positive interpersonal relationships. © 2010 The Authors. Journal compilation © 2010 Blackwell Publishing Ltd.
Palma, P N; Moura, I; LeGall, J; Van Beeumen, J; Wampler, J E; Moura, J J
1994-05-31
Small electron-transfer proteins such as flavodoxin (16 kDa) and the tetraheme cytochrome c3 (13 kDa) have been used to mimic, in vitro, part of the complex electron-transfer chain operating between substrate electron donors and respiratory electron acceptors, in sulfate-reducing bacteria (Desulfovibrio species). The nature and properties of the complex formed between these proteins are revealed by 1H-NMR and molecular modeling approaches. Our previous study with the Desulfovibrio vulgaris proteins [Moura, I., Moura, J.J. G., Santos, M.H., & Xavier, A. V. (1980) Cienc. Biol. (Portugal) 5, 195-197; Stewart, D.E. LeGall, J., Moura, I., Moura, J. J. G., Peck, H.D. Jr., Xavier, A. V., Weiner, P. K., & Wampler, J.E. (1988) Biochemistry 27, 2444-2450] indicated that the complex between cytochrome c3 and flavodoxin could be monitored by changes in the NMR signals of the heme methyl groups of the cytochrome and that the electrostatic surface charge (Coulomb's law) on the two proteins favored interaction between one unique heme of the cytochrome with flavodoxin. If the interaction is indeed driven by the electrostatic complementarity between the acidic flavodoxin and a unique positive region of the cytochrome c3, other homologous proteins from these two families of proteins might be expected to interact similarly. In this study, three homologous Desulfovibrio cytochromes c3 were used, which show a remarkable variation in their individual isoelectric points (ranging from 5.5 to 9.5). On the basis of data obtained from protein-protein titrations followed at specific proton NMR signals (i.e., heme methyl resonances), a binding model for this complex has been developed with evaluation of stoichiometry and binding constants. This binding model involves one site on the cytochromes c3 and two sites on the flavodoxin, with formation of a ternary complex at saturation. In order to understand the potential chemical form of the binding model, a structural model for the hypothetical ternary complex, formed between one molecule of Desulfovibrio salexigens flavodoxin and two molecules of cytochrome c3, is proposed. These molecular models of the complexes were constructed on the basis of complementarity of Coulombic electrostatic surface potentials, using the available X-ray structures of the isolated proteins and, when required, model structures (D. salexigens flavodoxin and Desulfovibrio desulfuricans ATCC 27774 cytochrome c3) predicted by homology modeling.
Effect of Rho-kinase inhibition on complexity of breathing pattern in a guinea pig model of asthma
Pazhoohan, Saeed; Javan, Mohammad; Hajizadeh, Sohrab
2017-01-01
Asthma represents an episodic and fluctuating behavior characterized with decreased complexity of respiratory dynamics. Several evidence indicate that asthma severity or control is associated with alteration in variability of lung function. The pathophysiological basis of alteration in complexity of breathing pattern in asthma has remained poorly understood. Regarding the point that Rho-kinase is involved in pathophysiology of asthma, in present study we investigated the effect of Rho-kinase inhibition on complexity of respiratory dynamics in a guinea pig model of asthma. Male Dunkin Hartley guinea pigs were exposed to 12 series of inhalations with ovalbumin or saline. Animals were treated by the Rho-kinase inhibitor Y-27632 (1mM aerosols) prior to each allergen challenge. We recorded respiration of conscious animals using whole-body plethysmography. Exposure to ovalbumin induced lung inflammation, airway hyperresponsiveness and remodeling including goblet cell hyperplasia, increase in the thickness of airways smooth muscles and subepithelial collagen deposition. Complexity analysis of respiratory dynamics revealed a dramatic decrease in irregularity of respiratory rhythm representing less complexity in asthmatic guinea pigs. Inhibition of Rho-kinase reduced the airway remodeling and hyperreponsiveness, but had no significant effect on lung inflammation and complexity of respiratory dynamics in asthmatic animals. It seems that airway hyperresponsiveness and remodeling do not significantly affect the complexity of respiratory dynamics. Our results suggest that inflammation might be the probable cause of shift in the respiratory dynamics away from the normal fluctuation in asthma. PMID:29088265
Modeling of reduced secondary electron emission yield from a foam or fuzz surface
Swanson, Charles; Kaganovich, Igor D.
2018-01-10
Complex structures on a material surface can significantly reduce the total secondary electron emission yield from that surface. A foam or fuzz is a solid surface above which is placed a layer of isotropically aligned whiskers. Primary electrons that penetrate into this layer produce secondary electrons that become trapped and do not escape into the bulk plasma. In this manner the secondary electron yield (SEY) may be reduced. We developed an analytic model and conducted numerical simulations of secondary electron emission from a foam to determine the extent of SEY reduction. We find that the relevant condition for SEY minimization ismore » $$\\bar{u}$$≡AD/2>>1 while D <<1, where D is the volume fill fraction and A is the aspect ratio of the whisker layer, the ratio of the thickness of the layer to the radius of the fibers. As a result, we find that foam cannot reduce the SEY from a surface to less than 0.3 of its flat value.« less
Modeling of reduced secondary electron emission yield from a foam or fuzz surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swanson, Charles; Kaganovich, Igor D.
Complex structures on a material surface can significantly reduce the total secondary electron emission yield from that surface. A foam or fuzz is a solid surface above which is placed a layer of isotropically aligned whiskers. Primary electrons that penetrate into this layer produce secondary electrons that become trapped and do not escape into the bulk plasma. In this manner the secondary electron yield (SEY) may be reduced. We developed an analytic model and conducted numerical simulations of secondary electron emission from a foam to determine the extent of SEY reduction. We find that the relevant condition for SEY minimization ismore » $$\\bar{u}$$≡AD/2>>1 while D <<1, where D is the volume fill fraction and A is the aspect ratio of the whisker layer, the ratio of the thickness of the layer to the radius of the fibers. As a result, we find that foam cannot reduce the SEY from a surface to less than 0.3 of its flat value.« less
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurate for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D-groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
Response of SOM Decomposition to Anthropogenic N Deposition: Simulations From the PnET-SOM Model.
NASA Astrophysics Data System (ADS)
Tonitto, C.; Goodale, C. L.; Ollinger, S. V.; Jenkins, J. P.
2008-12-01
Anthropogenic forcing of the C and N cycles has caused rapid change in atmospheric CO2 and N deposition, with complex and uncertain effects on forest C and N balance. With some exceptions, models of forest ecosystem response to anthropogenic perturbation have historically focused more on aboveground than belowground processes; the complexity of soil organic matter (SOM) is often represented with abstract or incomplete SOM pools, and remains difficult to quantify. We developed a model of SOM dynamics in northern hardwood forests with explicit feedbacks between C and N cycles. The soil model is linked to the aboveground dynamics of the PnET model to form PnET-SOM. The SOM model includes: 1) physically measurable SOM pools, including humic and mineral-associated SOM in O, A, and B soil horizons, 2) empirical soil turnover times based on 14C data, 3) alternative SOM decomposition algorithms with and without explicit microbial processing, and 4) soluble element transport explicitly linked to the hydrologic cycle. We tested model sensitivity to changes in litter decomposition rate (k) and completeness of decomposition (limit value) by altering these parameters based on experimental observations from long-term litter decomposition experiments with N fertilization treatments. After a 100 year simulation, the Oe+Oa horizon SOC pool was reduced by 15 % and the A-horizon humified SOC was reduced by 7 % for N deposition scenarios relative to forests without N fertilization. In contrast, predictions for slower time-scale pools showed negligible variation in response to variation in the limit values tested, with A-horizon mineral SOC pools reduced by < 3 % and B-horizon mineral SOC reduced by 0.1 % for N deposition scenarios relative to forests without N fertilization. The model was also used to test the effect of varying initial litter decomposition rate to simulate response to N deposition. In contrast to the effect of varying limit values, simulations in which only k-values were varied did not drastically alter the predicted SOC pool distribution throughout the soil profile, but did significantly alter the Oi SOC pool. These results suggest that describing soil response to N deposition via alteration of the limit value alone, or as a combined alteration of limit value and the initial decomposition rate, can lead to significant variation in predicted long-term C storage.
Reduced order model based on principal component analysis for process simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, Y.; Malacina, A.; Biegler, L.
2009-01-01
It is well-known that distributed parameter computational fluid dynamics (CFD) models provide more accurate results than conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models,more » this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within well-sampled input domain and the CPU time is significantly reduced. Typically, it takes at most several CPU seconds to evaluate the ROM compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of using our proposed ROM methodology for process simulation and optimization.« less
Ghose, Sanchayita; Rajshekaran, Rupshika; Labanca, Marisa; Conley, Lynn
2017-01-06
Trisulfides can be a common post-translational modification in many recombinant monoclonal antibodies. These are a source of product heterogeneity that add to the complexity of product characterization and hence, need to be reduced for consistent product quality. Trisulfide bonds can be converted to the regular disulfide bonds by incorporating a novel cysteine wash step during Protein A affinity chromatography. An empirical model is developed for this on-column reduction reaction to compare the reaction rates as a function of typical operating parameters such as temperature, cysteine concentration, reaction time and starting level of trisulfides. The model presented here is anticipated to assist in the development of optimal wash conditions for the Protein A step to effectively reduce trisulfides to desired levels. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Molnar, I. L.; Krol, M.; Mumford, K. G.
2016-12-01
Geoenvironmental models are becoming increasingly sophisticated as they incorporate rising numbers of mechanisms and process couplings to describe environmental scenarios. When combined with advances in computing and numerical techniques, these already complicated models are experiencing large increases in code complexity and simulation time. Although, this complexity has enabled breakthroughs in the ability to describe environmental problems, it is difficult to ensure that complex models are sufficiently robust and behave as intended. Many development tools used for testing software robustness have not seen widespread use in geoenvironmental sciences despite an increasing reliance on complex numerical models, leaving many models at risk of undiscovered errors and potentially improper validations. This study explores the use of unit testing, which independently examines small code elements to ensure each unit is working as intended as well as their integrated behaviour, to test the functionality and robustness of a coupled Electrical Resistive Heating (ERH) - Macroscopic Invasion Percolation (MIP) model. ERH is a thermal remediation technique where the soil is heated until boiling and volatile contaminants are stripped from the soil. There is significant interest in improving the efficiency of ERH, including taking advantage of low-temperature co-boiling behaviour which may reduce energy consumption. However, at lower co-boiling temperatures gas bubbles can form, mobilize and collapse in cooler areas, potentially contaminating previously clean zones. The ERH-MIP model was created to simulate the behaviour of gas bubbles in the subsurface and to evaluate ERH during co-boiling1. This study demonstrates how unit testing ensures that the model behaves in an expected manner and examines the robustness of every component within the ERH-MIP model. Once unit testing is established, the MIP module (a discrete gas transport algorithm for gas expansion, mobilization and fragmentation2) was validated against a two-dimensional light transmission visualization experiment 3. 1. Krol, M. M., et al. (2011), Adv. Water Resour. 2011, 34 (4), 537-549. 2. Mumford, K. G., et al. (2010), Adv. Water Resour. 2010, 33 (4), 504-513. 3. Hegele, P. R. and Mumford, K. G. Journal of Contaminant Hydrology 2014, 165, 24-36.
Spatial Distributions of Guest Molecule and Hydration Level in Dendrimer-Based Guest–Host Complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Chih-Ying; Chen, Hsin-Lung; Do, Changwoo
2016-08-09
Using the electrostatic complex of G4 poly(amidoamine) (PAMAM) dendrimer with an amphiphilic surfactant as a model system, contrast variation small angle neutron scattering (SANS) is implemented to resolve the key structural characteristics of dendrimer-based guest–host system. Quantifications of the radial distributions of the scattering length density and the hydration level within the complex molecule reveal that the surfactant is embedded in the peripheral region of dendrimer and the steric crowding in this region increases the backfolding of the dendritic segments, thereby reducing the hydration level throughout the complex molecule. Here, the insights into the spatial location of the guest moleculesmore » as well as the perturbations of dendrimer conformation and hydration level deduced here are crucial for the delicate design of dendrimer-based guest–host system for biomedical applications.« less
NASA Workshop on Distributed Parameter Modeling and Control of Flexible Aerospace Systems
NASA Technical Reports Server (NTRS)
Marks, Virginia B. (Compiler); Keckler, Claude R. (Compiler)
1994-01-01
Although significant advances have been made in modeling and controlling flexible systems, there remains a need for improvements in model accuracy and in control performance. The finite element models of flexible systems are unduly complex and are almost intractable to optimum parameter estimation for refinement using experimental data. Distributed parameter or continuum modeling offers some advantages and some challenges in both modeling and control. Continuum models often result in a significantly reduced number of model parameters, thereby enabling optimum parameter estimation. The dynamic equations of motion of continuum models provide the advantage of allowing the embedding of the control system dynamics, thus forming a complete set of system dynamics. There is also increased insight provided by the continuum model approach.
Computational State Space Models for Activity and Intention Recognition. A Feasibility Study
Krüger, Frank; Nyolt, Martin; Yordanova, Kristina; Hein, Albert; Kirste, Thomas
2014-01-01
Background Computational state space models (CSSMs) enable the knowledge-based construction of Bayesian filters for recognizing intentions and reconstructing activities of human protagonists in application domains such as smart environments, assisted living, or security. Computational, i. e., algorithmic, representations allow the construction of increasingly complex human behaviour models. However, the symbolic models used in CSSMs potentially suffer from combinatorial explosion, rendering inference intractable outside of the limited experimental settings investigated in present research. The objective of this study was to obtain data on the feasibility of CSSM-based inference in domains of realistic complexity. Methods A typical instrumental activity of daily living was used as a trial scenario. As primary sensor modality, wearable inertial measurement units were employed. The results achievable by CSSM methods were evaluated by comparison with those obtained from established training-based methods (hidden Markov models, HMMs) using Wilcoxon signed rank tests. The influence of modeling factors on CSSM performance was analyzed via repeated measures analysis of variance. Results The symbolic domain model was found to have more than states, exceeding the complexity of models considered in previous research by at least three orders of magnitude. Nevertheless, if factors and procedures governing the inference process were suitably chosen, CSSMs outperformed HMMs. Specifically, inference methods used in previous studies (particle filters) were found to perform substantially inferior in comparison to a marginal filtering procedure. Conclusions Our results suggest that the combinatorial explosion caused by rich CSSM models does not inevitably lead to intractable inference or inferior performance. This means that the potential benefits of CSSM models (knowledge-based model construction, model reusability, reduced need for training data) are available without performance penalty. However, our results also show that research on CSSMs needs to consider sufficiently complex domains in order to understand the effects of design decisions such as choice of heuristics or inference procedure on performance. PMID:25372138
POD Model Reconstruction for Gray-Box Fault Detection
NASA Technical Reports Server (NTRS)
Park, Han; Zak, Michail
2007-01-01
Proper orthogonal decomposition (POD) is the mathematical basis of a method of constructing low-order mathematical models for the "gray-box" fault-detection algorithm that is a component of a diagnostic system known as beacon-based exception analysis for multi-missions (BEAM). POD has been successfully applied in reducing computational complexity by generating simple models that can be used for control and simulation for complex systems such as fluid flows. In the present application to BEAM, POD brings the same benefits to automated diagnosis. BEAM is a method of real-time or offline, automated diagnosis of a complex dynamic system.The gray-box approach makes it possible to utilize incomplete or approximate knowledge of the dynamics of the system that one seeks to diagnose. In the gray-box approach, a deterministic model of the system is used to filter a time series of system sensor data to remove the deterministic components of the time series from further examination. What is left after the filtering operation is a time series of residual quantities that represent the unknown (or at least unmodeled) aspects of the behavior of the system. Stochastic modeling techniques are then applied to the residual time series. The procedure for detecting abnormal behavior of the system then becomes one of looking for statistical differences between the residual time series and the predictions of the stochastic model.
NASA Astrophysics Data System (ADS)
Chen, X.; Zachara, J. M.; Vermeul, V. R.; Freshley, M.; Hammond, G. E.
2015-12-01
The behavior of a persistent uranium plume in an extended groundwater- river water (GW-SW) interaction zone at the DOE Hanford site is dominantly controlled by river stage fluctuations in the adjacent Columbia River. The plume behavior is further complicated by substantial heterogeneity in physical and geochemical properties of the host aquifer sediments. Multi-scale field and laboratory experiments and reactive transport modeling were integrated to understand the complex plume behavior influenced by highly variable hydrologic and geochemical conditions in time and space. In this presentation we (1) describe multiple data sets from field-scale uranium adsorption and desorption experiments performed at our experimental well-field, (2) develop a reactive transport model that incorporates hydrologic and geochemical heterogeneities characterized from multi-scale and multi-type datasets and a surface complexation reaction network based on laboratory studies, and (3) compare the modeling and observation results to provide insights on how to refine the conceptual model and reduce prediction uncertainties. The experimental results revealed significant spatial variability in uranium adsorption/desorption behavior, while modeling demonstrated that ambient hydrologic and geochemical conditions and heterogeneities in sediment physical and chemical properties both contributed to complex plume behavior and its persistence. Our analysis provides important insights into the characterization, understanding, modeling, and remediation of groundwater contaminant plumes influenced by surface water and groundwater interactions.
Gasche, Loïc; Mahévas, Stéphanie; Marchal, Paul
2013-01-01
Ecosystems are usually complex, nonlinear and strongly influenced by poorly known environmental variables. Among these systems, marine ecosystems have high uncertainties: marine populations in general are known to exhibit large levels of natural variability and the intensity of fishing efforts can change rapidly. These uncertainties are a source of risks that threaten the sustainability of both fish populations and fishing fleets targeting them. Appropriate management measures have to be found in order to reduce these risks and decrease sensitivity to uncertainties. Methods have been developed within decision theory that aim at allowing decision making under severe uncertainty. One of these methods is the information-gap decision theory. The info-gap method has started to permeate ecological modelling, with recent applications to conservation. However, these practical applications have so far been restricted to simple models with analytical solutions. Here we implement a deterministic approach based on decision theory in a complex model of the Eastern English Channel. Using the ISIS-Fish modelling platform, we model populations of sole and plaice in this area. We test a wide range of values for ecosystem, fleet and management parameters. From these simulations, we identify management rules controlling fish harvesting that allow reaching management goals recommended by ICES (International Council for the Exploration of the Sea) working groups while providing the highest robustness to uncertainties on ecosystem parameters. PMID:24204873
Gasche, Loïc; Mahévas, Stéphanie; Marchal, Paul
2013-01-01
Ecosystems are usually complex, nonlinear and strongly influenced by poorly known environmental variables. Among these systems, marine ecosystems have high uncertainties: marine populations in general are known to exhibit large levels of natural variability and the intensity of fishing efforts can change rapidly. These uncertainties are a source of risks that threaten the sustainability of both fish populations and fishing fleets targeting them. Appropriate management measures have to be found in order to reduce these risks and decrease sensitivity to uncertainties. Methods have been developed within decision theory that aim at allowing decision making under severe uncertainty. One of these methods is the information-gap decision theory. The info-gap method has started to permeate ecological modelling, with recent applications to conservation. However, these practical applications have so far been restricted to simple models with analytical solutions. Here we implement a deterministic approach based on decision theory in a complex model of the Eastern English Channel. Using the ISIS-Fish modelling platform, we model populations of sole and plaice in this area. We test a wide range of values for ecosystem, fleet and management parameters. From these simulations, we identify management rules controlling fish harvesting that allow reaching management goals recommended by ICES (International Council for the Exploration of the Sea) working groups while providing the highest robustness to uncertainties on ecosystem parameters.
Abraham, Gad; Kowalczyk, Adam; Zobel, Justin; Inouye, Michael
2013-02-01
A central goal of medical genetics is to accurately predict complex disease from genotypes. Here, we present a comprehensive analysis of simulated and real data using lasso and elastic-net penalized support-vector machine models, a mixed-effects linear model, a polygenic score, and unpenalized logistic regression. In simulation, the sparse penalized models achieved lower false-positive rates and higher precision than the other methods for detecting causal SNPs. The common practice of prefiltering SNP lists for subsequent penalized modeling was examined and shown to substantially reduce the ability to recover the causal SNPs. Using genome-wide SNP profiles across eight complex diseases within cross-validation, lasso and elastic-net models achieved substantially better predictive ability in celiac disease, type 1 diabetes, and Crohn's disease, and had equivalent predictive ability in the rest, with the results in celiac disease strongly replicating between independent datasets. We investigated the effect of linkage disequilibrium on the predictive models, showing that the penalized methods leverage this information to their advantage, compared with methods that assume SNP independence. Our findings show that sparse penalized approaches are robust across different disease architectures, producing as good as or better phenotype predictions and variance explained. This has fundamental ramifications for the selection and future development of methods to genetically predict human disease. © 2012 WILEY PERIODICALS, INC.
Modeling of the competition life cycle using the software complex of cellular automata PyCAlab
NASA Astrophysics Data System (ADS)
Berg, D. B.; Beklemishev, K. A.; Medvedev, A. N.; Medvedeva, M. A.
2015-11-01
The aim of the work is to develop a numerical model of the life cycle of competition on the basis of software complex cellular automata PyCAlab. The model is based on the general patterns of growth of various systems in resource-limited settings. At examples it is shown that the period of transition from an unlimited growth of the market agents to the stage of competitive growth takes quite a long time and may be characterized as monotonic. During this period two main strategies of competitive selection coexist: 1) capture of maximum market space with any reasonable costs; 2) saving by reducing costs. The obtained results allow concluding that the competitive strategies of companies must combine two mentioned types of behavior, and this issue needs to be given adequate attention in the academic literature on management. The created numerical model may be used for market research when developing of the strategies for promotion of new goods and services.
A mathematical model for foreign body reactions in 2D.
Su, Jianzhong; Gonzales, Humberto Perez; Todorov, Michail; Kojouharov, Hristo; Tang, Liping
2011-02-01
The foreign body reactions are commonly referred to the network of immune and inflammatory reactions of human or animals to foreign objects placed in tissues. They are basic biological processes, and are also highly relevant to bioengineering applications in implants, as fibrotic tissue formations surrounding medical implants have been found to substantially reduce the effectiveness of devices. Despite of intensive research on determining the mechanisms governing such complex responses, few mechanistic mathematical models have been developed to study such foreign body reactions. This study focuses on a kinetics-based predictive tool in order to analyze outcomes of multiple interactive complex reactions of various cells/proteins and biochemical processes and to understand transient behavior during the entire period (up to several months). A computational model in two spatial dimensions is constructed to investigate the time dynamics as well as spatial variation of foreign body reaction kinetics. The simulation results have been consistent with experimental data and the model can facilitate quantitative insights for study of foreign body reaction process in general.
Fast intersection detection algorithm for PC-based robot off-line programming
NASA Astrophysics Data System (ADS)
Fedrowitz, Christian H.
1994-11-01
This paper presents a method for fast and reliable collision detection in complex production cells. The algorithm is part of the PC-based robot off-line programming system of the University of Siegen (Ropsus). The method is based on a solid model which is managed by a simplified constructive solid geometry model (CSG-model). The collision detection problem is divided in two steps. In the first step the complexity of the problem is reduced in linear time. In the second step the remaining solids are tested for intersection. For this the Simplex algorithm, which is known from linear optimization, is used. It computes a point which is common to two convex polyhedra. The polyhedra intersect, if such a point exists. Regarding the simplified geometrical model of Ropsus the algorithm runs also in linear time. In conjunction with the first step a resultant collision detection algorithm is found which requires linear time in all. Moreover it computes the resultant intersection polyhedron using the dual transformation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cartas, Raul; Mimendia, Aitor; Valle, Manel del
2009-05-23
Calibration models for multi-analyte electronic tongues have been commonly built using a set of sensors, at least one per analyte under study. Complex signals recorded with these systems are formed by the sensors' responses to the analytes of interest plus interferents, from which a multivariate response model is then developed. This work describes a data treatment method for the simultaneous quantification of two species in solution employing the signal from a single sensor. The approach used here takes advantage of the complex information recorded with one electrode's transient after insertion of sample for building the calibration models for both analytes.more » The departure information from the electrode was firstly processed by discrete wavelet for transforming the signals to extract useful information and reduce its length, and then by artificial neural networks for fitting a model. Two different potentiometric sensors were used as study case for simultaneously corroborating the effectiveness of the approach.« less
Multidomain proteins under force
NASA Astrophysics Data System (ADS)
Valle-Orero, Jessica; Andrés Rivas-Pardo, Jaime; Popa, Ionel
2017-04-01
Advancements in single-molecule force spectroscopy techniques such as atomic force microscopy and magnetic tweezers allow investigation of how domain folding under force can play a physiological role. Combining these techniques with protein engineering and HaloTag covalent attachment, we investigate similarities and differences between four model proteins: I10 and I91—two immunoglobulin-like domains from the muscle protein titin, and two α + β fold proteins—ubiquitin and protein L. These proteins show a different mechanical response and have unique extensions under force. Remarkably, when normalized to their contour length, the size of the unfolding and refolding steps as a function of force reduces to a single master curve. This curve can be described using standard models of polymer elasticity, explaining the entropic nature of the measured steps. We further validate our measurements with a simple energy landscape model, which combines protein folding with polymer physics and accounts for the complex nature of tandem domains under force. This model can become a useful tool to help in deciphering the complexity of multidomain proteins operating under force.
Large Animal Models of an In Vivo Bioreactor for Engineering Vascularized Bone.
Akar, Banu; Tatara, Alexander M; Sutradhar, Alok; Hsiao, Hui-Yi; Miller, Michael; Cheng, Ming-Huei; Mikos, Antonios G; Brey, Eric M
2018-04-12
Reconstruction of large skeletal defects is challenging due to the requirement for large volumes of donor tissue and the often complex surgical procedures. Tissue engineering has the potential to serve as a new source of tissue for bone reconstruction, but current techniques are often limited in regards to the size and complexity of tissue that can be formed. Building tissue using an in vivo bioreactor approach may enable the production of appropriate amounts of specialized tissue, while reducing issues of donor site morbidity and infection. Large animals are required to screen and optimize new strategies for growing clinically appropriate volumes of tissues in vivo. In this article, we review both ovine and porcine models that serve as models of the technique proposed for clinical engineering of bone tissue in vivo. Recent findings are discussed with these systems, as well as description of next steps required for using these models, to develop clinically applicable tissue engineering applications.
Transport coefficient computation based on input/output reduced order models
NASA Astrophysics Data System (ADS)
Hurst, Joshua L.
The guiding purpose of this thesis is to address the optimal material design problem when the material description is a molecular dynamics model. The end goal is to obtain a simplified and fast model that captures the property of interest such that it can be used in controller design and optimization. The approach is to examine model reduction analysis and methods to capture a specific property of interest, in this case viscosity, or more generally complex modulus or complex viscosity. This property and other transport coefficients are defined by a input/output relationship and this motivates model reduction techniques that are tailored to preserve input/output behavior. In particular Singular Value Decomposition (SVD) based methods are investigated. First simulation methods are identified that are amenable to systems theory analysis. For viscosity, these models are of the Gosling and Lees-Edwards type. They are high order nonlinear Ordinary Differential Equations (ODEs) that employ Periodic Boundary Conditions. Properties can be calculated from the state trajectories of these ODEs. In this research local linear approximations are rigorously derived and special attention is given to potentials that are evaluated with Periodic Boundary Conditions (PBC). For the Gosling description LTI models are developed from state trajectories but are found to have limited success in capturing the system property, even though it is shown that full order LTI models can be well approximated by reduced order LTI models. For the Lees-Edwards SLLOD type model nonlinear ODEs will be approximated by a Linear Time Varying (LTV) model about some nominal trajectory and both balanced truncation and Proper Orthogonal Decomposition (POD) will be used to assess the plausibility of reduced order models to this system description. An immediate application of the derived LTV models is Quasilinearization or Waveform Relaxation. Quasilinearization is a Newton's method applied to the ODE operator equation. Its a recursive method that solves nonlinear ODE's by solving a LTV systems at each iteration to obtain a new closer solution. LTV models are derived for both Gosling and Lees-Edwards type models. Particular attention is given to SLLOD Lees-Edwards models because they are in a form most amenable to performing Taylor series expansion, and the most commonly used model to examine viscosity. With linear models developed a method is presented to calculate viscosity based on LTI Gosling models but is shown to have some limitations. To address these issues LTV SLLOD models are analyzed with both Balanced Truncation and POD and both show that significant order reduction is possible. By examining the singular values of both techniques it is shown that Balanced Truncation has a potential to offer greater reduction, which should be expected as it is based on the input/output mapping instead of just the state information as in POD. Obtaining reduced order systems that capture the property of interest is challenging. For Balanced Truncation reduced order models for 1-D LJ and FENE systems are obtained and are shown to capture the output of interest fairly well. However numerical challenges currently limit this analysis to small order systems. Suggestions are presented to extend this method to larger systems. In addition reduced 2nd order systems are obtained from POD. Here the challenge is extending the solution beyond the original period used for the projection, in particular identifying the manifold the solution travels along. The remaining challenges are presented and discussed.
NASA Astrophysics Data System (ADS)
Lin, Tsungpo
Performance engineers face the major challenge in modeling and simulation for the after-market power system due to system degradation and measurement errors. Currently, the majority in power generation industries utilizes the deterministic data matching method to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and also the risk of providing performance guarantees. In this research work, a maximum-likelihood based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and populated to performance simulation by using principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated at the second stage, in which the serial bias compensation or robust M-estimator is engaged. To achieve a better efficiency in the combined scheme of the least squares based data reconciliation and the GED technique based on hypotheses testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using the response surface equation (RSE) and system/process decomposition are incorporated with the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arisen from uncertainties in performance simulation.
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
Mathematical Modeling for Scrub Typhus and Its Implications for Disease Control.
Min, Kyung Duk; Cho, Sung Il
2018-03-19
The incidence rate of scrub typhus has been increasing in the Republic of Korea. Previous studies have suggested that this trend may have resulted from the effects of climate change on the transmission dynamics among vectors and hosts, but a clear explanation of the process is still lacking. In this study, we applied mathematical models to explore the potential factors that influence the epidemiology of tsutsugamushi disease. We developed mathematical models of ordinary differential equations including human, rodent and mite groups. Two models, including simple and complex models, were developed, and all parameters employed in the models were adopted from previous articles that represent epidemiological situations in the Republic of Korea. The simulation results showed that the force of infection at the equilibrium state under the simple model was 0.236 (per 100,000 person-months), and that in the complex model was 26.796 (per 100,000 person-months). Sensitivity analyses indicated that the most influential parameters were rodent and mite populations and contact rate between them for the simple model, and trans-ovarian transmission for the complex model. In both models, contact rate between humans and mites is more influential than morality rate of rodent and mite group. The results indicate that the effect of controlling either rodents or mites could be limited, and reducing the contact rate between humans and mites is more practical and effective strategy. However, the current level of control would be insufficient relative to the growing mite population. © 2018 The Korean Academy of Medical Sciences.
Roy, Souvik; Das, Rituparna; Ghosh, Balaram; Chakraborty, Tania
2018-06-01
Flavonoids are the most investigated phytochemicals due to their pharmacological and therapeutic activities. Their ability to chelate with metal ions has resulted in the emergence of a new category of molecules with a broader spectrum of pharmacological activities. In this study, the ruthenium quercetin complex has been synthesized and anticancer activity has been evaluated on a well-defined model of DMH followed by DSS induced rat colon cancer and on human colon cancer cell line HT-29. The characterizations accomplished through UV-visible, NMR, IR, Mass spectra and XRD techniques, and antioxidant activity was assessed by DPPH, FRAP, and ABTS methods. In vitro study confirmed that the complex increased p53 expression, reduced VEGF and mTOR expression, apoptosis induction, and DNA fragmentation in the HT-29 cells. Acute and subacute toxicity study was also assessed and results from in vivo study revealed that complex was efficient to suppress ACF multiplicity and hyperplastic lesions and elevated the CAT, SOD, and glutathione levels. Furthermore, the complex was found to decrease cell proliferation and increased apoptotic events in tumor cells correlates upregulation of p53 and Bax and downregulation of Bcl2 expression. Our findings from the in vitro and in vivo study support the continued investigation of ruthenium quercetin complex possesses a potential chemotherapeutic activity against colon cancer and was efficient in reducing ACF multiplicity, hyperplastic lesions in the colon tissues of rats by inducing apoptosis. © 2018 Wiley Periodicals, Inc.
A simplified fuel control approach for low cost aircraft gas turbines
NASA Technical Reports Server (NTRS)
Gold, H.
1973-01-01
Reduction in the complexity of gas turbine fuel controls without loss of control accuracy, reliability, or effectiveness as a method for reducing engine costs is discussed. A description and analysis of hydromechanical approach are presented. A computer simulation of the control mechanism is given and performance of a physical model in engine test is reported.
Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. III Thompson; Jacob S. Fraser
2013-01-01
Oak decline is a process induced by complex interactions of predisposing factors, inciting factors, and contributing factors operating at tree, stand, and landscape scales. It has greatly altered species composition and stand structure in affected areas. Thinning, clearcutting, and group selection are widely adopted harvest alternatives for reducing forest...
NASA Astrophysics Data System (ADS)
Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin
2018-06-01
Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
Testing a Firefly-Inspired Synchronization Algorithm in a Complex Wireless Sensor Network
Hao, Chuangbo; Song, Ping; Yang, Cheng; Liu, Xiongjun
2017-01-01
Data acquisition is the foundation of soft sensor and data fusion. Distributed data acquisition and its synchronization are the important technologies to ensure the accuracy of soft sensors. As a research topic in bionic science, the firefly-inspired algorithm has attracted widespread attention as a new synchronization method. Aiming at reducing the design difficulty of firefly-inspired synchronization algorithms for Wireless Sensor Networks (WSNs) with complex topologies, this paper presents a firefly-inspired synchronization algorithm based on a multiscale discrete phase model that can optimize the performance tradeoff between the network scalability and synchronization capability in a complex wireless sensor network. The synchronization process can be regarded as a Markov state transition, which ensures the stability of this algorithm. Compared with the Miroll and Steven model and Reachback Firefly Algorithm, the proposed algorithm obtains better stability and performance. Finally, its practicality has been experimentally confirmed using 30 nodes in a real multi-hop topology with low quality links. PMID:28282899
Emergence Processes up to Consciousness Using the Multiplicity Principle and Quantum Physics
NASA Astrophysics Data System (ADS)
Ehresmann, Andrée C.; Vanbremeersch, Jean-Paul
2002-09-01
Evolution is marked by the emergence of new objects and interactions. Pursuing our preceding work on Memory Evolutive Systems (MES; cf. our Internet site), we propose a general mathematical model for this process, based on Category Theory. Its main characteristics is the Multiplicity Principle (MP) which asserts the existence of complex objects with several possible configurations. The MP entails the emergence of non-reducible more and more complex objects (emergentist reductionism). From the laws of Quantum Physics, it follows that the MP is valid for the category of particles and atoms, hence, by complexification, for any natural autonomous anticipatory complex system, such as biological systems up to neural systems, or social systems. Applying the model to the MES of neurons, we describe the emergence of higher and higher cognitive processes and of a semantic memory. Consciousness is characterized by the development of a permanent `personal' memory, the archetypal core, which allows the formation of extended landscapes with an integration of the temporal dimensions.
NASA Astrophysics Data System (ADS)
Durand, P.
The integrated nitrogen model INCA (Integrated Nitrogen in Catchments) was used to analyse the nitrogen dynamics in a small rural catchment in Western France. The agrosystem studied is very complex, with: extensive use of different organic fertilisers, a variety of crop rotations, a structural excess of nitrogen (i.e. more animal N produced by the intensive farming than the N requirements of the crops and pastures), and nitrate retention in both hydrological stores and riparian zones. The original model features were adapted here to describe this complexity. The calibration results are satisfactory, although the daily variations in stream nitrate are not simulated in detail. Different climate scenarios, based on observed climate records, were tested; all produced a worsening of the pollution in the short term. Scenarios of alternative agricultural practices (reduced fertilisation and catch crops) were also analysed, suggesting that a reduction by 40% of the fertilisation combined with the introduction of catch crops would be necessary to stop the degradation of water quality.
Testing a Firefly-Inspired Synchronization Algorithm in a Complex Wireless Sensor Network.
Hao, Chuangbo; Song, Ping; Yang, Cheng; Liu, Xiongjun
2017-03-08
Data acquisition is the foundation of soft sensor and data fusion. Distributed data acquisition and its synchronization are the important technologies to ensure the accuracy of soft sensors. As a research topic in bionic science, the firefly-inspired algorithm has attracted widespread attention as a new synchronization method. Aiming at reducing the design difficulty of firefly-inspired synchronization algorithms for Wireless Sensor Networks (WSNs) with complex topologies, this paper presents a firefly-inspired synchronization algorithm based on a multiscale discrete phase model that can optimize the performance tradeoff between the network scalability and synchronization capability in a complex wireless sensor network. The synchronization process can be regarded as a Markov state transition, which ensures the stability of this algorithm. Compared with the Miroll and Steven model and Reachback Firefly Algorithm, the proposed algorithm obtains better stability and performance. Finally, its practicality has been experimentally confirmed using 30 nodes in a real multi-hop topology with low quality links.
Benchmarking novel approaches for modelling species range dynamics
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.
2016-01-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305
Benchmarking novel approaches for modelling species range dynamics.
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H; Moore, Kara A; Zimmermann, Niklaus E
2016-08-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species' range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species' response to climate change but also emphasize several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. © 2016 John Wiley & Sons Ltd.
Olsson, Pontus; Nysjö, Fredrik; Hirsch, Jan-Michaél; Carlbom, Ingrid B
2013-11-01
Cranio-maxillofacial (CMF) surgery to restore normal skeletal anatomy in patients with serious trauma to the face can be both complex and time-consuming. But it is generally accepted that careful pre-operative planning leads to a better outcome with a higher degree of function and reduced morbidity in addition to reduced time in the operating room. However, today's surgery planning systems are primitive, relying mostly on the user's ability to plan complex tasks with a two-dimensional graphical interface. A system for planning the restoration of skeletal anatomy in facial trauma patients using a virtual model derived from patient-specific CT data. The system combines stereo visualization with six degrees-of-freedom, high-fidelity haptic feedback that enables analysis, planning, and preoperative testing of alternative solutions for restoring bone fragments to their proper positions. The stereo display provides accurate visual spatial perception, and the haptics system provides intuitive haptic feedback when bone fragments are in contact as well as six degrees-of-freedom attraction forces for precise bone fragment alignment. A senior surgeon without prior experience of the system received 45 min of system training. Following the training session, he completed a virtual reconstruction in 22 min of a complex mandibular fracture with an adequately reduced result. Preliminary testing with one surgeon indicates that our surgery planning system, which combines stereo visualization with sophisticated haptics, has the potential to become a powerful tool for CMF surgery planning. With little training, it allows a surgeon to complete a complex plan in a short amount of time.
PyGirl: Generating Whole-System VMs from High-Level Prototypes Using PyPy
NASA Astrophysics Data System (ADS)
Bruni, Camillo; Verwaest, Toon
Virtual machines (VMs) emulating hardware devices are generally implemented in low-level languages for performance reasons. This results in unmaintainable systems that are difficult to understand. In this paper we report on our experience using the PyPy toolchain to improve the portability and reduce the complexity of whole-system VM implementations. As a case study we implement a VM prototype for a Nintendo Game Boy, called PyGirl, in which the high-level model is separated from low-level VM implementation issues. We shed light on the process of refactoring from a low-level VM implementation in Java to a high-level model in RPython. We show that our whole-system VM written with PyPy is significantly less complex than standard implementations, without substantial loss in performance.
Niver, E L; Leong, N; Greene, J; Curtis, D; Ryder, M I; Ho, S P
2011-12-01
Adaptive properties of the bone-periodontal ligament-tooth complex have been identified by changing the magnitude of functional loads using small-scale animal models, such as rodents. Reported adaptive responses as a result of lower loads due to softer diet include decreased muscle development, change in structure-function relationship of the cranium, narrowed periodontal ligament space, and changes in the mineral level of the cortical bone and alveolar jaw bone and in the glycosaminoglycans of the alveolar bone. However, the adaptive role of the dynamic bone-periodontal ligament-cementum complex to prolonged reduced loads has not been fully explained to date, especially with regard to concurrent adaptations of bone, periodontal ligament and cementum. Therefore, in the present study, using a rat model, the temporal effect of reduced functional loads on physical characteristics, such as morphology and mechanical properties and the mineral profiles of the bone-periodontal ligament-cementum complex was investigated. Two groups of 6-wk-old male Sprague-Dawley rats were fed nutritionally identical food with a stiffness range of 127-158 N/mm for hard pellet or 0.3-0.5 N/mm for soft powder forms. Spatio-temporal adaptation of the bone-periodontal ligament-cementum complex was identified by mapping changes in the following: (i) periodontal ligament collagen orientation and birefringence using polarized light microscopy, bone and cementum adaptation using histochemistry, and bone and cementum morphology using micro-X-ray computed tomography; (ii) mineral profiles of the periodontal ligament-cementum and periodontal ligament-bone interfaces by X-ray attenuation; and (iii) microhardness of bone and cementum by microindentation of specimens at ages 6, 8, 12 and 15 wk. Reduced functional loads over prolonged time resulted in the following adaptations: (i) altered periodontal ligament orientation and decreased periodontal ligament collagen birefringence, indicating decreased periodontal ligament turnover rate and decreased apical cementum resorption; (ii) a gradual increase in X-ray attenuation, owing to mineral differences, at the periodontal ligament-bone and periodontal ligament-cementum interfaces, without significant differences in the gradients for either group; (iii) significantly (p < 0.05) lower microhardness of alveolar bone (0.93 ± 0.16 GPa) and secondary cementum (0.803 ± 0.13 GPa) compared with the higher load group insert bone = (1.10 ± 0.17 and cementum = 0.940 ± 0.15 GPa, respectively) at 15 wk, indicating a temporal effect of loads on the local mineralization of bone and cementum. Based on the results from this study, the effect of reduced functional loads for a prolonged time could differentially affect morphology, mechanical properties and mineral variations of the local load-bearing sites in the bone-periodontal ligament-cementum complex. These observed local changes in turn could help to explain the overall biomechanical function and adaptations of the tooth-bone joint. From a clinical translation perspective, our study provides an insight into modulation of load on the complex for improved tooth function during periodontal disease and/or orthodontic and prosthodontic treatments. © 2011 John Wiley & Sons A/S.
Medical image segmentation based on SLIC superpixels model
NASA Astrophysics Data System (ADS)
Chen, Xiang-ting; Zhang, Fan; Zhang, Ruo-ya
2017-01-01
Medical imaging has been widely used in clinical practice. It is an important basis for medical experts to diagnose the disease. However, medical images have many unstable factors such as complex imaging mechanism, the target displacement will cause constructed defect and the partial volume effect will lead to error and equipment wear, which increases the complexity of subsequent image processing greatly. The segmentation algorithm which based on SLIC (Simple Linear Iterative Clustering, SLIC) superpixels is used to eliminate the influence of constructed defect and noise by means of the feature similarity in the preprocessing stage. At the same time, excellent clustering effect can reduce the complexity of the algorithm extremely, which provides an effective basis for the rapid diagnosis of experts.
[The vanadium compounds: chemistry, synthesis, insulinomimetic properties].
Fedorova, E V; Buriakina, A V; Vorob'eva, N M; Baranova, N I
2014-01-01
The review considers the biological role of vanadium, its participation in various processes in humans and other mammals, and the anti-diabetic effect of its compounds. Vanadium salts have persistent hypoglycemic and antihyperlipidemic effects and reduce the probability of secondary complications in animals with experimental diabetes. The review contains a detailed description of all major synthesized vanadium complexes having antidiabetic activity. Currently, vanadium complexes with organic ligands are more effective and safer than the inorganic salts. Despite the proven efficacy of these compounds as the anti-diabetic agents in animal models, only one organic complex of vanadium is currently under the second phase of clinical trials. All of the considered data suggest that vanadium compound are a new promising class of drugs in modern pharmacotherapy of diabetes.