Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
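A minimal illustration of the kind of variance-based GSA described above: first-order Sobol indices estimated with the Saltelli pick-and-freeze scheme on a toy stand-in for the ABM's glucose-yield output. The model function, parameter names (half_life, exo_act, frac_endo), and bounds are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def glucose_yield(p):
    # Toy stand-in for the ABM output: yield rises with enzyme half-life
    # and exoglucanase activity, saturating in each (purely illustrative).
    half_life, exo_act, frac_endo = p
    return (1 - np.exp(-0.1 * half_life)) * exo_act / (1 + exo_act) * (0.5 + frac_endo)

def sobol_first_order(model, bounds, n=4096):
    d = len(bounds)
    lo, hi = np.array(bounds).T
    A = rng.uniform(lo, hi, (n, d))
    B = rng.uniform(lo, hi, (n, d))
    fA = np.apply_along_axis(model, 1, A)
    fB = np.apply_along_axis(model, 1, B)
    var = np.var(np.concatenate([fA, fB]))
    S = []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # resample only parameter i
        fABi = np.apply_along_axis(model, 1, ABi)
        S.append(np.mean(fB * (fABi - fA)) / var)   # Saltelli (2010) estimator
    return S

bounds = [(1, 48), (0.1, 10), (0.1, 0.9)]   # half-life [h], activity, endo fraction
print(sobol_first_order(glucose_yield, bounds))
```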
Cost drivers and resource allocation in military health care systems.
Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R
2007-03-01
This study illustrates the feasibility of incorporating technical efficiency considerations into the funding of military hospitals and identifies the primary drivers of hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R² = 0.98). This model also proved reliable in forecasting (R² = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
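A sketch of the logarithmic-linear cost specification described above, fitted by ordinary least squares on synthetic stand-in data (the study's hospital data are not reproduced here); the regressors follow the abstract's volume/complexity/efficiency triplet, and the generating exponents are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 72                                   # e.g., 24 hospitals x 3 years
volume = rng.uniform(1e3, 5e4, n)
complexity = rng.uniform(0.8, 2.0, n)    # case-mix index
efficiency = rng.uniform(0.6, 1.0, n)    # DEA technical-efficiency score
cost = 500 * volume**0.9 * complexity**1.2 * efficiency**-0.5 * rng.lognormal(0, 0.05, n)

# ln(cost) = b0 + b1*ln(volume) + b2*ln(complexity) + b3*ln(efficiency)
X = np.column_stack([np.ones(n), np.log(volume), np.log(complexity), np.log(efficiency)])
beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
resid = np.log(cost) - X @ beta
r2 = 1 - resid.var() / np.log(cost).var()
print(beta, r2)    # recovers roughly (ln 500, 0.9, 1.2, -0.5) and a high R^2
```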
Some Observations on the Current Status of Performing Finite Element Analyses
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.; Knight, Norman F., Jr; Shivakumar, Kunigal N.
2015-01-01
Aerospace structures are complex high-performance structures. Advances in reliable and efficient computing and modeling tools are enabling analysts to consider complex configurations, build complex finite element models, and perform analyses rapidly. Many of today's early-career engineers are very proficient in the use of modern computers, computing engines, complex software systems, and visualization tools. These young engineers are becoming increasingly efficient at building complex 3D models of complicated aerospace components. However, current trends show a blind acceptance of finite element analysis results. This paper is aimed at raising awareness of this situation. Examples of common encounters are presented. To counter these trends, some guidelines and suggestions for analysts, senior engineers, and educators are offered.
Modeling complexity in engineered infrastructure system: Water distribution network as an example
NASA Astrophysics Data System (ADS)
Zeng, Fang; Li, Xiang; Li, Ke
2017-02-01
The complex topology and adaptive behavior of infrastructure systems are driven by both self-organization of demand and rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed combining local optimization rules and engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in some structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and co-evolution of demand nodes and the network are important for simulating real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. In contrast, the improvement in efficiency from engineering optimization is limited and relatively insignificant. Redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
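A toy growth rule in this spirit, assuming each new demand node attaches to the existing node that minimizes pipe length plus a weighted path cost back to the source; the weight lam and the outward-expansion law are invented placeholders, and the real model's demand scaling law and looped (redundant) pipes are omitted.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(12)
G = nx.Graph()
G.add_node(0, pos=np.zeros(2))                          # node 0 = water source
lam = 0.3                                               # path-cost weight (arbitrary)
for new in range(1, 200):
    p = rng.uniform(-1, 1, 2) * np.sqrt(new / 200)      # city expands outward
    dist = nx.single_source_dijkstra_path_length(G, 0, weight="w")
    best = min(G.nodes, key=lambda u: np.linalg.norm(p - G.nodes[u]["pos"])
                                      + lam * dist[u])  # local optimization rule
    G.add_node(new, pos=p)
    G.add_edge(new, best, w=float(np.linalg.norm(p - G.nodes[best]["pos"])))
print(G.number_of_nodes(), G.number_of_edges())         # a tree-like backbone
```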
Lim, Hooi Been; Baumann, Dirk; Li, Er-Ping
2011-03-01
Wireless body area network (WBAN) is a new enabling system with promising applications in areas such as remote health monitoring and interpersonal communication. Reliable and optimum design of a WBAN system relies on a good understanding and in-depth studies of the wave propagation around a human body. However, the human body is a very complex structure and is computationally demanding to model. This paper aims to investigate the effects of the numerical model's structure complexity and feature details on the simulation results. Depending on the application, a simplified numerical model that meets desired simulation accuracy can be employed for efficient simulations. Measurements of ultra wideband (UWB) signal propagation along a human arm are performed and compared to the simulation results obtained with numerical arm models of different complexity levels. The influence of the arm shape and size, as well as tissue composition and complexity is investigated.
Kossert, K; Cassette, Ph; Carles, A Grau; Jörg, G; Gostomski, Christoph Lierse V; Nähle, O; Wolf, Ch
2014-05-01
The triple-to-double coincidence ratio (TDCR) method is frequently used to measure the activity of radionuclides decaying by pure β emission or electron capture (EC). Some radionuclides with more complex decays have also been studied, but accurate calculations of decay branches which are accompanied by many coincident γ transitions have not yet been investigated. This paper describes recent extensions of the model to make efficiency computations for more complex decay schemes possible. In particular, the MICELLE2 program that applies a stochastic approach of the free parameter model was extended. With an improved code, efficiencies for β⁻, β⁺ and EC branches with up to seven coincident γ transitions can be calculated. Moreover, a new parametrization for the computation of electron stopping powers has been implemented to compute the ionization quenching function of 10 commercial scintillation cocktails. In order to demonstrate the capabilities of the TDCR method, the following radionuclides are discussed: ¹⁶⁶ᵐHo (complex β⁻/γ), ⁵⁹Fe (complex β⁻/γ), ⁶⁴Cu (β⁻, β⁺, EC and EC/γ) and ²²⁹Th in equilibrium with its progenies (decay chain with many α, β and complex β⁻/γ transitions). © 2013 Published by Elsevier Ltd.
Near-optimal experimental design for model selection in systems biology.
Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M
2013-10-15
Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method achieves the best constant approximation factor obtainable in polynomial time, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. The toolbox 'NearOED' is available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).
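The near-optimality guarantee of this kind comes from greedy maximization of a monotone submodular design criterion; below is a generic sketch of that greedy loop, with a placeholder gain function standing in for the paper's model-discrimination objective (the candidate set and scoring are invented).

```python
import itertools, math

def greedy_design(candidates, gain, budget):
    """Pick `budget` measurements, each time adding the candidate with the
    largest marginal gain. For a monotone submodular `gain`, this is within
    a (1 - 1/e) factor of the optimal design (Nemhauser et al., 1978)."""
    chosen = []
    remaining = set(candidates)
    for _ in range(budget):
        best = max(remaining, key=lambda c: gain(chosen + [c]) - gain(chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy usage: candidates are (readout, time-point) pairs; `gain` has
# diminishing returns in design size, mimicking informativeness.
candidates = list(itertools.product(["A", "B"], [1, 2, 4, 8]))
def gain(design):
    return math.log1p(sum(t for _, t in design))

print(greedy_design(candidates, gain, budget=3))
```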
A method to efficiently apply a biogeochemical model to a landscape.
Robert E. Kennedy; David P. Turner; Warren B. Cohen; Michael Guzy
2006-01-01
Biogeochemical models offer an important means of understanding carbon dynamics, but the computational complexity of many models means that modeling all grid cells on a large landscape is computationally burdensome. Because most biogeochemical models ignore adjacency effects between cells, however, a more efficient approach is possible. Recognizing that spatial...
ADAM: analysis of discrete models of biological systems using computer algebra.
Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard
2011-07-20
Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
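For scale, a brute-force attractor search on a toy 3-gene Boolean network: this enumerates all 2^n states, which is exactly the exponential cost ADAM sidesteps by recasting the update rules as polynomial systems over F_2 and solving x = F(x) algebraically. The update rules here are invented.

```python
from itertools import product

def F(x):
    a, b, c = x
    return (int(b and not c), int(a or c), int(a))   # toy update rules

states = list(product([0, 1], repeat=3))
print("fixed points:", [s for s in states if F(s) == s])

def attractor(x):                 # follow the trajectory until it cycles
    seen = []
    while x not in seen:
        seen.append(x)
        x = F(x)
    cycle = tuple(seen[seen.index(x):])
    rots = [cycle[i:] + cycle[:i] for i in range(len(cycle))]
    return min(rots)              # canonical rotation, so the set deduplicates

print("attractors:", {attractor(s) for s in states})
```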
Role of Edges in Complex Network Epidemiology
NASA Astrophysics Data System (ADS)
Zhang, Hao; Jiang, Zhi-Hong; Wang, Hui; Xie, Fei; Chen, Chao
2012-09-01
In complex network epidemiology, diseases spread along contacting edges between individuals, and different edges may play different roles in epidemic outbreaks. Quantifying the efficiency of edges is an important step towards arresting epidemics. In this paper, we study the efficiency of edges in general susceptible-infected-recovered (SIR) models and introduce the transmission capability to measure the efficiency of edges. Results show that deleting edges with the highest transmission capability greatly decreases epidemics on scale-free networks. Based on the message-passing approach, we obtain an exact mathematical solution on configuration-model networks with edge deletion in the large-size limit.
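A rough experiment in this vein, with edge betweenness standing in for the paper's transmission capability (which is defined through the message-passing solution): delete the top 5% of edges and compare mean outbreak sizes of a discrete-time SIR on a scale-free network.

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(2000, 3)        # scale-free test network

def outbreak_fraction(G, beta=0.3):
    """Discrete-time SIR with one-step recovery, from a random seed node."""
    infected = {random.choice(list(G))}
    recovered = set()
    while infected:
        new = {v for u in infected for v in G[u]
               if v not in infected and v not in recovered
               and random.random() < beta}
        recovered |= infected
        infected = new
    return len(recovered) / G.number_of_nodes()

eb = nx.edge_betweenness_centrality(G)       # proxy for transmission capability
top = sorted(eb, key=eb.get, reverse=True)[:G.number_of_edges() // 20]
H = G.copy()
H.remove_edges_from(top)

print("intact:", sum(outbreak_fraction(G) for _ in range(20)) / 20)
print("pruned:", sum(outbreak_fraction(H) for _ in range(20)) / 20)
```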
Efficient FFT Algorithm for Psychoacoustic Model of the MPEG-4 AAC
NASA Astrophysics Data System (ADS)
Lee, Jae-Seong; Lee, Chang-Joon; Park, Young-Cheol; Youn, Dae-Hee
This paper proposes an efficient FFT algorithm for the Psycho-Acoustic Model (PAM) of MPEG-4 AAC. The proposed algorithm synthesizes FFT coefficients from MDCT and MDST coefficients through circular convolution. The complexity of computing the MDCT and MDST coefficients is approximately half that of the original FFT. We also design a new PAM based on the proposed FFT algorithm, which has 15% lower computational complexity than the original PAM without degradation of sound quality. Subjective as well as objective test results are presented to confirm the efficiency of the proposed FFT computation algorithm and the PAM.
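For orientation, a direct-form computation of the MDCT and MDST of one frame; the paper's contribution, synthesizing FFT coefficients from these via circular convolution, is not reproduced here. The closely related MCLT simply combines the two transforms as C - jS.

```python
import numpy as np

def mdct_mdst(x):
    """Direct-form MDCT and MDST of one 2N-sample frame (no window; for
    illustration only -- a real codec uses a lapped, windowed transform)."""
    N = len(x) // 2
    n = np.arange(2 * N)[:, None]
    k = np.arange(N)[None, :]
    arg = np.pi / N * (n + 0.5 + N / 2) * (k + 0.5)
    return x @ np.cos(arg), x @ np.sin(arg)

x = np.random.default_rng(2).standard_normal(512)
C, S = mdct_mdst(x)
X_mclt = C - 1j * S     # MCLT: a complex spectrum built from MDCT/MDST
print(X_mclt.shape)     # 256 complex bins from a 512-sample frame
```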
Leong, Colleen G; Boyd, Caroline M; Roush, Kaleb S; Tenente, Ricardo; Lang, Kristine M; Lostroh, C Phoebe
2017-10-01
Natural transformation is the acquisition of new genetic material via the uptake of exogenous DNA by competent bacteria. Acinetobacter baylyi is a model organism for natural transformation. Here we focus on the natural transformation of A. baylyi ATCC 33305 grown in complex media and seek environmental conditions that appreciably affect transformation efficiency. We find that the transformation efficiency of A. baylyi is a resilient characteristic that remains high under most conditions tested. We do find several distinct conditions that alter natural transformation efficiency, including the addition of succinate, Fe²⁺ (ferrous) iron chelation, and substitution of sodium ions with potassium ones. These distinct conditions could be useful for fine-tuning transformation efficiency for researchers using A. baylyi as a model organism to study natural transformation.
A hydrological emulator for global applications - HE v1.0.0
NASA Astrophysics Data System (ADS)
Liu, Yaling; Hejazi, Mohamad; Li, Hongyi; Zhang, Xuesong; Leng, Guoyong
2018-03-01
While global hydrological models (GHMs) are very useful in exploring water resources and interactions between the Earth and human systems, their use often requires numerous model inputs, complex model calibration, and high computation costs. To overcome these challenges, we construct an efficient open-source and ready-to-use hydrological emulator (HE) that can mimic complex GHMs at a range of spatial scales (e.g., basin, region, globe). More specifically, we construct both a lumped and a distributed scheme of the HE based on the monthly abcd model to explore the tradeoff between computational cost and model fidelity. Model predictability and computational efficiency are evaluated in simulating global runoff from 1971 to 2010 with both the lumped and distributed schemes. The results are compared against the runoff product from the widely used Variable Infiltration Capacity (VIC) model. Our evaluation indicates that the lumped and distributed schemes present comparable results regarding annual total quantity, spatial pattern, and temporal variation of the major water fluxes (e.g., total runoff, evapotranspiration) across the global 235 basins (e.g., correlation coefficient r between the annual total runoff from either of these two schemes and the VIC is > 0.96), except for several cold (e.g., Arctic, interior Tibet), dry (e.g., North Africa) and mountainous (e.g., Argentina) regions. Compared against the monthly total runoff product from the VIC (aggregated from daily runoff), the global mean Kling-Gupta efficiencies are 0.75 and 0.79 for the lumped and distributed schemes, respectively, with the distributed scheme better capturing spatial heterogeneity. Notably, the computational efficiency of the lumped scheme is 2 orders of magnitude higher than that of the distributed scheme and 7 orders of magnitude higher than that of the VIC model. A case study of uncertainty analysis for the world's 16 basins with the highest annual streamflow is conducted using 100 000 model simulations, and it demonstrates the lumped scheme's extraordinary advantage in computational efficiency. Our results suggest that the revised lumped abcd model can serve as an efficient and reasonable HE for complex GHMs and is suitable for broad practical use, and the distributed scheme is also an efficient alternative if spatial heterogeneity is of more interest.
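A compact implementation of the monthly abcd water-balance model (Thomas, 1981) that the emulator is built on; the parameter values, initial storages, and forcing series below are arbitrary illustrations, not the paper's calibrated values.

```python
import numpy as np

def abcd(P, PET, a=0.98, b=250.0, c=0.5, d=0.1, S0=100.0, G0=10.0):
    """Monthly abcd water-balance model (Thomas, 1981); P and PET in mm."""
    S, G, Q = S0, G0, []
    for p, pet in zip(P, PET):
        W = p + S                               # available water
        opp = (W + b) / (2 * a)
        Y = opp - np.sqrt(opp**2 - W * b / a)   # evapotranspiration opportunity
        S = Y * np.exp(-pet / b)                # soil-moisture carryover
        avail = W - Y                           # water leaving the soil column
        G = (G + c * avail) / (1 + d)           # groundwater storage
        Q.append((1 - c) * avail + d * G)       # direct runoff + baseflow
    return np.array(Q)

rng = np.random.default_rng(3)
P = rng.gamma(2.0, 40.0, 120)                         # synthetic monthly rain
PET = 60 + 40 * np.sin(np.linspace(0, 20 * np.pi, 120))
print(abcd(P, PET)[:12].round(1))                     # first year of runoff, mm
```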
NASA Astrophysics Data System (ADS)
Furno, Mauro; Rosenow, Thomas C.; Gather, Malte C.; Lüssem, Björn; Leo, Karl
2012-10-01
We report on a theoretical framework for the efficiency analysis of complex, multi-emitter organic light emitting diodes (OLEDs). The calculation approach makes use of electromagnetic modeling to quantify the overall OLED photon outcoupling efficiency and a phenomenological description for electrical and excitonic processes. From the comparison of optical modeling results and measurements of the total external quantum efficiency, we obtain reliable estimates of internal quantum yield. As application of the model, we analyze high-efficiency stacked white OLEDs and comment on the various efficiency loss channels present in the devices.
A scalable plant-resolving radiative transfer model based on optimized GPU ray tracing
USDA-ARS?s Scientific Manuscript database
A new model for radiative transfer in participating media and its application to complex plant canopies is presented. The goal was to be able to efficiently solve complex canopy-scale radiative transfer problems while also representing sub-plant heterogeneity. In the model, individual leaf surfaces ...
An Efficient Model-based Diagnosis Engine for Hybrid Systems Using Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Narasimhan, Sriram; Roychoudhury, Indranil; Daigle, Matthew; Pulido, Belarmino
2013-01-01
Complex hybrid systems are present in a large range of engineering applications, like mechanical systems, electrical circuits, or embedded computation systems. The behavior of these systems is made up of continuous and discrete event dynamics that increase the difficulty of accurate and timely online fault diagnosis. The Hybrid Diagnosis Engine (HyDE) offers flexibility to the diagnosis application designer to choose the modeling paradigm and the reasoning algorithms. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. However, HyDE faces some problems regarding performance in terms of complexity and time. Our focus in this paper is on developing efficient model-based methodologies for online fault diagnosis in complex hybrid systems. To do this, we propose a diagnosis framework where structural model decomposition is integrated within the HyDE diagnosis framework to reduce the computational complexity associated with the fault diagnosis of hybrid systems. As a case study, we apply our approach to a diagnostic testbed, the Advanced Diagnostics and Prognostics Testbed (ADAPT), using real data.
A framework for scalable parameter estimation of gene circuit models using structural information.
Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin
2013-07-01
Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and to ultimately facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time series microarray data set. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied to modeling of gene circuits, our results suggest that more tailored approaches that exploit domain-specific information may be key to reverse engineering complex biological systems. The software is available at http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
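A sketch of the decoupling idea: fit each rate equation separately, supplying the other species' trajectories by interpolating the measurements, so the coupled system is never integrated jointly during estimation. The toy two-gene model and its parameters are invented; the paper's framework adds iterative refinement not shown here.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d
from scipy.optimize import least_squares

# Synthetic "measurements" from a toy two-gene circuit (ground truth known).
t = np.linspace(0, 20, 41)
def full(tt, z, a1=4.0, a2=3.0, d1=0.5, d2=0.4):
    x, y = z
    return [a1 / (1 + y**2) - d1 * x, a2 / (1 + x**2) - d2 * y]
data = solve_ivp(full, (0, 20), [1.0, 1.0], t_eval=t).y

# Decoupling: fit dx/dt = a/(1+y^2) - d*x on its own, with y(t) interpolated
# from the measurements rather than co-integrated.
y_of_t = interp1d(t, data[1], kind="cubic", fill_value="extrapolate")
def residuals(p):
    rhs = lambda tt, x: p[0] / (1 + y_of_t(tt)**2) - p[1] * x
    xs = solve_ivp(rhs, (0, 20), [1.0], t_eval=t).y[0]
    return xs - data[0]

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=(0.0, 10.0))
print(fit.x)    # expected to approach the true (a1, d1) = (4.0, 0.5)
```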
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach, developed for efficient and comprehensive design, breaks down with the problem of size: combinatorial explosion in experimentation and model building as the number of variables grows, so that both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
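For concreteness, the statistic itself: the SFS of a 0/1 haplotype matrix, where SFS[i] counts segregating sites whose derived allele is carried by exactly i of the n sampled haplotypes. (momi's contribution, computing the expected joint SFS under a demographic model, is far more involved; the data below are synthetic.)

```python
import numpy as np

rng = np.random.default_rng(4)
n, sites = 20, 10_000
freqs = rng.beta(0.2, 2.0, sites)            # skewed toward rare variants
haps = rng.random((n, sites)) < freqs        # rows = haplotypes, cols = sites

counts = haps.sum(axis=0)                    # derived-allele count per site
sfs = np.bincount(counts, minlength=n + 1)[1:n]   # drop the monomorphic bins
print(sfs[:10])                              # excess of singletons, doubletons, ...
```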
Efficient field-theoretic simulation of polymer solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villet, Michael C.; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu; Department of Materials, University of California, Santa Barbara, California 93106
2014-12-14
We present several developments that facilitate the efficient field-theoretic simulation of polymers by complex Langevin sampling. A regularization scheme using finite Gaussian excluded volume interactions is used to derive a polymer solution model that appears free of ultraviolet divergences and hence is well-suited for lattice-discretized field theoretic simulation. We show that such models can exhibit ultraviolet sensitivity, a numerical pathology that dramatically increases sampling error in the continuum lattice limit, and further show that this pathology can be eliminated by appropriate model reformulation by variable transformation. We present an exponential time differencing algorithm for integrating complex Langevin equations for field-theoretic simulation, and show that the algorithm exhibits excellent accuracy and stability properties for our regularized polymer model. These developments collectively enable substantially more efficient field-theoretic simulation of polymers, and illustrate the importance of simultaneously addressing analytical and numerical pathologies when implementing such computations.
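A toy version of exponential time differencing for a complex Langevin equation, on the exactly solvable Gaussian action S = sigma z²/2: the stiff linear drift is integrated exactly, and the noise is added at Euler order. This is only a sketch of the scheme's idea; the paper's lattice field-theoretic setting is much richer.

```python
import numpy as np

# Complex Langevin for S = sigma*z^2/2: dz = -sigma*z dt + dW, sigma complex.
# The stationary expectation <z^2> should converge to 1/sigma.
rng = np.random.default_rng(5)
sigma = 1.0 + 0.8j
h, steps = 0.01, 100_000
decay = np.exp(-sigma * h)           # exact integration of the linear drift

z, samples = 0.0 + 0.0j, []
for i in range(steps):
    z = decay * z + np.sqrt(2 * h) * rng.standard_normal()   # Euler-order noise
    if i > 1000:                     # discard burn-in
        samples.append(z)

print(np.mean(np.array(samples) ** 2), 1 / sigma)
```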
Influence of the Investor's Behavior on the Complexity of the Stock Market
NASA Astrophysics Data System (ADS)
Atman, A. P. F.; Gonçalves, Bruna Amin
2012-04-01
One of the pillars of finance theory is the efficient-market hypothesis, which is used to analyze the stock market. However, in recent years, this hypothesis has been questioned by a number of studies showing evidence of unusual behaviors in the returns of financial assets ("anomalies") caused by behavioral aspects of the economic agents. Therefore, it is time to initiate a debate about the efficient-market hypothesis and "behavioral finance." We here introduce a cellular automaton model to study stock market complexity, considering different behaviors of the economic agents. From the analysis of the stationary pattern of investment observed in the simulations and the Hurst exponents obtained for the time series of the stock index, we draw conclusions concerning the complexity of the model compared to real markets. We also investigate which conditions of the investors can influence the statements of the efficient-market hypothesis.
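One ingredient of such an analysis, sketched below: a rescaled-range (R/S) estimate of the Hurst exponent of a return series. The window sizes and the i.i.d. test series are arbitrary; H near 0.5 indicates no long memory, consistent with market efficiency.

```python
import numpy as np

def hurst_rs(x, windows=(16, 32, 64, 128, 256)):
    """Rescaled-range (R/S) estimate of the Hurst exponent of series x."""
    rs, ws = [], []
    for w in windows:
        vals = []
        for start in range(0, len(x) - w + 1, w):     # non-overlapping windows
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())
            r = dev.max() - dev.min()                 # range of cumulative sums
            s = seg.std()
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
        ws.append(w)
    slope, _ = np.polyfit(np.log(ws), np.log(rs), 1)  # log R/S ~ H log w
    return slope

rng = np.random.default_rng(6)
returns = rng.standard_normal(4096)    # i.i.d. returns: true H = 0.5
print(hurst_rs(returns))               # ~0.5-0.6 (small-sample R/S bias is upward)
```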
Hattrem, Magnus N; Kristiansen, Kåre A; Aachmann, Finn L; Dille, Morten J; Draget, Kurt I
2015-06-20
A challenge in formulating water-in-oil-in-water (W/O/W) emulsions is the uncontrolled release of the encapsulated compound prior to application. Pharmaceuticals and nutraceuticals usually have an amphipathic nature, which may contribute to leakage of the active ingredient. In the present study, cyclodextrins (CyDs) were used to impart a change in the relative polarity and size of a model compound (ibuprofen) by the formation of inclusion complexes. Various inclusion complexes (2-hydroxypropyl (HP)-β-CyD-, α-CyD- and γ-CyD-ibuprofen) were prepared and presented within W/O/W emulsions, and the initial and long-term encapsulation efficiency was investigated. HP-β-CyD-ibuprofen provided the highest encapsulation of ibuprofen in comparison to a W/O/W emulsion with unassociated ibuprofen confined within the inner water phase, with a four-fold increase in the encapsulation efficiency. An improved, although lower, encapsulation efficiency was obtained for the inclusion complex γ-CyD-ibuprofen in comparison to HP-β-CyD-ibuprofen, whereas α-CyD-ibuprofen had a similar encapsulation efficiency to that of unassociated ibuprofen. The lower encapsulation efficiency of ibuprofen in combination with α-CyD and γ-CyD was attributed to a lower association constant for the γ-CyD-ibuprofen inclusion complex and the ability of α-CyD to form inclusion complexes with fatty acids. For the W/O/W emulsion prepared with HP-β-CyD-ibuprofen, the highest encapsulation of ibuprofen was obtained at hyper- and iso-osmotic conditions and by using an excess molar ratio of CyD to ibuprofen. In the last part of the study, it was suggested that the chemical modification of the HP-β-CyD molecule did not influence the encapsulation of ibuprofen, as a similar encapsulation efficiency was obtained for an inclusion complex prepared with mono-1-glucose-β-CyD. Copyright © 2015 Elsevier B.V. All rights reserved.
Spatial operator algebra for flexible multibody dynamics
NASA Technical Reports Server (NTRS)
Jain, A.; Rodriguez, G.
1993-01-01
This paper presents an approach to modeling the dynamics of flexible multibody systems such as flexible spacecraft and limber space robotic systems. A large number of degrees of freedom and complex dynamic interactions are typical in these systems. This paper uses spatial operators to develop efficient recursive algorithms for the dynamics of these systems. This approach very efficiently manages complexity by means of a hierarchy of mathematical operations.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI for reusing CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, requiring no JNI bridging code.
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) optimization technique based on the steepest descent algorithm is known to perform poorly and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with complexity and nonlinearity in such complex geophysical records. We examined the efficiency of trained LMA and RB networks using 2-D synthetic resistivity data and then applied them to actual field vertical electrical sounding (VES) resistivity data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN reconstruction resistivity results are compared with the results of existing inversion approaches, and the two are in good agreement. The depths and resistivity structures obtained by the ANN methods also correlate well with the known drilling results and geologic boundaries. The application of the above ANN algorithms proves to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.
Studying light-harvesting models with superconducting circuits.
Potočnik, Anton; Bargerbos, Arno; Schröder, Florian A Y N; Khan, Saeed A; Collodo, Michele C; Gasparinetti, Simone; Salathé, Yves; Creatore, Celestino; Eichler, Christopher; Türeci, Hakan E; Chin, Alex W; Wallraff, Andreas
2018-03-02
The process of photosynthesis, the main source of energy in the living world, converts sunlight into chemical energy. The high efficiency of this process is believed to be enabled by an interplay between the quantum nature of molecular structures in photosynthetic complexes and their interaction with the environment. Investigating these effects in biological samples is challenging due to their complex and disordered structure. Here we experimentally demonstrate a technique for studying photosynthetic models based on superconducting quantum circuits, which complements existing experimental, theoretical, and computational approaches. We demonstrate a high degree of freedom in design and experimental control of our approach based on a simplified three-site model of a pigment protein complex with realistic parameters scaled down in energy by a factor of 10⁵. We show that the excitation transport between quantum-coherent sites disordered in energy can be enabled through the interaction with environmental noise. We also show that the efficiency of the process is maximized for structured noise resembling intramolecular phononic environments found in photosynthetic complexes.
Chenu, K; van Oosterom, E J; McLean, G; Deifel, K S; Fletcher, A; Geetika, G; Tirfessa, A; Mace, E S; Jordan, D R; Sulman, R; Hammer, G L
2018-02-21
Following advances in genetics, genomics, and phenotyping, trait selection in breeding is limited by our ability to understand interactions within the plants and with their environments, and to target traits of most relevance for the target population of environments. We propose an integrated approach that combines insights from crop modelling, physiology, genetics, and breeding to identify traits valuable for yield gain in the target population of environments, develop relevant high-throughput phenotyping platforms, and identify genetic controls and their values in production environments. This paper uses transpiration efficiency (biomass produced per unit of water used) as an example of a complex trait of interest to illustrate how the approach can guide modelling, phenotyping, and selection in a breeding program. We believe that this approach, by integrating insights from diverse disciplines, can increase the resource use efficiency of breeding programs for improving yield gains in target populations of environments.
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series; watersheds acted as information filters in the hydrologic conversion from precipitation to streamflow, making streamflow less random and more complex than precipitation. The Nash-Sutcliffe efficiency increased as model complexity increased, but in many cases several models had efficiency values that were not statistically distinguishable from each other. In such cases, ranking models by the closeness of the information theory-based metrics of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
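A sketch of the symbolization step and one common reading of mean information gain (the conditional entropy of the next symbol given the preceding block); the paper's exact definitions of the three metrics may differ in detail, and the series below are synthetic.

```python
import numpy as np
from collections import Counter

def symbolize(x, n_symbols=4):
    """Map a series to symbols by which quantile bin each value falls in."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)

def mean_information_gain(sym, L=2):
    """H(next symbol | previous L symbols), estimated from block counts."""
    blocks = Counter(tuple(sym[i:i + L]) for i in range(len(sym) - L))
    ext = Counter(tuple(sym[i:i + L + 1]) for i in range(len(sym) - L))
    n = sum(ext.values())
    H_ext = -sum(c / n * np.log2(c / n) for c in ext.values())
    H_blk = -sum(c / n * np.log2(c / n) for c in blocks.values())
    return H_ext - H_blk

rng = np.random.default_rng(7)
noise = rng.standard_normal(5000)                          # precipitation-like
flow = np.convolve(noise, np.ones(30) / 30, mode="valid")  # smoother, river-like
print(mean_information_gain(symbolize(noise)),             # ~2 bits (random)
      mean_information_gain(symbolize(flow)))              # smaller (filtered)
```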
NASA Astrophysics Data System (ADS)
Muscoloni, Alessandro; Vittorio Cannistraci, Carlo
2018-05-01
The investigation of the hidden metric space behind complex network topologies is a fervid topic in current network science, and the hyperbolic space is one of the most studied, because it seems associated with the structural organization of many real complex systems. The popularity-similarity-optimization (PSO) model simulates how random geometric graphs grow in the hyperbolic space, generating realistic networks with clustering, small-worldness, scale-freeness and rich-clubness. However, it fails to reproduce an important feature of real complex networks: community organization. The geometrical-preferential-attachment (GPA) model was recently developed to confer on the PSO a soft community structure as well, obtained by forcing different angular regions of the hyperbolic disk to have variable levels of attractiveness. However, the number and size of the communities cannot be explicitly controlled in the GPA, which is a clear limitation for real applications. Here, we introduce the nonuniform PSO (nPSO) model. Unlike GPA, the nPSO generates synthetic networks in the hyperbolic space where heterogeneous angular node attractiveness is forced by sampling the angular coordinates from a tailored nonuniform probability distribution (for instance, a mixture of Gaussians). The nPSO differs from GPA in three other aspects: it allows one to explicitly fix the number and size of communities; it allows one to tune their mixing property by means of the network temperature; and it is efficient at generating networks with high clustering. Several tests on the detectability of the community structure in nPSO synthetic networks and extensive investigations of their structural properties confirm that the nPSO is a valid and efficient model for generating realistic complex networks with communities.
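The nPSO's distinctive step in isolation, as a sketch: angular coordinates drawn from a Gaussian mixture with equally spaced centers, so community count and spread are explicit knobs. Radial coordinates and preferential attachment would then proceed as in the standard PSO model (not shown); the function name and defaults are invented.

```python
import numpy as np

def sample_angles(n_nodes, n_communities, spread=0.1, rng=None):
    """Nonuniform angular coordinates: community centers equally spaced on
    the circle, node angles drawn from a Gaussian mixture around them."""
    rng = rng or np.random.default_rng()
    centers = 2 * np.pi * np.arange(n_communities) / n_communities
    which = rng.integers(n_communities, size=n_nodes)   # equal-size communities
    angles = centers[which] + spread * rng.standard_normal(n_nodes)
    return angles % (2 * np.pi), which

angles, community = sample_angles(1000, 8, rng=np.random.default_rng(8))
print(np.round(angles[:5], 2), community[:5])
```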
An efficient formulation of robot arm dynamics for control and computer simulation
NASA Astrophysics Data System (ADS)
Lee, C. S. G.; Nigam, R.
This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.
An Initial Multi-Domain Modeling of an Actively Cooled Structure
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur
1997-01-01
A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).
Assessment of wear dependence parameters in complex model of cutting tool wear
NASA Astrophysics Data System (ADS)
Antsev, A. V.; Pasko, N. I.; Antseva, N. V.
2018-03-01
This paper addresses the wear dependence of the generic efficient life period of cutting tools, taken as an aggregate of the law of tool wear rate distribution and the dependence of this law's parameters on the cutting mode, factoring in random effects, as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessing wear dependence parameters in a complex model of cutting tool wear is provided. The technique is supported by a numerical example.
A density-based clustering model for community detection in complex networks
NASA Astrophysics Data System (ADS)
Zhao, Xiang; Li, Yantao; Qu, Zehui
2018-04-01
Network clustering (or graph partitioning) is an important technique for uncovering the underlying community structures in complex networks, and it has been widely applied in various fields including astronomy, bioinformatics, sociology, and bibliometrics. In this paper, we propose a density-based clustering model for community detection in complex networks (DCCN). The key idea is to find group centers that have a higher density than their neighbors and a relatively large integrated distance from nodes with higher density. The experimental results indicate that our approach is efficient and effective for community detection in complex networks.
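A density-peaks-flavored sketch on a graph, with degree standing in for the paper's density and shortest-path distance for its integrated distance; DCCN's exact definitions are the paper's own and are not reproduced here.

```python
import networkx as nx

# Density-peaks idea: nodes with high "density" AND a large distance ("delta")
# to any higher-density node are community-center candidates.
G = nx.karate_club_graph()
density = dict(G.degree())
lengths = dict(nx.all_pairs_shortest_path_length(G))

delta = {}
for u in G:
    higher = [v for v in G if density[v] > density[u]]
    delta[u] = (min(lengths[u][v] for v in higher) if higher
                else max(lengths[u].values()))        # the global density peak

score = {u: density[u] * delta[u] for u in G}
centers = sorted(score, key=score.get, reverse=True)[:2]
print(centers)    # typically the two factions' hubs, nodes 33 and 0
```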
NASA Astrophysics Data System (ADS)
Kan, Guangyuan; He, Xiaoyan; Ding, Liuqian; Li, Jiren; Hong, Yang; Zuo, Depeng; Ren, Minglei; Lei, Tianjie; Liang, Ke
2018-01-01
Hydrological model calibration has been a hot issue for decades. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) has been proved to be an effective and robust optimization approach. However, its computational efficiency deteriorates significantly when the amount of hydrometeorological data increases. In recent years, the rise of heterogeneous parallel computing has brought hope for the acceleration of hydrological model calibration. This study proposed a parallel SCE-UA method and applied it to the calibration of a watershed rainfall-runoff model, the Xinanjiang model. The parallel method was implemented on heterogeneous computing systems using OpenMP and CUDA. Performance testing and sensitivity analysis were carried out to verify its correctness and efficiency. Comparison results indicated that heterogeneous parallel computing-accelerated SCE-UA converged much more quickly than the original serial version and possessed satisfactory accuracy and stability for the task of fast hydrological model calibration.
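The structural reason SCE-UA parallelizes well is that its complexes evolve independently between shuffles. Below is a much-simplified sketch of that idea using a process pool; the paper's implementation uses OpenMP and CUDA, and a full competitive complex evolution (CCE) step also includes contraction and random replacement moves omitted here. The objective function is a placeholder for a rainfall-runoff calibration error.

```python
import numpy as np
from multiprocessing import Pool

def objective(x):                       # stand-in for a rainfall-runoff
    return float(np.sum(x**2))          # model's calibration error

def evolve_complex(args):
    """One simplified CCE pass on a single complex: reflect the worst point
    through the centroid of the others, keeping the trial if it improves."""
    points, costs = args
    order = np.argsort(costs)
    points, costs = points[order], costs[order]
    trial = 2 * points[:-1].mean(axis=0) - points[-1]
    ft = objective(trial)
    if ft < costs[-1]:
        points[-1], costs[-1] = trial, ft
    return points, costs

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    complexes = []
    for _ in range(4):
        pts = rng.uniform(-5, 5, (6, 2))
        complexes.append((pts, np.array([objective(x) for x in pts])))
    with Pool(4) as pool:               # complexes evolve independently,
        results = pool.map(evolve_complex, complexes)   # so they parallelize
    print(min(c.min() for _, c in results))
```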
Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G
2017-08-01
Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
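The core history-matching computation, sketched: an implausibility measure dividing the data-emulator mismatch by the combined variances, with inputs ruled out where it exceeds a cutoff (3 is conventional). For stochastic simulators the paper additionally emulates the simulator's own output variance, which would appear as a further term in the denominator. The 1-D emulator stand-ins below are hypothetical; a real study would fit a Gaussian process.

```python
import numpy as np

def implausibility(x, z_obs, emu_mean, emu_var, disc_var, obs_var):
    """How many standard deviations the emulator's prediction at input x
    sits from the observation, pooling emulator, model-discrepancy, and
    observation variances."""
    return abs(z_obs - emu_mean(x)) / np.sqrt(emu_var(x) + disc_var + obs_var)

emu_mean = lambda x: 2.0 * x              # toy emulator posterior mean
emu_var = lambda x: 0.05 + 0.01 * x**2    # toy emulator posterior variance
xs = np.linspace(0, 5, 501)
I = np.array([implausibility(x, 4.2, emu_mean, emu_var, 0.1, 0.05) for x in xs])
keep = xs[I < 3.0]                        # non-implausible input region
print(keep.min(), keep.max())
```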
Elements of complexity in subsurface modeling, exemplified with three case studies
NASA Astrophysics Data System (ADS)
Freedman, Vicky L.; Truex, Michael J.; Rockhold, Mark L.; Bacon, Diana H.; Freshley, Mark D.; Wellman, Dawn M.
2017-09-01
There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this report, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: (1) modeling approach, (2) description of process, and (3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil-vapor-extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.
Whittington, James C. R.; Bogacz, Rafal
2017-01-01
To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output. PMID:28333583
Whittington, James C R; Bogacz, Rafal
2017-05-01
To efficiently learn from feedback, cortical networks need to update synaptic weights on multiple levels of cortical hierarchy. An effective and well-known algorithm for computing such changes in synaptic weights is the error backpropagation algorithm. However, in this algorithm, the change in synaptic weights is a complex function of weights and activities of neurons not directly connected with the synapse being modified, whereas the changes in biological synapses are determined only by the activity of presynaptic and postsynaptic neurons. Several models have been proposed that approximate the backpropagation algorithm with local synaptic plasticity, but these models require complex external control over the network or relatively complex plasticity rules. Here we show that a network developed in the predictive coding framework can efficiently perform supervised learning fully autonomously, employing only simple local Hebbian plasticity. Furthermore, for certain parameters, the weight change in the predictive coding model converges to that of the backpropagation algorithm. This suggests that it is possible for cortical networks with simple Hebbian synaptic plasticity to implement efficient learning algorithms in which synapses in areas on multiple levels of hierarchy are modified to minimize the error on the output.
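The relaxation-then-update scheme described in the two abstracts above is compact enough to sketch. The following Python/numpy illustration is ours, not the authors' code; the layer sizes, learning rate, and relaxation schedule are arbitrary choices. Value nodes relax to minimize the network's energy, and each weight update depends only on the local prediction error and the presynaptic activity.

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(0.0, 0.1, (10, 4))   # weights predicting the hidden layer
    W2 = rng.normal(0.0, 0.1, (3, 10))   # weights predicting the output layer
    f = np.tanh
    df = lambda v: 1.0 - np.tanh(v) ** 2

    def train_step(x0, target, lr=0.01, relax_steps=100, dt=0.1):
        """One supervised step: clamp input and output, relax, update weights."""
        global W1, W2
        x1 = W1 @ f(x0)                  # initialise hidden nodes at their prediction
        x2 = target                      # output nodes clamped to the label
        for _ in range(relax_steps):     # inference: hidden nodes descend the energy
            e1 = x1 - W1 @ f(x0)         # local prediction error, hidden layer
            e2 = x2 - W2 @ f(x1)         # local prediction error, output layer
            x1 += dt * (-e1 + df(x1) * (W2.T @ e2))
        # Hebbian updates: only the local error and the presynaptic rate appear
        W1 += lr * np.outer(e1, f(x0))
        W2 += lr * np.outer(e2, f(x1))

    train_step(rng.normal(size=4), np.array([0.5, -0.2, 0.1]))

For small step sizes and long relaxation, the resulting weight changes approach those computed by backpropagation, which is the papers' central claim.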
STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.
Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik
2012-05-10
Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. STEPS simulates models of cellular reaction-diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/
STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies
2012-01-01
Background: Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results: We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion: STEPS simulates models of cellular reaction–diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/ PMID:22574658
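STEPS extends the Gillespie stochastic simulation algorithm (SSA) with a composition-and-rejection selection scheme and inter-tetrahedron diffusion. The plain direct-method SSA at its core fits in a few lines of Python; this sketch is our illustration of the underlying method, not STEPS code, and the dimerisation example and rate constants are invented for the demo.

    import numpy as np

    rng = np.random.default_rng(1)

    def gillespie(x, stoich, propensities, t_end):
        """Direct-method SSA: x are species counts, stoich the per-reaction change rows."""
        t = 0.0
        while t < t_end:
            a = propensities(x)
            a0 = a.sum()
            if a0 == 0.0:
                break                            # no reaction can fire
            t += rng.exponential(1.0 / a0)       # time to the next event
            j = rng.choice(len(a), p=a / a0)     # which reaction fires
            x = x + stoich[j]
        return x

    # Example: reversible dimerisation 2A <-> B with invented rate constants
    stoich = np.array([[-2, 1], [2, -1]])
    k_f, k_b = 1e-3, 1e-2
    props = lambda x: np.array([k_f * x[0] * (x[0] - 1) / 2.0, k_b * x[1]])
    print(gillespie(np.array([1000, 0]), stoich, props, t_end=10.0))

The composition-rejection variant used by STEPS groups propensities so that selecting the next reaction costs far less than the linear scan implied by rng.choice above, which is where the reported efficiency in large systems comes from.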
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay; these systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, whether analytically or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We previously proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended here to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with little user intervention. Compared with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy for end users without a programming background to use for dynamic sensitivity analysis of complex biological systems with time-delays.
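The automatic differentiation embedded in the extended algorithm avoids both symbolic manipulation and finite differencing when building the Jacobian. Forward-mode AD can be demonstrated in a few lines with dual numbers; this toy class is our illustration of the general technique, not the authors' implementation.

    class Dual:
        """Forward-mode automatic differentiation via dual numbers a + b*eps."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    # derivative of f(x) = x * (x + 3) at x = 2, exact to machine precision
    x = Dual(2.0, 1.0)        # seed the input derivative with 1
    y = x * (x + 3)
    print(y.val, y.der)       # 10.0 7.0

Each arithmetic operation propagates the derivative alongside the value, so Jacobian entries come out exactly, with no truncation error and no hand-coded partials.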
Swimming efficiency in a shear-thinning fluid
NASA Astrophysics Data System (ADS)
Nganguia, Herve; Pietrzyk, Kyle; Pak, On Shun
2017-12-01
Micro-organisms expend energy moving through complex media. While propulsion speed is an important property of locomotion, efficiency is another factor that may determine the swimming gait adopted by a micro-organism in order to locomote in an energetically favorable manner. The efficiency of swimming in a Newtonian fluid is well characterized for different biological and artificial swimmers. However, these swimmers often encounter biological fluids displaying shear-thinning viscosities. Little is known about how this nonlinear rheology influences the efficiency of locomotion. Does the shear-thinning rheology render swimming more efficient or less? How does the swimming efficiency depend on the propulsion mechanism of a swimmer and rheological properties of the surrounding shear-thinning fluid? In this work, we address these fundamental questions on the efficiency of locomotion in a shear-thinning fluid by considering the squirmer model as a general locomotion model to represent different types of swimmers. Our analysis reveals how the choice of surface velocity distribution on a squirmer may reduce or enhance the swimming efficiency. We determine optimal shear rates at which the swimming efficiency can be substantially enhanced compared with the Newtonian case. The nontrivial variations of swimming efficiency prompt questions on how micro-organisms may tune their swimming gaits to exploit the shear-thinning rheology. The findings also provide insights into how artificial swimmers should be designed to move through complex media efficiently.
NASA Astrophysics Data System (ADS)
Xiang, Hong-Jun; Zhang, Zhi-Wei; Shi, Zhi-Fei; Li, Hong
2018-04-01
A fully coupled modeling approach is developed for piezoelectric energy harvesters in this work, based on available robust finite element packages and efficient reduced-order modeling techniques. First, the harvester is modeled using a finite element package. The dynamic equilibrium equations of the harvester are rebuilt by extracting system matrices from the finite element model using built-in commands, without any additional tools. A Krylov subspace-based scheme is then applied to obtain a reduced-order model that improves simulation efficiency while preserving the key features of the harvester. Co-simulation of the reduced-order model with nonlinear energy harvesting circuits is achieved at the system level. Several examples, covering both harmonic response and transient response analysis, are conducted to validate the present approach. The proposed approach improves simulation efficiency by several orders of magnitude. Moreover, the parameters used in the equivalent circuit model can be conveniently obtained by the proposed eigenvector-based model order reduction technique. More importantly, this work establishes a methodology for modeling piezoelectric energy harvesters with arbitrarily complicated mechanical geometries and nonlinear circuits; the input load may also be more complex. The method can be employed by harvester designers to optimize mechanical structures or by circuit designers to develop novel energy harvesting circuits.
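The core of the Krylov-based reduction step is the Arnoldi process, which builds an orthonormal basis for the subspace onto which the large finite element matrices are projected. The sketch below is ours; the random system stands in for matrices exported from an FE package, and it shows the projection step only.

    import numpy as np

    def arnoldi(A, b, m):
        """Orthonormal basis V for the Krylov subspace span{b, Ab, ..., A^(m-1)b}."""
        V = np.zeros((len(b), m))
        V[:, 0] = b / np.linalg.norm(b)
        for j in range(m - 1):
            w = A @ V[:, j]
            for i in range(j + 1):               # modified Gram-Schmidt
                w -= (V[:, i] @ w) * V[:, i]
            V[:, j + 1] = w / np.linalg.norm(w)
        return V

    # Project a large first-order system x' = A x + B u onto m dimensions
    n, m = 500, 10
    rng = np.random.default_rng(2)
    A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))
    B = rng.normal(size=n)
    V = arnoldi(A, B, m)
    A_r, B_r = V.T @ A @ V, V.T @ B              # reduced-order system matrices

The reduced pair (A_r, B_r) preserves the leading moments of the transfer function, which is why the reduced model can be co-simulated with nonlinear circuits at a small fraction of the original cost.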
Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W
2015-01-01
Chemical synapses are composed of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics represented in the original mechanistic model, and it provides a method to replicate complex and diverse synaptic transmission within neuron network simulations.
Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.
2015-01-01
Chemical synapses are composed of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics represented in the original mechanistic model, and it provides a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
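A discrete Volterra series truncated at low order makes the input-output idea concrete. The kernels below are invented toy values; in the papers above, the kernels are identified from the detailed mechanistic synapse model rather than chosen by hand.

    import numpy as np

    def volterra2(u, h0, h1, h2):
        """Second-order discrete Volterra series: h1 has length M, h2 is M x M."""
        M = len(h1)
        y = np.zeros(len(u))
        for n in range(len(u)):
            past = u[max(0, n - M + 1): n + 1][::-1]    # u[n], u[n-1], ...
            past = np.pad(past, (0, M - len(past)))     # zero-pad early samples
            y[n] = h0 + h1 @ past + past @ h2 @ past
        return y

    M = 8
    h1 = np.exp(-np.arange(M) / 2.0)    # toy first-order (linear) memory kernel
    h2 = 0.05 * np.outer(h1, h1)        # toy second-order (nonlinear) kernel
    u = np.random.default_rng(3).normal(size=50)
    y = volterra2(u, 0.0, h1, h2)

The quadratic term is what captures interactions between nearby input spikes, such as facilitation-like effects, that a single exponential kernel cannot represent.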
Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.
Will, Sebastian; Jabbari, Hosna
2016-01-01
RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible by trivial extension of the method for simple energy models. We then present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold, which guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is generalizable to complex prediction algorithms; due to their high space demands, algorithms such as pseudoknot prediction and RNA-RNA interaction prediction are expected to profit even more strongly than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.
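For readers unfamiliar with the underlying dynamic program, the classic (unsparsified) Nussinov base-pair-maximisation recursion below shows the cubic-time, quadratic-space baseline that sparsification attacks; full MFE folding replaces the pair count with a thermodynamic energy model but keeps the same DP skeleton. This sketch is illustrative and is not SparseMFEFold code.

    def nussinov(seq, min_loop=3):
        """Maximise the number of base pairs (a simple stand-in for MFE folding)."""
        pair = {("A","U"),("U","A"),("G","C"),("C","G"),("G","U"),("U","G")}
        n = len(seq)
        D = [[0] * n for _ in range(n)]
        for span in range(min_loop + 1, n):
            for i in range(n - span):
                j = i + span
                best = D[i][j - 1]                      # j unpaired
                for k in range(i, j - min_loop):        # j paired with k
                    if (seq[k], seq[j]) in pair:
                        left = D[i][k - 1] if k > i else 0
                        best = max(best, left + 1 + D[k + 1][j - 1])
                D[i][j] = best
        return D[0][n - 1]

    print(nussinov("GGGAAAUCC"))   # small hairpin example

Sparsification prunes the inner k loop to "candidate" positions, and the trace-arrow garbage collection described above is what allows most of the quadratic matrix to be discarded without losing the ability to reconstruct the optimal structure.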
Computationally efficient algorithm for Gaussian Process regression in case of structured samples
NASA Astrophysics Data System (ADS)
Belyaev, M.; Burnaev, E.; Kapushev, Y.
2016-04-01
Surrogate modeling is widely used in many engineering problems. Data sets often have a Cartesian product structure (for instance, factorial design of experiments with missing points), in which case the size of the data set can be very large. One of the most popular approximation algorithms, Gaussian Process regression, can then hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and operations with tensors. The proposed algorithm has low computational as well as memory complexity compared to existing algorithms. In this work we also introduce a regularization procedure that takes into account anisotropy of the data set and avoids degeneracy of the regression model.
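The standard way to exploit a Cartesian product design is the Kronecker identity: if the full kernel matrix factors as K = K1 (x) K2, then (K + s2*I)^-1 can be applied using only the eigendecompositions of the small factor matrices. The sketch below illustrates that trick on a complete grid; the paper's algorithm additionally handles missing points and anisotropy-aware regularization, which are not shown.

    import numpy as np

    def rbf(x, ell=0.5):
        return np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)

    x1 = np.linspace(0, 1, 40)          # grid along the first factor
    x2 = np.linspace(0, 1, 30)          # grid along the second factor
    K1, K2 = rbf(x1), rbf(x2)
    s2 = 1e-4                           # noise variance
    Y = np.sin(4 * x1)[:, None] * np.cos(3 * x2)[None, :]   # toy responses on the grid

    # Solve (K1 (x) K2 + s2 I) vec(alpha) = vec(Y) via the factor eigensystems
    L1, Q1 = np.linalg.eigh(K1)
    L2, Q2 = np.linalg.eigh(K2)
    T = Q1.T @ Y @ Q2                    # rotate into the joint eigenbasis
    T /= L1[:, None] * L2[None, :] + s2  # divide by the eigenvalues of K + s2 I
    alpha = Q1 @ T @ Q2.T                # rotate back; alpha keeps the grid shape

The cost drops from cubic in the full grid size n1*n2 to roughly cubic in each factor separately, which is what makes GP regression feasible on large factorial designs.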
2016-01-01
Muscle contractions are generated by cyclical interactions of myosin heads with actin filaments to form the actomyosin complex. To simulate actomyosin complex stable states, mathematical models usually define an energy landscape with a corresponding number of wells. The jumps between these wells are defined through rate constants. Almost all previous models assign these wells an infinite sharpness by imposing a relatively simple expression for the detailed balance, i.e., the ratio of the rate constants depends exponentially on the sole myosin elastic energy. Physically, this assumption corresponds to neglecting thermal fluctuations in the actomyosin complex stable states. By comparing three mathematical models, we examine the extent to which this hypothesis affects muscle model predictions at the single cross-bridge, single fiber, and organ levels in a ceteris paribus analysis. We show that including fluctuations in stable states allows the lever arm of the myosin to easily and dynamically explore all possible minima in the energy landscape, generating several backward and forward jumps between states during the lifetime of the actomyosin complex, whereas the infinitely sharp minima case is characterized by fewer jumps between states. Moreover, the analysis predicts that thermal fluctuations enable a more efficient contraction mechanism, in which a higher force is sustained by fewer attached cross-bridges. PMID:27626630
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H.; O'Donnell, Cian; Sejnowski, Terrence J.; ...
2016-12-30
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’. Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology-dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. In conclusion, these findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons.
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Alex H.; O'Donnell, Cian; Sejnowski, Terrence J.
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’. Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology-dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. In conclusion, these findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons.
A shallow water model for the propagation of tsunami via Lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Zergani, Sara; Aziz, Z. A.; Viswanathan, K. K.
2015-01-01
An efficient implementation of the lattice Boltzmann method (LBM) for the numerical simulation of the propagation of long ocean waves (e.g. tsunami), based on the nonlinear shallow water (NSW) wave equation, is presented. The LBM is an alternative numerical procedure for the description of incompressible hydrodynamics and has the potential to serve as an efficient solver for incompressible flows in complex geometries. This work proposes the NSW equations for irrotational surface waves in the case of complex bottom elevation. Shallow water equations are now the norm in tsunami modelling, including estimation of the propagation zone. Several test cases are presented to verify our model. Some implications for tsunami wave modelling are also discussed. Numerical results are found to be in excellent agreement with theory.
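A minimal one-dimensional lattice Boltzmann solver for the shallow water equations fits in a few lines and shows the stream-and-collide structure the abstract refers to. The D1Q3 equilibrium below is the standard shallow-water choice; the parameters are illustrative lattice units, not values from the paper.

    import numpy as np

    nx, e, g, tau = 200, 1.0, 0.001, 0.8          # lattice units, illustrative
    h = np.ones(nx)
    h[nx // 2 - 10: nx // 2 + 10] = 1.1           # initial hump (a toy "tsunami")
    u = np.zeros(nx)

    def feq(h, u):
        """Equilibrium distributions reproducing mass, momentum and g*h^2/2."""
        f = np.empty((3, nx))
        f[1] = g * h**2 / (4 * e**2) + h * u / (2 * e) + h * u**2 / (2 * e**2)
        f[2] = g * h**2 / (4 * e**2) - h * u / (2 * e) + h * u**2 / (2 * e**2)
        f[0] = h - f[1] - f[2]
        return f

    f = feq(h, u)
    for _ in range(500):
        f -= (f - feq(h, u)) / tau                # BGK collision
        f[1] = np.roll(f[1], 1)                   # stream right-movers
        f[2] = np.roll(f[2], -1)                  # stream left-movers (periodic)
        h = f.sum(axis=0)                         # recover water depth
        u = e * (f[1] - f[2]) / h                 # recover depth-averaged velocity

Complex bathymetry enters through an extra force term in the collision step, which is the part the paper develops for irregular bottom elevation.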
Reactome graph database: Efficient access to complex pathway data
Korninger, Florian; Viteri, Guilherme; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D’Eustachio, Peter
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types. PMID:29377902
Reactome graph database: Efficient access to complex pathway data.
Fabregat, Antonio; Korninger, Florian; Viteri, Guilherme; Sidiropoulos, Konstantinos; Marin-Garcia, Pablo; Ping, Peipei; Wu, Guanming; Stein, Lincoln; D'Eustachio, Peter; Hermjakob, Henning
2018-01-01
Reactome is a free, open-source, open-data, curated and peer-reviewed knowledgebase of biomolecular pathways. One of its main priorities is to provide easy and efficient access to its high quality curated data. At present, biological pathway databases typically store their contents in relational databases. This limits access efficiency because there are performance issues associated with queries traversing highly interconnected data. The same data in a graph database can be queried more efficiently. Here we present the rationale behind the adoption of a graph database (Neo4j) as well as the new ContentService (REST API) that provides access to these data. The Neo4j graph database and its query language, Cypher, provide efficient access to the complex Reactome data model, facilitating easy traversal and knowledge discovery. The adoption of this technology greatly improved query efficiency, reducing the average query time by 93%. The web service built on top of the graph database provides programmatic access to Reactome data by object oriented queries, but also supports more complex queries that take advantage of the new underlying graph-based data storage. By adopting graph database technology we are providing a high performance pathway data resource to the community. The Reactome graph database use case shows the power of NoSQL database engines for complex biological data types.
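A typical interaction with the graph database goes through Cypher. The snippet below uses the official Python driver; the connection details are placeholders, and while the labels and properties follow the published Reactome graph schema (Pathway, hasEvent, ReactionLikeEvent, displayName, speciesName), treat the exact query as an illustration rather than a supported example.

    from neo4j import GraphDatabase   # pip install neo4j

    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "password"))  # placeholders

    query = """
    MATCH (p:Pathway {speciesName: 'Homo sapiens'})-[:hasEvent]->(r:ReactionLikeEvent)
    RETURN p.displayName AS pathway, count(r) AS reactions
    ORDER BY reactions DESC LIMIT 5
    """

    with driver.session() as session:
        for record in session.run(query):
            print(record["pathway"], record["reactions"])
    driver.close()

A relational version of this query would need several joins across event tables; in the graph model it is a single relationship traversal, which illustrates why traversal-heavy queries benefit from the graph storage described above.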
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2012-01-01
Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, the vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators, which substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446
A Radial Basis Function Approach to Financial Time Series Analysis
1993-12-01
including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data... collection of practical techniques to address these issues for a modeling methodology, Radial Basis Function networks. These techniques include efficient... methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes
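The record above is fragmentary, but the core construction it refers to, a radial basis function network fit by regularised least squares, is easy to sketch. Everything below (centres, width, penalty) is an invented toy setup; controlling the number of centres is one simple handle on the complexity-versus-reliability trade-off the fragment mentions.

    import numpy as np

    def rbf_design(x, centers, width):
        return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / width) ** 2)

    def rbf_fit(x, y, centers, width, lam=1e-6):
        """Ridge-regularised least-squares weights for fixed Gaussian centres."""
        Phi = rbf_design(x, centers, width)
        return np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centers)), Phi.T @ y)

    rng = np.random.default_rng(4)
    x = np.sort(rng.uniform(-1, 1, 200))
    y = np.sin(5 * x) + 0.1 * rng.normal(size=200)   # stand-in for a return series
    centers = np.linspace(-1, 1, 15)                 # model-complexity knob
    w = rbf_fit(x, y, centers, width=0.15)
    y_hat = rbf_design(x, centers, 0.15) @ w         # in-sample predictions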
NASA Astrophysics Data System (ADS)
Bezruchko, Konstantin; Davidov, Albert
2009-01-01
This article describes a scientific and technical complex for the modeling, research, and testing of rocket-space vehicles' power installations, created in the Power Source Laboratory of the National Aerospace University "KhAI". The complex makes it possible to replace full-scale tests with model tests and to reduce the financial and time costs of modeling, researching and testing rocket-space vehicles' power installations. Using the complex, problems of designing and researching rocket-space vehicles' power installations can be solved efficiently, and experimental studies of physical processes and tests of solar and chemical batteries of rocket-space complexes and space vehicles can be carried out. The complex also supports accelerated testing, diagnostics, life-time control and restoration of chemical accumulators for rocket-space vehicles' power supply systems.
NASA Astrophysics Data System (ADS)
Zhang, Ning; Du, Yunsong; Miao, Shiguang; Fang, Xiaoyi
2016-08-01
The simulation performance over complex building clusters of a wind simulation model (Wind Information Field Fast Analysis model, WIFFA) in a micro-scale air pollutant dispersion model system (Urban Microscale Air Pollution dispersion Simulation model, UMAPS) is evaluated using various wind tunnel experimental data including the CEDVAL (Compilation of Experimental Data for Validation of Micro-Scale Dispersion Models) wind tunnel experiment data and the NJU-FZ experiment data (Nanjing University-Fang Zhuang neighborhood wind tunnel experiment data). The results show that the wind model can reproduce the vortexes triggered by urban buildings well, and the flow patterns in urban street canyons and building clusters can also be represented. Due to the complex shapes of buildings and their distributions, the simulation deviations/discrepancies from the measurements are usually caused by the simplification of the building shapes and the determination of the key zone sizes. The computational efficiencies of different cases are also discussed in this paper. The model has a high computational efficiency compared to traditional numerical models that solve the Navier-Stokes equations, and can produce very high-resolution (1-5 m) wind fields of a complex neighborhood-scale urban building canopy (~1 km × 1 km) in less than 3 min when run on a personal computer.
Selection Metric for Photovoltaic Materials Screening Based on Detailed-Balance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blank, Beatrix; Kirchartz, Thomas; Lany, Stephan
The success of recently discovered absorber materials for photovoltaic applications has been generating increasing interest in systematic materials screening over the last years. However, the key for a successful materials screening is a suitable selection metric that goes beyond the Shockley-Queisser theory that determines the thermodynamic efficiency limit of an absorber material solely by its band-gap energy. Here, we develop a selection metric to quantify the potential photovoltaic efficiency of a material. Our approach is compatible with detailed balance and applicable in computational and experimental materials screening. We use the complex refractive index to calculate radiative and nonradiative efficiency limits and the respective optimal thickness in the high mobility limit. We also compare our model to the widely applied selection metric by Yu and Zunger [Phys. Rev. Lett. 108, 068701 (2012)] with respect to their dependence on thickness, internal luminescence quantum efficiency, and refractive index. Finally, the model is applied to complex refractive indices calculated via electronic structure theory.
Selection Metric for Photovoltaic Materials Screening Based on Detailed-Balance Analysis
Blank, Beatrix; Kirchartz, Thomas; Lany, Stephan; ...
2017-08-31
The success of recently discovered absorber materials for photovoltaic applications has been generating increasing interest in systematic materials screening over the last years. However, the key for a successful materials screening is a suitable selection metric that goes beyond the Shockley-Queisser theory that determines the thermodynamic efficiency limit of an absorber material solely by its band-gap energy. Here, we develop a selection metric to quantify the potential photovoltaic efficiency of a material. Our approach is compatible with detailed balance and applicable in computational and experimental materials screening. We use the complex refractive index to calculate radiative and nonradiative efficiency limits and the respective optimal thickness in the high mobility limit. We also compare our model to the widely applied selection metric by Yu and Zunger [Phys. Rev. Lett. 108, 068701 (2012)] with respect to their dependence on thickness, internal luminescence quantum efficiency, and refractive index. Finally, the model is applied to complex refractive indices calculated via electronic structure theory.
Modeling electrokinetic flows by consistent implicit incompressible smoothed particle hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro
2017-04-01
We present an efficient implicit incompressible smoothed particle hydrodynamics (I2SPH) discretization of Navier-Stokes, Poisson-Boltzmann, and advection-diffusion equations subject to Dirichlet or Robin boundary conditions. It is applied to model various two- and three-dimensional electrokinetic flows in simple or complex geometries. The I2SPH's accuracy and convergence are examined via comparison with analytical solutions, grid-based numerical solutions, or empirical models. The new method provides a framework to explore broader applications of SPH in microfluidics and complex fluids with charged objects, such as colloids and biomolecules, in arbitrary complex geometries.
Efficient embedding of complex networks to hyperbolic space via their Laplacian
Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.
2016-01-01
The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth sustains that the emergence of properties common to most complex systems is the result of certain trade-offs between node birth-time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the afore-mentioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction. PMID:27445157
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1985-01-01
The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to the reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general purpose software system is emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.
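The fixed-interface reduction named in this abstract is commonly known as the Craig-Bampton method: interior degrees of freedom are replaced by a handful of fixed-interface normal modes plus static constraint modes, while boundary DOFs stay physical so substructures can be assembled. A compact numpy/scipy sketch of the transformation (ours, for illustration) follows.

    import numpy as np
    from scipy.linalg import eigh

    def craig_bampton(K, M, i, b, n_modes):
        """Fixed-interface reduction of one substructure.
        i, b: index arrays of interior and boundary DOFs in K and M."""
        Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
        Psi = -np.linalg.solve(Kii, Kib)            # static constraint modes
        w2, Phi = eigh(Kii, M[np.ix_(i, i)])        # fixed-interface normal modes
        Phi = Phi[:, :n_modes]                      # keep the lowest n_modes
        n, nb = K.shape[0], len(b)
        T = np.zeros((n, nb + n_modes))             # reduction basis
        T[np.ix_(b, range(nb))] = np.eye(nb)        # boundary DOFs kept physical
        T[np.ix_(i, range(nb))] = Psi
        T[np.ix_(i, range(nb, nb + n_modes))] = Phi
        return T.T @ K @ T, T.T @ M @ T             # reduced stiffness and mass

Because the boundary DOFs survive reduction unchanged, reduced substructures can themselves be treated as superelements and reduced again, which is exactly the multilevel nesting the abstract describes.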
Efficient embedding of complex networks to hyperbolic space via their Laplacian
NASA Astrophysics Data System (ADS)
Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.
2016-07-01
The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth sustains that the emergence of properties common to most complex systems is the result of certain trade-offs between node birth-time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the afore-mentioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction.
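The Laplacian-based embedding admits a very short implementation. The sketch below follows the general Laplacian eigenmaps recipe that the method builds on (angular coordinates from the leading non-trivial eigenvectors; in the hyperbolic model, radial coordinates then come from node degree); it is our illustration, not the authors' released code.

    import numpy as np
    import networkx as nx

    G = nx.karate_club_graph()                        # any test network
    A = nx.to_numpy_array(G)
    d = A.sum(axis=1)
    L = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))  # normalised Laplacian
    _, vecs = np.linalg.eigh(L)
    coords = vecs[:, 1:3]                             # skip the trivial eigenvector
    theta = np.arctan2(coords[:, 1], coords[:, 0])    # angular coordinate per node

Because only a few extremal eigenvectors are needed, sparse eigensolvers make this scale to large networks, which is the efficiency advantage claimed over likelihood-maximising embeddings.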
COMPREHENSIVE PBPK MODELING APPROACH USING THE EXPOSURE RELATED DOSE ESTIMATING MODEL (ERDEM)
ERDEM, a complex PBPK modeling system, is the result of the implementation of a comprehensive PBPK modeling approach. ERDEM provides a scalable and user-friendly environment that enables researchers to focus on data input values rather than writing program code. It efficiently ...
Harel, Elad; Engel, Gregory S
2012-01-17
Light-harvesting antenna complexes transfer energy from sunlight to photosynthetic reaction centers where charge separation drives cellular metabolism. The process through which pigments transfer excitation energy involves a complex choreography of coherent and incoherent processes mediated by the surrounding protein and solvent environment. The recent discovery of coherent dynamics in photosynthetic light-harvesting antennae has motivated many theoretical models exploring effects of interference in energy transfer phenomena. In this work, we provide experimental evidence of long-lived quantum coherence between the spectrally separated B800 and B850 rings of the light-harvesting complex 2 (LH2) of purple bacteria. Spectrally resolved maps of the detuning, dephasing, and the amplitude of electronic coupling between excitons reveal that different relaxation pathways act in concert for optimal transfer efficiency. Furthermore, maps of the phase of the signal suggest that quantum mechanical interference between different energy transfer pathways may be important even at ambient temperature. Such interference at a product state has already been shown to enhance the quantum efficiency of transfer in theoretical models of closed loop systems such as LH2.
Harel, Elad; Engel, Gregory S.
2012-01-01
Light-harvesting antenna complexes transfer energy from sunlight to photosynthetic reaction centers where charge separation drives cellular metabolism. The process through which pigments transfer excitation energy involves a complex choreography of coherent and incoherent processes mediated by the surrounding protein and solvent environment. The recent discovery of coherent dynamics in photosynthetic light-harvesting antennae has motivated many theoretical models exploring effects of interference in energy transfer phenomena. In this work, we provide experimental evidence of long-lived quantum coherence between the spectrally separated B800 and B850 rings of the light-harvesting complex 2 (LH2) of purple bacteria. Spectrally resolved maps of the detuning, dephasing, and the amplitude of electronic coupling between excitons reveal that different relaxation pathways act in concert for optimal transfer efficiency. Furthermore, maps of the phase of the signal suggest that quantum mechanical interference between different energy transfer pathways may be important even at ambient temperature. Such interference at a product state has already been shown to enhance the quantum efficiency of transfer in theoretical models of closed loop systems such as LH2. PMID:22215585
CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.
2003-01-01
A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified and tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernel between the forward and inverse mesh is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability is obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled-source EM data generated for a complex 3D offshore model with significant seafloor topography.
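Stripped of the finite element machinery, one Occam-style regularised Gauss-Newton update has a compact form. The function below is a schematic of the model-update step only (forward, jacobian and the roughness operator R are supplied elsewhere; in MARE3DEM the Jacobian comes from the adjoint-reciprocity computation described above), and in a true Occam scheme the trade-off parameter mu is swept rather than fixed.

    import numpy as np

    def occam_step(m, d_obs, forward, jacobian, R, mu):
        """One regularised Gauss-Newton update for min ||d - F(m)||^2 + mu*||R m||^2."""
        J = jacobian(m)                 # sensitivity kernels
        r = d_obs - forward(m)          # data residual
        A = J.T @ J + mu * (R.T @ R)    # regularised normal equations
        return m + np.linalg.solve(A, J.T @ r - mu * (R.T @ R) @ m)

The dense products such as J.T @ J are the part the paper parallelises with ScaLAPACK, since for large surveys the Jacobian has one row per datum and one column per inversion cell.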
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry needs accurate and efficient computational methods, and different techniques are available to handle such geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. The grid generation can be tedious, and one needs to pay special attention to the numerics to handle skewed cells for conservation purposes. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
Multi-level emulation of complex climate model responses to boundary forcing data
NASA Astrophysics Data System (ADS)
Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter
2018-04-01
Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low-complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model GENIE-1 was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
Najafpour, Mohammad Mahdi; Ghobadi, Mohadeseh Zarei; Sarvi, Bahram; Haghighi, Behzad
2015-09-14
Synthesis of new efficient catalysts inspired by Nature is a key goal in the production of clean fuel. Different compounds based on manganese oxide have been investigated in order to find their water-oxidation activity. Herein, we introduce a novel engineered polypeptide containing tyrosine around nano-sized manganese-calcium oxide, which was shown to be a highly active catalyst toward water oxidation at low overpotential (240 mV), with a high turnover frequency of 1.5 × 10^-2 s^-1 at pH = 6.3 in the Mn(III)/Mn(IV) oxidation range. The compound is a novel structural and efficient functional model for the water-oxidizing complex in Photosystem II. A new proposed clever strategy used by Nature in water oxidation is also discussed. The new model of the water-oxidizing complex opens a new perspective for the synthesis of efficient water-oxidation catalysts.
Singh, Gurpreet; Ravi, Koustuban; Wang, Qian; Ho, Seng-Tiong
2012-06-15
A complex-envelope (CE) alternating-direction-implicit (ADI) finite-difference time-domain (FDTD) approach to treat light-matter interaction self-consistently with electromagnetic field evolution, for efficient simulations of active photonic devices, is presented for the first time (to the best of our knowledge). The active medium (AM) is modeled using an efficient multilevel system of carrier rate equations to yield the correct carrier distributions, suitable for modeling semiconductor/solid-state media accurately. To include the AM in the CE-ADI-FDTD method, a first-order differential system involving CE fields in the AM is first set up. The system matrix that includes AM parameters is then split into two time-dependent submatrices that are then used in an efficient ADI splitting formula. The proposed CE-ADI-FDTD approach with AM takes 22% of the time of the corresponding explicit FDTD approach, as validated by semiconductor microdisk laser simulations.
The implementation of a comprehensive PBPK modeling approach resulted in ERDEM, a complex PBPK modeling system. ERDEM provides a scalable and user-friendly environment that enables researchers to focus on data input values rather than writing program code. ERDEM efficiently m...
Prospects of application of additive technologies for increasing the efficiency of impeller machines
NASA Astrophysics Data System (ADS)
Belova, O. V.; Borisov, Yu. A.
2017-08-01
An impeller machine is a device in which the flow path supplies mechanical energy to (or extracts it from) the flow of a working fluid passing through the machine. To increase the efficiency of impeller machines, it is necessary to use modern design technologies, namely numerical methods for research in gas dynamics, as well as additive manufacturing (AM) for producing both prototypes and production models. AM technologies are deservedly called revolutionary because they offer a unique possibility for manufacturing products with perfect forms, both light and durable. Designers face the challenge of developing a new design methodology, since AM allows the use of the concept of "Complexity For Free". The "Complexity For Free" concept is based on: complexity of form; hierarchical complexity; complexity of material; and functional complexity. A new method of designing technical items according to a functional principle is also investigated.
Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a problem is therefore very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represent the future data as interval numbers and simplify the objective function to a linear programming problem to find feasible, preliminary solutions and construct the Pareto set. Then the simulated annealing method is used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision. PMID:24895663
Ren, Kun; Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a problem is therefore very difficult. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represent the future data as interval numbers and simplify the objective function to a linear programming problem to find feasible, preliminary solutions and construct the Pareto set. Then the simulated annealing method is used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performs reasonably well in terms of both operating efficiency and precision.
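The second optimisation step is a standard simulated annealing loop, which is easy to sketch generically. The objective below is an invented toy with a feasibility penalty, standing in for the dispatch model; the cooling schedule and step size are arbitrary.

    import numpy as np

    rng = np.random.default_rng(6)

    def simulated_annealing(f, x0, steps=5000, T0=1.0, cooling=0.999, scale=0.1):
        """Generic minimiser: accept uphill moves with probability exp(-dE/T)."""
        x, fx, T = x0, f(x0), T0
        for _ in range(steps):
            cand = x + scale * rng.normal(size=x.shape)
            fc = f(cand)
            if fc < fx or rng.random() < np.exp(-(fc - fx) / T):
                x, fx = cand, fc
            T *= cooling
        return x, fx

    # toy dispatch-like objective with a soft feasibility penalty (illustrative)
    f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2 + 10 * max(0.0, x.sum() - 1) ** 2
    print(simulated_annealing(f, np.zeros(2)))

In the paper's two-step scheme, the linear programming pass supplies the starting point and the Pareto set; annealing then refines the solution on the full nonlinear interval model.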
Fractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms
NASA Astrophysics Data System (ADS)
Yu, Yue; Perdikaris, Paris; Karniadakis, George Em
2016-10-01
We develop efficient numerical methods for fractional order PDEs, and employ them to investigate viscoelastic constitutive laws for arterial wall mechanics. Recent simulations using one-dimensional models [1] have indicated that fractional order models may offer a more powerful alternative for modeling the arterial wall response, exhibiting reduced sensitivity to parametric uncertainties compared with the integer-calculus-based models. Here, we study three-dimensional (3D) fractional PDEs that naturally model the continuous relaxation properties of soft tissue, and for the first time employ them to simulate flow structure interactions for patient-specific brain aneurysms. To deal with the high memory requirements and in order to accelerate the numerical evaluation of hereditary integrals, we employ a fast convolution method [2] that reduces the memory cost to O(log(N)) and the computational complexity to O(N log(N)). Furthermore, we combine the fast convolution with high-order backward differentiation to achieve third-order time integration accuracy. We confirm that in 3D viscoelastic simulations, the integer order models depend strongly on the relaxation parameters, while the fractional order models are less sensitive. As an application to long-time simulations in complex geometries, we also apply the method to modeling fluid-structure interaction of a 3D patient-specific compliant cerebral artery with an aneurysm. Taken together, our findings demonstrate that fractional calculus can be employed effectively in modeling complex behavior of materials in realistic 3D time-dependent problems if properly designed efficient algorithms are employed to overcome the extra memory requirements and computational complexity associated with the non-local character of fractional derivatives.
Fractional modeling of viscoelasticity in 3D cerebral arteries and aneurysms
Perdikaris, Paris; Karniadakis, George Em
2017-01-01
We develop efficient numerical methods for fractional order PDEs, and employ them to investigate viscoelastic constitutive laws for arterial wall mechanics. Recent simulations using one-dimensional models [1] have indicated that fractional order models may offer a more powerful alternative for modeling the arterial wall response, exhibiting reduced sensitivity to parametric uncertainties compared with the integer-calculus-based models. Here, we study three-dimensional (3D) fractional PDEs that naturally model the continuous relaxation properties of soft tissue, and for the first time employ them to simulate flow structure interactions for patient-specific brain aneurysms. To deal with the high memory requirements and in order to accelerate the numerical evaluation of hereditary integrals, we employ a fast convolution method [2] that reduces the memory cost to O(log(N)) and the computational complexity to O(N log(N)). Furthermore, we combine the fast convolution with high-order backward differentiation to achieve third-order time integration accuracy. We confirm that in 3D viscoelastic simulations, the integer order models depend strongly on the relaxation parameters, while the fractional order models are less sensitive. As an application to long-time simulations in complex geometries, we also apply the method to modeling fluid–structure interaction of a 3D patient-specific compliant cerebral artery with an aneurysm. Taken together, our findings demonstrate that fractional calculus can be employed effectively in modeling complex behavior of materials in realistic 3D time-dependent problems if properly designed efficient algorithms are employed to overcome the extra memory requirements and computational complexity associated with the non-local character of fractional derivatives. PMID:29104310
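The hereditary integral that makes fractional viscoelasticity expensive can be seen in the Grünwald-Letnikov discretisation: the derivative at step n is a weighted sum over the entire history. The direct O(N^2) evaluation below is our illustration of what the paper's O(N log N) fast convolution replaces.

    import numpy as np

    def gl_weights(alpha, N):
        """Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), via recursion."""
        w = np.empty(N)
        w[0] = 1.0
        for k in range(1, N):
            w[k] = w[k - 1] * (k - 1 - alpha) / k
        return w

    def frac_deriv(y, alpha, dt):
        """Direct O(N^2) fractional derivative of order alpha on a uniform grid."""
        w = gl_weights(alpha, len(y))
        return np.array([w[:n + 1] @ y[n::-1] for n in range(len(y))]) / dt**alpha

    t = np.linspace(0, 1, 200)
    d = frac_deriv(t, 0.5, t[1] - t[0])
    # half-derivative of f(t) = t is 2*sqrt(t/pi); this first-order scheme
    # should track it to within O(dt)
    print(np.max(np.abs(d - 2 * np.sqrt(t / np.pi))))

Every new time step touches all previous ones, so work and memory grow with the history length; evaluating the history by blockwise fast convolution is what brings the cost down to O(N log N) in time and O(log N) in memory.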
NASA Astrophysics Data System (ADS)
Khalili, Ashkan; Jha, Ratneshwar; Samaratunga, Dulip
2016-11-01
Wave propagation analysis in 2-D composite structures is performed efficiently and accurately through the formulation of a User-Defined Element (UEL) based on the wavelet spectral finite element (WSFE) method. The WSFE method is based on the first-order shear deformation theory, which yields accurate results for wave motion at high frequencies. The 2-D WSFE model is highly efficient computationally and provides a direct relationship between system input and output in the frequency domain. The UEL is formulated and implemented in Abaqus (commercial finite element software) for wave propagation analysis in 2-D composite structures with complex features. The frequency domain formulation of WSFE leads to complex-valued parameters, which are decoupled into real and imaginary parts and presented to Abaqus as real values. The final solution is obtained by forming a complex value using the real number solutions given by Abaqus. Five numerical examples are presented in this article, namely an undamaged plate, an impacted plate, a plate with ply drop, a folded plate and a plate with a stiffener. Wave motions predicted by the developed UEL correlate very well with Abaqus simulations. The results also show that the UEL largely retains the computational efficiency of the WSFE method and extends its ability to model complex features.
Effects of human running cadence and experimental validation of the bouncing ball model
NASA Astrophysics Data System (ADS)
Bencsik, László; Zelei, Ambrus
2017-05-01
The biomechanical analysis of human running is a complex problem because of the large number of parameters and degrees of freedom. However, simplified models can be constructed, which are usually characterized by some fundamental parameters, like step length, foot strike pattern and cadence. The bouncing ball model of human running is analysed theoretically and experimentally in this work. It is a minimally complex dynamic model when the aim is to estimate the energy cost of running and the tendency of ground-foot impact intensity as a function of cadence. The model shows that cadence has a direct effect on the energy efficiency of running and on ground-foot impact intensity. Furthermore, it shows that higher cadence implies lower risk of injury and better energy efficiency. An experimental data collection of 121 amateur runners is presented. The experimental results validate the model and provide information about the walk-to-run transition speed and the typical development of cadence and grounded phase ratio in different running speed ranges.
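A generic bouncing-ball estimate makes the cadence effect concrete. The sketch below is not the authors' formulation: it treats each step as a ballistic flight, assumes a fixed grounded-phase ratio and full dissipation of vertical kinetic energy at foot strike (both assumptions are mine), and shows impact speed and dissipated power falling as cadence rises.

    import numpy as np

    G = 9.81

    def bouncing_ball_step(cadence_hz, grounded_ratio=0.35, mass_kg=70.0):
        """Crude bouncing-ball estimates for one running step.

        Assumptions (not from the paper): fixed grounded-phase ratio and
        fully dissipated vertical kinetic energy at each foot strike.
        """
        step_time = 1.0 / cadence_hz
        t_flight = (1.0 - grounded_ratio) * step_time
        v_impact = 0.5 * G * t_flight          # vertical speed at touchdown
        e_loss = 0.5 * mass_kg * v_impact**2   # energy dissipated per step
        power = e_loss * cadence_hz            # dissipation rate
        return v_impact, power

    for f in (2.4, 2.8, 3.2):                  # steps per second
        v, p = bouncing_ball_step(f)
        print(f"cadence {f:.1f} Hz: impact {v:.2f} m/s, power {p:.0f} W")

Because flight time shrinks as 1/cadence, the dissipated power in this caricature scales as 1/cadence, consistent with the reported tendency.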
Queueing Network Models for Parallel Processing of Task Systems: an Operational Approach
NASA Technical Reports Server (NTRS)
Mak, Victor W. K.
1986-01-01
Computer performance modeling of possibly complex computations running on highly concurrent systems is considered. Earlier works in this area either dealt with a very simple program structure or resulted in methods with exponential complexity. An efficient procedure is developed to compute the performance measures for series-parallel-reducible task systems using queueing network models. The procedure is based on the concept of hierarchical decomposition and a new operational approach. Numerical results for three test cases are presented and compared to those of simulations.
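The paper's hierarchical-decomposition procedure is not reproduced in the abstract; as a point of reference, the classical exact Mean Value Analysis (MVA) recursion, the standard way of evaluating closed product-form queueing networks, fits in a few lines (the service demands below are illustrative).

    def mva(service_demands, n_jobs):
        """Exact Mean Value Analysis for a closed product-form network.

        service_demands: mean visit-weighted demand D_k per station.
        Returns system throughput and per-station mean queue lengths.
        """
        q = [0.0] * len(service_demands)
        x = 0.0
        for n in range(1, n_jobs + 1):
            # residence time at station k with n jobs in the network
            r = [d * (1.0 + qk) for d, qk in zip(service_demands, q)]
            x = n / sum(r)                    # system throughput
            q = [x * rk for rk in r]          # Little's law per station
        return x, q

    throughput, queues = mva([0.05, 0.02, 0.08], n_jobs=10)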
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr
2016-03-01
The overall objective of this project was to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems whose performance relies on underlying multiscale mathematics, and at developing rigorous mathematical techniques and computational algorithms to study such models. Our specific application lies at the heart of the biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals.
Foundations for Streaming Model Transformations by Complex Event Processing.
Dávid, István; Ráth, István; Varró, Dániel
2018-01-01
Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with a rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP) and reactive (event-driven) transformation techniques. Complex event processing makes it possible to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations, together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow, and one in the context of on-the-fly gesture recognition.
Hezaveh, Samira; Zeng, An-Ping; Jandt, Uwe
2016-05-19
Targeted manipulation and exploitation of beneficial properties of multienzyme complexes, especially for the design of novel and efficiently structured enzymatic reaction cascades, require a solid model understanding of the mechanistic principles governing the structure and functionality of the complexes. This type of system-level and quantitative knowledge has been very scarce thus far. We utilize the human pyruvate dehydrogenase complex (hPDC) as a versatile template to conduct corresponding studies. Here we present new homology models of the core subunits of the hPDC, namely E2 and E3BP, as a first effort to elucidate the assembly of the hPDC core based on molecular dynamics simulation. New models of E2 and E3BP were generated and validated at the atomistic level for different properties of the proteins. The results of the wild-type dimer simulations showed a strong hydrophobic interaction between the C-terminal and the hydrophobic pocket, which is the main driving force in the intertrimer binding and the core self-assembly. In contrast, the C-terminal truncated versions exhibited a drastic loss of hydrophobic interaction leading to dimer separation. This study represents a significant step toward a model-based understanding of the structure and function of large multienzyme systems like PDC for developing highly efficient biocatalysts or bioreaction cascades.
Complex Moving Parts: Assessment Systems and Electronic Portfolios
ERIC Educational Resources Information Center
Larkin, Martha J.; Robertson, Royce L.
2013-01-01
The largest college within an online university of over 50,000 students invested significant resources in translating a complex assessment system focused on continuous improvement and national accreditation into an effective and efficient electronic portfolio (ePortfolio). The team building the system needed a model to address problems met…
NASA Astrophysics Data System (ADS)
Kim, Jae-Min; Yoo, Seung-Jun; Moon, Chang-Ki; Sim, Bomi; Lee, Jae-Hyun; Lim, Heeseon; Kim, Jeong Won; Kim, Jang-Joo
2016-09-01
Electrical doping is an important method in organic electronics to enhance device efficiency by controlling the Fermi level, increasing conductivity, and reducing the injection barrier from the electrode. To understand the charge generation process of a dopant in doped organic semiconductors, it is important to analyze the charge transfer complex (CTC) formation and its dissociation into free charge carriers. In this paper, we correlate charge generation efficiency with the CTC formation and dissociation efficiency of an n-dopant in organic semiconductors (OSs). The CTC formation efficiency of Rb2CO3 linearly decreases from 82.8% to 47.0% as the doping concentration increases from 2.5 mol% to 20 mol%. The CTC formation efficiency and its linear decrease with doping concentration are analytically correlated with the concentration-dependent size and number of dopant agglomerates by introducing the degree of reduced CTC formation. Lastly, the behavior of the dissociation efficiency is discussed based on the picture of statistical semiconductor theory and the frontier orbital hybridization model.
A novel scene management technology for complex virtual battlefield environment
NASA Astrophysics Data System (ADS)
Sheng, Changchong; Jiang, Libing; Tang, Bo; Tang, Xiaoan
2018-04-01
The efficient scene management of a virtual environment is an important research topic in real-time computer visualization and has a decisive influence on rendering efficiency. However, traditional scene management methods are not suitable for complex virtual battlefield environments. This paper combines the advantages of traditional scene graph technology and the spatial data structure method: using the idea of separating management from rendering, a loose object-oriented scene graph structure is established to manage the entity model data in the scene, and a performance-based quad-tree structure is created for traversal and rendering. In addition, a collaborative update relationship between the two structural trees is designed to achieve efficient scene management. Compared with previous scene management methods, this method is more efficient and meets the needs of real-time visualization.
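The rendering-side data structure mentioned here, a quad-tree queried against a view region, can be sketched generically. This is a minimal point-region quad-tree for culling, not the paper's performance-based variant, and the node capacity is an arbitrary choice.

    class QuadTree:
        """Minimal point-region quad-tree for view culling."""

        def __init__(self, x, y, size, capacity=8):
            self.x, self.y, self.size = x, y, size
            self.capacity, self.items, self.children = capacity, [], None

        def insert(self, px, py, payload):
            if self.children is None:
                self.items.append((px, py, payload))
                if len(self.items) > self.capacity:
                    self._split()
            else:
                self._child(px, py).insert(px, py, payload)

        def _split(self):
            h = self.size / 2
            self.children = [QuadTree(self.x + dx * h, self.y + dy * h, h,
                                      self.capacity)
                             for dy in (0, 1) for dx in (0, 1)]
            items, self.items = self.items, []
            for px, py, payload in items:
                self._child(px, py).insert(px, py, payload)

        def _child(self, px, py):
            h = self.size / 2
            return self.children[(px >= self.x + h) + 2 * (py >= self.y + h)]

        def query(self, qx, qy, qsize, out):
            """Collect payloads inside the query rectangle."""
            if qx > self.x + self.size or qx + qsize < self.x or \
               qy > self.y + self.size or qy + qsize < self.y:
                return
            out.extend(p for px, py, p in self.items
                       if qx <= px <= qx + qsize and qy <= py <= qy + qsize)
            if self.children:
                for c in self.children:
                    c.query(qx, qy, qsize, out)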
AAC Intervention as an Immersion Model
ERIC Educational Resources Information Center
Dodd, Janet L.; Gorey, Megan
2014-01-01
Augmentative and alternative communication based interventions support individuals with complex communication needs in becoming effective and efficient communicators. However, there is often a disconnect between language models, communication opportunities, and desired intervention outcomes in the intervention process. This article outlines a…
Turbulence model development and application at Lockheed Fort Worth Company
NASA Technical Reports Server (NTRS)
Smith, Brian R.
1995-01-01
This viewgraph presentation demonstrates that computationally efficient k-l and k-kl turbulence models have been developed and implemented at Lockheed Fort Worth Company. Many years of experience have been gained applying two equation turbulence models to complex three-dimensional flows for design and analysis.
A systems-based approach for integrated design of materials, products and design process chains
NASA Astrophysics Data System (ADS)
Panchal, Jitesh H.; Choi, Hae-Jin; Allen, Janet K.; McDowell, David L.; Mistree, Farrokh
2007-12-01
The concurrent design of materials and products provides designers with flexibility to achieve design objectives that were not previously accessible. However, the improved flexibility comes at a cost of increased complexity of the design process chains and the materials simulation models used for executing the design chains. Efforts to reduce the complexity generally result in increased uncertainty. We contend that a systems-based approach is essential for managing both the complexity and the uncertainty in design process chains and simulation models in concurrent material and product design. Our approach is based on simplifying the design process chains systematically such that the resulting uncertainty does not significantly affect the overall system performance. Similarly, instead of striving for accurate models for multiscale systems (that are inherently complex), we rely on making design decisions that are robust to uncertainties in the models. Accordingly, we pursue hierarchical modeling in the context of design of multiscale systems. In this paper our focus is on design process chains. We present a systems-based approach, premised on the assumption that complex systems can be designed efficiently by managing the complexity of design process chains. The approach relies on (a) the use of reusable interaction patterns to model design process chains, and (b) consideration of design process decisions using value-of-information-based metrics. The approach is illustrated using a Multifunctional Energetic Structural Material (MESM) design example. Energetic materials store considerable energy which can be released through shock-induced detonation; conventionally, they are not engineered for strength properties. The design objectives for the MESM in this paper include both sufficient strength and energy release characteristics. The design is carried out by using models at different length and time scales that simulate different aspects of the system. Finally, by applying the method to the MESM design problem, we show that the integrated design of materials and products can be carried out more efficiently by explicitly accounting for design process decisions with the hierarchy of models.
Variable Complexity Structural Optimization of Shells
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Venkataraman, Satchi
1999-01-01
Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure, to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide easy understanding of design decision trade-offs. Finally, designers can also use specialized programs suitable for efficiently designing a subset of structural problems. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-2110 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.
Variable Complexity Structural Optimization of Shells
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Venkataraman, Satchi
1998-01-01
Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure, to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide easy understanding of design decision trade-offs. Finally, designers can also use specialized programs suitable for efficiently designing a subset of structural problems. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-1808 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.
Evaluation of Supply Chain Efficiency Based on a Novel Network of Data Envelopment Analysis Model
NASA Astrophysics Data System (ADS)
Fu, Li Fang; Meng, Jun; Liu, Ying
2015-12-01
Performance evaluation of supply chains (SCs) is a vital topic in SC management and an inherently complex problem involving multilayered internal linkages and the activities of multiple entities. Recently, various Network Data Envelopment Analysis (NDEA) models, which opened the “black box” of conventional DEA, were developed and applied to evaluate complex SCs with a multilayer network structure. However, most of them are input- or output-oriented models which cannot take into consideration nonproportional changes of inputs and outputs simultaneously. This paper extends the slack-based measure (SBM) model to a nonradial, nonoriented network model named U-NSBM with the presence of undesirable outputs in the SC. A numerical example is presented to demonstrate the applicability of the model in quantifying the efficiency and ranking supply chain performance. By comparing with the CCR and U-SBM models, it is shown that the proposed model has higher distinguishing ability and gives feasible solutions in the presence of undesirable outputs. Meanwhile, it provides more insights for decision makers about the source of inefficiency as well as guidance to improve SC performance.
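For orientation, the radial CCR model that the proposed U-NSBM is compared against reduces to one small linear program per decision-making unit (DMU). A minimal input-oriented envelopment sketch with SciPy on invented data follows; the paper's nonradial, nonoriented network model additionally handles slacks, stage linkages and undesirable outputs.

    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, j0):
        """Input-oriented CCR efficiency of DMU j0 (envelopment form).

        X: (m inputs x n DMUs), Y: (s outputs x n DMUs).
        min theta  s.t.  X@lam <= theta*x0,  Y@lam >= y0,  lam >= 0
        """
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]            # z = [theta, lam_1..lam_n]
        A_ub = np.block([[-X[:, [j0]], X],     # X@lam - theta*x0 <= 0
                         [np.zeros((s, 1)), -Y]])  # -Y@lam <= -y0
        b_ub = np.r_[np.zeros(m), -Y[:, j0]]
        bounds = [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        return res.x[0]

    X = np.array([[4.0, 7.0, 8.0, 4.0], [3.0, 3.0, 1.0, 2.0]])
    Y = np.array([[1.0, 1.0, 1.0, 1.0]])
    print([round(ccr_efficiency(X, Y, j), 3) for j in range(X.shape[1])])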
Li, Zhenping; Zhang, Xiang-Sun; Wang, Rui-Sheng; Liu, Hongwei; Zhang, Shihua
2013-01-01
Identification of communities in complex networks is an important topic and issue in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies have started partitioning links to find overlapping communities straightforwardly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function we formulate the link community partition problem into an integer programming model which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks. PMID:24386268
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H; O'Donnell, Cian; Sejnowski, Terrence J; O'Leary, Timothy
2016-01-01
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’ (Doyle and Kiebler, 2011). Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi-belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology-dependent tradeoff between the speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. These findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons. DOI: http://dx.doi.org/10.7554/eLife.20556.001 PMID:28034367
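The tradeoff is easy to reproduce in a caricature of the model. The sketch below is a generic one-dimensional trafficking chain, not the authors' reconstructed-morphology model: cargo hops between compartments (anterograde rate a, retrograde rate b) and detaches for delivery at rate d; raising d delivers faster but degrades the precision of the final spatial distribution. All rates are illustrative.

    import numpy as np

    def sushi_belt(n_comp=50, a=1.0, b=1.0, d=0.01, t_end=500.0, dt=0.01):
        """Toy 1-D 'sushi-belt' transport with detachment for delivery."""
        u = np.zeros(n_comp)
        u[0] = 1.0                      # all cargo starts at the soma
        delivered = np.zeros(n_comp)
        for _ in range(int(t_end / dt)):
            flux_fwd = a * u[:-1]       # anterograde hops
            flux_bwd = b * u[1:]        # retrograde hops
            du = -d * u                 # detachment (delivery)
            du[:-1] += flux_bwd - flux_fwd
            du[1:] += flux_fwd - flux_bwd
            delivered += d * u * dt
            u += du * dt
        return delivered

    profile = sushi_belt()
    print(profile.sum())                # fraction delivered by t_end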
NASA Technical Reports Server (NTRS)
Cwik, Tom; Zuffada, Cinzia; Jamnejad, Vahraz
1996-01-01
Finite element modeling has proven useful for accurately simulating scattered or radiated fields from complex three-dimensional objects whose geometry varies on the scale of a fraction of a wavelength.
A Combinatorial Model of Malware Diffusion via Bluetooth Connections
Merler, Stefano; Jurman, Giuseppe
2013-01-01
We outline here the mathematical expression of a diffusion model for cellphones malware transmitted through Bluetooth channels. In particular, we provide the deterministic formula underlying the proposed infection model, in its equivalent recursive (simple but computationally heavy) and closed form (more complex but efficiently computable) expression. PMID:23555677
Lessons from the Specific Factors Model of International Trade.
ERIC Educational Resources Information Center
Tohamy, Soumaya M.; Mixon, J. Wilson, Jr.
2003-01-01
Uses the Specific Factors model to illustrate the meaning of economic efficiency, how complex economies simultaneously determine prices and quantities, and how changes in demand conditions or technology can affect income distribution among owners of factors of production. Employs spreadsheets to help students see how the model works. (JEH)
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods in that it is neither too greedy nor too slow. It is closely related to L1 norm optimization, which has the advantage of low prediction variance, sacrificing some model bias in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models, with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes. Direct matrix inversions are thereby avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency, compared with the original approach where the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
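The selection behaviour of least angle regression (without this paper's recursive speed-ups) is available off the shelf in scikit-learn, which makes for a convenient reference implementation; the data below are synthetic.

    import numpy as np
    from sklearn.linear_model import lars_path

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    true_coef = np.zeros(20)
    true_coef[:3] = [2.0, -1.5, 0.5]
    y = X @ true_coef + 0.1 * rng.standard_normal(100)

    # Full LAR path: variables enter in order of correlation with residual
    alphas, active, coefs = lars_path(X, y, method="lar")
    print("selection order:", active[:5])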
An integrated modelling framework for neural circuits with multiple neuromodulators.
Joshi, Alok; Youssofzadeh, Vahab; Vemana, Vinith; McGinnity, T M; Prasad, Girijesh; Wong-Lin, KongFatt
2017-01-01
Neuromodulators are endogenous neurochemicals that regulate biophysical and biochemical processes, which control brain function and behaviour, and are often the targets of neuropharmacological drugs. Neuromodulator effects are generally complex, partly owing to the involvement of broad innervation, co-release of neuromodulators, complex intra- and extrasynaptic mechanisms, the existence of multiple receptor subtypes and high interconnectivity within the brain. In this work, we propose an efficient yet sufficiently realistic computational neural modelling framework to study some of these complex behaviours. Specifically, we propose a novel dynamical neural circuit model that integrates the effective neuromodulator-induced currents based on various experimental data (e.g. electrophysiology, neuropharmacology and voltammetry). The model can incorporate multiple interacting brain regions, including neuromodulator sources, simulates efficiently and is easily extendable to large-scale brain models, e.g. for neuroimaging purposes. As an example, we model a network of mutually interacting neural populations in the lateral hypothalamus, dorsal raphe nucleus and locus coeruleus, which are major sources of the neuromodulators orexin/hypocretin, serotonin and norepinephrine/noradrenaline, respectively, and which play significant roles in regulating many physiological functions. We demonstrate that such a model can provide predictions of systemic drug effects of the popular antidepressants (e.g. reuptake inhibitors), neuromodulator antagonists or their combinations. Finally, we developed user-friendly graphical user interface software for model simulation and visualization for both fundamental sciences and pharmacological studies. © 2017 The Authors.
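A caricature of such a circuit model fits in a few lines. The sketch below is not the authors' framework: it is a generic firing-rate network in which the neuromodulator-induced currents enter as an additive term i_mod, and the three populations, weights and currents are invented placeholders for the LHA-DRN-LC loop.

    import numpy as np

    def simulate_circuit(w, i_ext, i_mod, tau=0.01, dt=1e-4, t_end=2.0):
        """Firing-rate network with an additive neuromodulatory current:
            tau dr/dt = -r + phi(W r + i_ext + i_mod)
        """
        phi = lambda x: np.maximum(x, 0.0)    # threshold-linear gain
        r = np.zeros(w.shape[0])
        for _ in range(int(t_end / dt)):
            r += dt / tau * (-r + phi(w @ r + i_ext + i_mod))
        return r

    # Toy 3-population loop standing in for LHA, DRN and LC
    w = np.array([[0.0, -0.3, 0.2],
                  [0.4,  0.0, -0.1],
                  [0.3,  0.2,  0.0]])
    rates = simulate_circuit(w, i_ext=np.array([1.0, 0.8, 0.6]),
                             i_mod=np.array([0.2, 0.0, -0.1]))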
Turbofan Duct Propagation Model
NASA Technical Reports Server (NTRS)
Lan, Justin H.; Posey, Joe W. (Technical Monitor)
2001-01-01
The CDUCT code utilizes a parabolic approximation to the convected Helmholtz equation in order to efficiently model acoustic propagation in acoustically treated, complex-shaped ducts. The parabolic approximation solves one-way wave propagation with a marching method which neglects backward reflected waves. The derivation of the parabolic approximation is presented. Several code validation cases are given. An acoustic lining design process for an example aft fan duct is discussed. It is noted that the method can efficiently model realistic three-dimensional effects, acoustic lining, and flow within the computational capabilities of a typical computer workstation.
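The essence of a one-way marching scheme of this kind can be shown in a few lines. The sketch below is illustrative only and omits everything that makes CDUCT useful (convection, lining impedance, duct geometry): it marches the free-space paraxial equation 2*i*k0*du/dz + d^2u/dx^2 = 0 with a split-step Fourier method, neglecting backward-travelling waves.

    import numpy as np

    def paraxial_march(u0, k0, dx, dz, n_steps):
        """Split-step Fourier march of the paraxial (one-way) equation."""
        u = u0.astype(complex)
        kx = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dx)
        phase = np.exp(-1j * kx**2 * dz / (2.0 * k0))  # exact per step
        for _ in range(n_steps):
            u = np.fft.ifft(phase * np.fft.fft(u))
        return u

    x = np.linspace(-1, 1, 256)
    beam = np.exp(-(x / 0.2) ** 2)           # initial transverse profile
    out = paraxial_march(beam, k0=50.0, dx=x[1] - x[0], dz=0.01, n_steps=200)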
EMILiO: a fast algorithm for genome-scale strain design.
Yang, Laurence; Cluett, William R; Mahadevan, Radhakrishnan
2011-05-01
Systems-level design of cell metabolism is becoming increasingly important for renewable production of fuels, chemicals, and drugs. Computational models are improving in the accuracy and scope of predictions, but are also growing in complexity. Consequently, efficient and scalable algorithms are increasingly important for strain design. Previous algorithms helped to consolidate the utility of computational modeling in this field. To meet intensifying demands for high-performance strains, both the number and variety of genetic manipulations involved in strain construction are increasing. Existing algorithms have experienced combinatorial increases in computational complexity when applied toward the design of such complex strains. Here, we present EMILiO, a new algorithm that increases the scope of strain design to include reactions with individually optimized fluxes. Unlike existing approaches that would experience an explosion in complexity to solve this problem, we efficiently generated numerous alternate strain designs producing succinate, l-glutamate and l-serine. This was enabled by successive linear programming, a technique new to the area of computational strain design. Copyright © 2011 Elsevier Inc. All rights reserved.
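EMILiO's starting point, like most strain-design algorithms, is the flux balance analysis (FBA) linear program over a stoichiometric matrix; its successive linear programming then re-solves such LPs while tuning individual flux bounds. A toy FBA instance with SciPy is sketched below; the five-reaction network and its bounds are invented for illustration.

    import numpy as np
    from scipy.optimize import linprog

    # Toy network: v1 uptake -> A; v2: A->B; v3: A->C; v4: C->B; v5: B->biomass
    S = np.array([          # rows: metabolites A, B, C (steady state S v = 0)
        [1, -1, -1,  0,  0],
        [0,  1,  0,  1, -1],
        [0,  0,  1, -1,  0],
    ])
    bounds = [(0, 10)] * 5
    c = np.zeros(5)
    c[4] = -1.0             # maximise biomass flux v5
    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
    print("max biomass flux:", -res.fun, "fluxes:", np.round(res.x, 3))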
Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas
2013-11-22
Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models, so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
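The reuse idea rests on viewing the forward algorithm as a product of per-symbol matrices, so the product for a repeated substring can be computed once and cached. The sketch below shows only the matrix-product view with per-symbol matrices precomputed; zipHMM's preprocessing additionally caches products over frequent substrings. Model and observations are toy values.

    import numpy as np

    def forward_loglik(pi, T, E, obs):
        """HMM log-likelihood via the matrix-product view of the forward
        algorithm: each symbol o contributes M[o] = diag(E[:, o]) @ T.T.
        """
        M = [E[:, o][:, None] * T.T for o in range(E.shape[1])]
        alpha = pi * E[:, obs[0]]
        loglik = 0.0
        for o in obs[1:]:
            alpha = M[o] @ alpha
            c = alpha.sum()            # rescale to avoid underflow
            loglik += np.log(c)
            alpha /= c
        return loglik + np.log(alpha.sum())

    pi = np.array([0.6, 0.4])
    T = np.array([[0.9, 0.1], [0.2, 0.8]])
    E = np.array([[0.7, 0.3], [0.1, 0.9]])
    obs = np.array([0, 1, 1, 0, 1] * 1000)
    print(forward_loglik(pi, T, E, obs))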
NASA Astrophysics Data System (ADS)
Ge, Liang; Sotiropoulos, Fotis
2007-08-01
A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the existence of complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
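In its simplest single-element, first-order form, a least-square HDMR surrogate is an ordinary linear least-squares fit in a univariate basis per input. The sketch below assumes inputs scaled to [-1, 1] and a Legendre basis; the paper's method goes further by decomposing the random domain adaptively into elements and coupling with multiscale finite elements.

    import numpy as np
    from numpy.polynomial import legendre

    def hdmr_first_order(X, y, degree=3):
        """First-order least-square HDMR on [-1, 1]^d:
            f(x) ~ f0 + sum_i f_i(x_i),  f_i in a Legendre basis.
        """
        n, d = X.shape
        cols = [np.ones((n, 1))]
        for i in range(d):
            # Legendre polynomials P_1..P_degree evaluated at x_i
            cols.append(legendre.legvander(X[:, i], degree)[:, 1:])
        A = np.hstack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return A, coef

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(400, 4))
    y = np.sin(np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2   # no interactions
    A, coef = hdmr_first_order(X, y)
    print("residual RMS:", np.sqrt(np.mean((A @ coef - y) ** 2)))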
OpenFOAM: Open source CFD in research and industry
NASA Astrophysics Data System (ADS)
Jasak, Hrvoje
2009-12-01
The current focus of development in industrial Computational Fluid Dynamics (CFD) is integration of CFD into Computer-Aided product development, geometrical optimisation, robust design and similar. On the other hand, research in CFD aims to extend the boundaries of practical engineering use in "non-traditional" areas. The requirements of computational flexibility and code integration are contradictory: a change of coding paradigm, with object orientation, library components and equation mimicking, is proposed as a way forward. This paper describes OpenFOAM, a C++ object oriented library for Computational Continuum Mechanics (CCM) developed by the author. Efficient and flexible implementation of complex physical models is achieved by mimicking the form of the partial differential equation in software, with code functionality provided in library form. The open source deployment and development model allows the user to achieve desired versatility in physical modeling without the sacrifice of complex geometry support and execution efficiency.
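Equation mimicking is easiest to see in code. OpenFOAM itself is C++; to keep a single example language in this document, the sketch below uses FiPy, a separate Python library built around the same idea, in which a transient diffusion equation is assembled term by term, much as it is written on paper.

    from fipy import CellVariable, Grid1D, TransientTerm, DiffusionTerm

    # 1-D transient diffusion, written the way the equation reads on paper
    mesh = Grid1D(nx=100, dx=0.01)
    T = CellVariable(name="temperature", mesh=mesh, value=0.0)
    T.constrain(1.0, mesh.facesLeft)        # fixed-value boundary condition

    eq = TransientTerm() == DiffusionTerm(coeff=1e-3)
    for _ in range(50):
        eq.solve(var=T, dt=0.1)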
Clima, Lilia; Ursu, Elena L; Cojocaru, Corneliu; Rotaru, Alexandru; Barboiu, Mihail; Pinteala, Mariana
2015-09-28
The complexes formed by DNA and polycations have received great attention owing to their potential application in gene therapy. In this study, the binding efficiency between double-stranded oligonucleotides (dsDNA) and branched polyethylenimine (B-PEI) has been quantified by processing of the images captured from gel electrophoresis assays. A central composite experimental design has been employed to investigate the effects of controllable factors on the binding efficiency. On the basis of the experimental data and the response surface methodology, a multivariate regression model has been constructed and statistically validated. The model has enabled us to predict the binding efficiency depending on experimental factors, such as the concentrations of dsDNA and B-PEI as well as the initial pH of the solution. The optimization of the binding process has been performed using simplex and gradient methods. The optimal conditions determined for polyplex formation have yielded a maximal binding efficiency close to 100%. In order to reveal the mechanism of complex formation at the atomic scale, a molecular dynamics simulation has been carried out. According to the computation results, B-PEI amine hydrogen atoms have interacted with oxygen atoms from dsDNA phosphate groups. These interactions have led to the formation of hydrogen bonds between the macromolecules, stabilizing the polyplex structure.
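The response-surface workflow in this study can be sketched generically: build a three-level design in coded factors, fit the standard second-order polynomial by least squares, then optimize it. The factor layout and response below are placeholders, not the study's data; a full three-level factorial stands in for the central composite design.

    import numpy as np

    def quadratic_rsm(X, y):
        """Fit the second-order response-surface model
            y ~ b0 + sum b_i x_i + sum b_ii x_i^2 + sum b_ij x_i x_j
        by least squares (factors in coded units).
        """
        n, d = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(d)]
        cols += [X[:, i] ** 2 for i in range(d)]
        cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
        A = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return beta

    levels = [-1.0, 0.0, 1.0]
    X = np.array([(a, b, c) for a in levels for b in levels for c in levels])
    y = 90 - 5 * X[:, 0] ** 2 - 3 * X[:, 1] ** 2 + 2 * X[:, 0] * X[:, 1] + X[:, 2]
    print(np.round(quadratic_rsm(X, y), 2))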
Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip
2007-01-01
This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and also to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.
GIS Toolsets for Planetary Geomorphology and Landing-Site Analysis
NASA Astrophysics Data System (ADS)
Nass, Andrea; van Gasselt, Stephan
2015-04-01
Modern Geographic Information Systems (GIS) allow expert and lay users alike to load and position geographic data and perform simple to highly complex surface analyses. For many applications dedicated and ready-to-use GIS tools are available in standard software systems, while other applications require the modular combination of available basic tools to answer more specific questions. This also applies to analyses in modern planetary geomorphology, where many such (basic) tools can be used to build complex analysis tools, e.g. in image and terrain-model analysis. Apart from the simple application of sets of different tools, many complex tasks require a more sophisticated design for storing and accessing data using databases (e.g. ArcHydro for hydrological data analysis). In planetary sciences, complex database-driven models are often required to efficiently analyse potential landing sites or store rover data, but geologic mapping data can also be efficiently stored and accessed using database models rather than stand-alone shapefiles. For landing-site analyses, relief and surface roughness estimates are two common concepts that are of particular interest, and for both a number of different definitions co-exist. We here present an advanced toolset for the analysis of image and terrain-model data with an emphasis on the extraction of landing site characteristics using established criteria. We provide working examples and particularly focus on the concepts of terrain roughness as it is interpreted in geomorphology and engineering studies.
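One of the co-existing roughness definitions mentioned here, the local standard deviation of elevation in a moving window, takes only a few lines on a raster terrain model; the window size and the synthetic DEM below are illustrative.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def terrain_roughness(dem, window=5):
        """Roughness as the windowed standard deviation of elevation
        (slope- and residual-based variants also exist)."""
        dem = dem.astype(float)
        mean = uniform_filter(dem, size=window)
        mean_sq = uniform_filter(dem**2, size=window)
        return np.sqrt(np.maximum(mean_sq - mean**2, 0.0))

    dem = np.random.default_rng(2).normal(0.0, 1.0, (200, 200)).cumsum(axis=0)
    rough = terrain_roughness(dem, window=7)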
The complexity of air quality modeling systems and of air quality monitoring data makes ad-hoc systems for model evaluation important aids to the modeling community. Among those are the ENSEMBLE system developed by the EC-Joint Research Center, and the AMET software developed by the US-...
Li, Ying; He, Zhen-Dan; Zheng, Qian-En; Hu, Chengshen; Lai, Wing-Fu
2018-05-14
Over the years, various methods have been developed to enhance the solubility of insoluble drugs; however, most of these methods are time-consuming and labor-intensive or involve the use of toxic materials. A method that can safely and effectively enhance the solubility of insoluble drugs is lacking. This study adopted baicalin as an insoluble drug model and used hydroxypropyl-β-cyclodextrin for the delivery of baicalin via inclusion complexation by supercritical fluid encapsulation. Different parameters for the complex preparation, as well as the physicochemical properties of the complex, have been investigated. Our results showed that, when compared to the conventional solution mixing approach, supercritical fluid encapsulation enables a more precise control of the properties of the complex, and gives higher loading and encapsulation efficiency. It is anticipated that our reported method can be useful in enhancing the preparation efficiency of inclusion complexes, and can expand the application potential of insoluble herbal ingredients in treatment development and pharmaceutical formulation.
Glaholt, Stephen P; Chen, Celia Y; Demidenko, Eugene; Bugge, Deenie M; Folt, Carol L; Shaw, Joseph R
2012-08-15
The study of stressor interactions by eco-toxicologists using nonlinear response variables is limited by the required amounts of a priori knowledge, the complexity of experimental designs, the use of linear models, and the lack of use of optimal designs of nonlinear models to characterize complex interactions. Therefore, we developed AID, an adaptive-iterative design for eco-toxicologists to more accurately and efficiently examine complex multiple stressor interactions. AID incorporates the power of the general linear model and the A-optimal criterion with an iterative process that: 1) minimizes the required amount of a priori knowledge, 2) simplifies the experimental design, and 3) quantifies both individual and interactive effects. Once a stable model is determined, the best-fit model is identified and the direction and magnitude of stressors, individually and in all combinations (including complex interactions), are quantified. To validate AID, we selected five commonly co-occurring components of polluted aquatic systems, three metal stressors (Cd, Zn, As) and two water chemistry parameters (pH, hardness), to be tested using standard acute toxicity tests in which Daphnia mortality is the (nonlinear) response variable. We found that, after the initial input of experimental data (literature values, e.g. EC-values, may also be used) and only two iterations of AID, our dose-response model was stable. The model ln(Cd)*ln(Zn) was determined to be the best predictor of the Daphnia mortality response to the combined effects of Cd, Zn, As, pH, and hardness. This model was then used to accurately identify and quantify the strength of both greater-than-additive (e.g. As*Cd) and less-than-additive interactions (e.g. Cd*Zn). Interestingly, our study found only binary interactions significant, not higher order interactions. We conclude that AID is more efficient and effective at assessing multiple stressor interactions than current methods. Other applications, including life-history endpoints commonly used by regulators, could benefit from AID's efficiency in assessing water quality criteria. Copyright © 2012 Elsevier B.V. All rights reserved.
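The final model is an ordinary GLM with a log-log interaction, which is straightforward to fit once the data are in hand. The sketch below uses statsmodels on simulated per-animal mortality data (all numbers invented) to fit the same ln(Cd)*ln(Zn) form, i.e. both main effects plus their interaction.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical acute-toxicity data: one row per Daphnia, 0/1 mortality
    rng = np.random.default_rng(3)
    n = 2000
    cd = rng.uniform(0.1, 10.0, n)
    zn = rng.uniform(0.1, 50.0, n)
    eta = -2.0 + 0.8 * np.log(cd) * np.log(zn)   # built-in interaction
    dead = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    df = pd.DataFrame({"Cd": cd, "Zn": zn, "dead": dead})

    # ln(Cd)*ln(Zn): main effects plus their interaction
    fit = smf.glm("dead ~ np.log(Cd) * np.log(Zn)", data=df,
                  family=sm.families.Binomial()).fit()
    print(fit.params)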
NASA Technical Reports Server (NTRS)
Maccormack, R. W.
1978-01-01
The calculation of flow fields past aircraft configurations at flight Reynolds numbers is considered. Progress in devising accurate and efficient numerical methods, in understanding and modeling the physics of turbulence, and in developing reliable and powerful computer hardware is discussed. Emphasis is placed on efficient solutions to the Navier-Stokes equations.
Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.
ERIC Educational Resources Information Center
Simpson, William A.
In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…
Remote sensing image ship target detection method based on visual attention model
NASA Astrophysics Data System (ADS)
Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong
2017-11-01
Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. With this in mind, a method of ship target detection in remote sensing images based on a visual attention model is proposed in this paper. The experimental results show that the proposed method can reduce the computational complexity while improving the detection accuracy, and improves the detection efficiency of ship targets in remote sensing images.
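The abstract does not specify which attention model is used; a classic bottom-up choice that fits the description is the spectral-residual saliency map of Hou and Zhang (2007), sketched below on a synthetic image with a single bright target.

    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(img):
        """Spectral-residual saliency (Hou & Zhang, 2007): suppress the
        smooth part of the log-amplitude spectrum, keep the residual."""
        F = np.fft.fft2(img)
        log_amp = np.log(np.abs(F) + 1e-12)
        residual = log_amp - uniform_filter(log_amp, size=3)
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(F)))) ** 2
        return gaussian_filter(sal, sigma=2.5)

    img = np.zeros((128, 128))
    img[60:68, 60:68] = 1.0                  # a small bright 'ship'
    sal = spectral_residual_saliency(img)
    print(np.unravel_index(sal.argmax(), sal.shape))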
Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters
NASA Astrophysics Data System (ADS)
Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.
2004-12-01
Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them on appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various large heterogeneous spatial-temporal datasets provide evidence that the benefits of the proposed methodology for efficient and accurate learning exist beyond the area of retrieval of geophysical parameters.
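A minimal version of the progressive-sampling ensemble can be sketched with scikit-learn; the chunk sizes, network widths and stopping tolerance below are illustrative choices, not the authors' settings.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def stream_ensemble(X, y, chunk_sizes=(200, 400, 800, 1600), tol=1e-3):
        """Train networks of growing size on growing chunks, stop when
        held-out error stops improving, and average the members."""
        members, prev_err = [], np.inf
        X_val, y_val = X[-500:], y[-500:]      # held-out tail of the stream
        for width, n in zip((4, 8, 16, 32), chunk_sizes):
            net = MLPRegressor(hidden_layer_sizes=(width,), max_iter=2000,
                               random_state=0).fit(X[:n], y[:n])
            members.append(net)
            pred = np.mean([m.predict(X_val) for m in members], axis=0)
            err = np.mean((pred - y_val) ** 2)
            if prev_err - err < tol:
                break
            prev_err = err
        return members

    rng = np.random.default_rng(4)
    X = rng.uniform(-1, 1, (2500, 3))
    y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.standard_normal(2500)
    ensemble = stream_ensemble(X, y)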
Scaffolding in Complex Modelling Situations
ERIC Educational Resources Information Center
Stender, Peter; Kaiser, Gabriele
2015-01-01
The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with many unsolved problems so far. Amongst others, there hardly exists any empirical knowledge about efficient ways of possible teacher support with students' activities, which should be mainly independent from the teacher. The research…
NASA Astrophysics Data System (ADS)
Song, Pei; Jiang, Chun
2013-05-01
The effect on the photoelectric conversion efficiency of an a-Si-based solar cell of applying a solar spectral downshifter made of rare earth ion Ce3+ single-doped complexes, including yttrium aluminum garnet Y3Al5O12 single crystals, nanostructured ceramics, microstructured ceramics and B2O3-SiO2-Gd2O3-BaO glass, is studied. The photoluminescence excitation spectra in the region 360-460 nm convert effectively into photoluminescence emission spectra in the region 450-550 nm, where a-Si-based solar cells exhibit a higher spectral response. When these Ce3+ single-doped complexes are placed on top of an a-Si-based solar cell as precursors for solar spectral downshifting, theoretical relative photoelectric conversion efficiencies of nc-Si:H and a-Si:H solar cells approach 1.09-1.13 and 1.04-1.07, respectively, by means of AMPS-1D numerical modeling, potentially benefiting a-Si-based solar cells with a photoelectric efficiency improvement.
Efficient evaluation of wireless real-time control networks.
Horvath, Peter; Yampolskiy, Mark; Koutsoukos, Xenofon
2015-02-11
In this paper, we present a system simulation framework for the design and performance evaluation of complex wireless cyber-physical systems. We describe the simulator architecture and the specific developments that are required to simulate cyber-physical systems relying on multi-channel, multi-hop mesh networks. We introduce realistic and efficient physical layer models and a system simulation methodology which provides statistically significant performance evaluation results with low computational complexity. The capabilities of the proposed framework are illustrated using the example of WirelessHART, a centralized, real-time, multi-hop mesh network designed for industrial control and monitoring applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mundaca, Luis; Neij, Lena; Worrell, Ernst
The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, the type of evaluation being carried out, the treatment of market and behavioural failures, the evaluated policy instruments, and the key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.
Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models
Liu, Ziyue; Cappola, Anne R.; Crofford, Leslie J.; Guo, Wensheng
2013-01-01
The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of state space form, the resultant models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationship. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls. PMID:24729646
Modeling Bivariate Longitudinal Hormone Profiles by Hierarchical State Space Models.
Liu, Ziyue; Cappola, Anne R; Crofford, Leslie J; Guo, Wensheng
2014-01-01
The hypothalamic-pituitary-adrenal (HPA) axis is crucial in coping with stress and maintaining homeostasis. Hormones produced by the HPA axis exhibit both complex univariate longitudinal profiles and complex relationships among different hormones. Consequently, modeling these multivariate longitudinal hormone profiles is a challenging task. In this paper, we propose a bivariate hierarchical state space model, in which each hormone profile is modeled by a hierarchical state space model, with both population-average and subject-specific components. The bivariate model is constructed by concatenating the univariate models based on the hypothesized relationship. Because of the flexible framework of state space form, the resultant models not only can handle complex individual profiles, but also can incorporate complex relationships between two hormones, including both concurrent and feedback relationship. Estimation and inference are based on marginal likelihood and posterior means and variances. Computationally efficient Kalman filtering and smoothing algorithms are used for implementation. Application of the proposed method to a study of chronic fatigue syndrome and fibromyalgia reveals that the relationships between adrenocorticotropic hormone and cortisol in the patient group are weaker than in healthy controls.
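The Kalman filtering step that makes such state space models computationally tractable can be illustrated compactly. Below is a minimal univariate sketch for an AR(1)-plus-noise model; the model structure, parameters, and simulated data are illustrative stand-ins, not the authors' bivariate hierarchical formulation.

```python
import numpy as np

def kalman_filter(y, phi, q, r, m0=0.0, p0=1.0):
    """Filter the univariate state space model
    state:       x_t = phi * x_{t-1} + w_t,  w_t ~ N(0, q)
    observation: y_t = x_t + v_t,            v_t ~ N(0, r)
    and return the filtered means and variances."""
    n = len(y)
    means, variances = np.empty(n), np.empty(n)
    m, p = m0, p0
    for t in range(n):
        m_pred = phi * m                    # predict
        p_pred = phi * p * phi + q
        k = p_pred / (p_pred + r)           # Kalman gain
        m = m_pred + k * (y[t] - m_pred)    # update with observation
        p = (1.0 - k) * p_pred
        means[t], variances[t] = m, p
    return means, variances

# Usage: recover a smooth latent profile from a noisy hormone-like series
rng = np.random.default_rng(0)
x, obs = 0.0, []
for _ in range(100):
    x = 0.9 * x + rng.normal(scale=0.5)
    obs.append(x + rng.normal(scale=1.0))
means, variances = kalman_filter(np.array(obs), phi=0.9, q=0.25, r=1.0)
```

A smoothing pass running backward over the filtered estimates would complete the picture; the bivariate case stacks two such models and couples them through the state transition.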
Giron-Gonzalez, M Dolores; Salto-Gonzalez, Rafael; Lopez-Jaramillo, F Javier; Salinas-Castillo, Alfonso; Jodar-Reyes, Ana Belen; Ortega-Muñoz, Mariano; Hernandez-Mateo, Fernando; Santoyo-Gonzalez, Francisco
2016-03-16
Gene transfection mediated by the cationic polymer polyethylenimine (PEI) is considered a standard methodology. However, while highly branched PEIs form smaller polyplexes with DNA that exhibit high transfection efficiencies, they have significant cell toxicity. Conversely, low molecular weight PEIs (LMW-PEIs) with favorable cytotoxicity profiles display minimum transfection activities as a result of inadequate DNA complexation and protection. To solve this paradox, a novel polyelectrolyte complex was prepared by the ionic cross-linking of branched 1.8 kDa PEI with citric acid (CA). This system synergistically combines the good cytotoxicity profile exhibited by LMW-PEI with the high transfection efficiencies shown by highly branched and high molecular weight PEIs. The polyelectrolyte complex (1.8 kDa-PEI@CA) was obtained by a simple synthetic protocol based on the microwave irradiation of a solution of 1.8 kDa PEI and CA. Upon complexation with DNA, intrinsic properties of the resulting particles (size and surface charge) were measured and their ability to form stable polyplexes was determined. Compared with unmodified PEIs, the new complexes behave as efficient gene vectors and showed enhanced DNA binding capability associated with facilitated intracellular DNA release and enhanced DNA protection from endonuclease degradation. In addition, while transfection values for LMW-PEIs are almost null, transfection efficiencies of the new reagent range from 2.5- to 3.8-fold relative to those of Lipofectamine 2000 and 25 kDa PEI in several cell lines in culture, such as CHO-k1, FTO2B hepatomas, L6 myoblasts, or NRK cells, while simultaneously showing negligible toxicity. Furthermore, the 1.8 kDa-PEI@CA polyelectrolyte complexes retained the capability to transfect eukaryotic cells in the presence of serum and exhibited the capability to promote in vivo transfection in mice (as an animal model) with an enhanced efficiency compared to 25 kDa PEI. Results support the polyelectrolyte complex of LMW-PEI and CA as a promising generic nonviral gene carrier.
Duggin, Iain G; Matthews, Jacqueline M; Dixon, Nicholas E; Wake, R Gerry; Mackay, Joel P
2005-04-01
Two dimers of the replication terminator protein (RTP) of Bacillus subtilis bind to a chromosomal DNA terminator site to effect polar replication fork arrest. Cooperative binding of the dimers to overlapping half-sites within the terminator is essential for arrest. It was suggested previously that polarity of fork arrest is the result of the RTP dimer at the blocking (proximal) side within the complex binding very tightly and the permissive-side RTP dimer binding relatively weakly. In order to investigate this "differential binding affinity" model, we have constructed a series of mutant terminators that contain half-sites of widely different RTP binding affinities in various combinations. Although there appeared to be a correlation between binding affinity at the proximal half-site and fork arrest efficiency in vivo for some terminators, several deviated significantly from this correlation. Some terminators exhibited greatly reduced binding cooperativity (and therefore have reduced affinity at each half-site) but were highly efficient in fork arrest, whereas one terminator had normal affinity over the proximal half-site, yet had low fork arrest efficiency. The results show clearly that there is no direct correlation between the RTP binding affinity (either within the full complex or at the proximal half-site within the full complex) and the efficiency of replication fork arrest in vivo. Thus, the differential binding affinity over the proximal and distal half-sites cannot be solely responsible for functional polarity of fork arrest. Furthermore, efficient fork arrest relies on features in addition to the tight binding of RTP to terminator DNA.
Information and complexity measures for hydrologic model evaluation
USDA-ARS?s Scientific Manuscript database
Hydrological models are commonly evaluated through residual-based performance measures such as the root-mean-square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance the numerical stability. An adaptive method is proposed to improve the running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
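To make the finite volume update concrete, here is a minimal one-dimensional sketch with a Rusanov (local Lax-Friedrichs) flux and a simple wet/dry depth threshold. It only illustrates the scheme's structure under assumed parameters; the paper's model is two-dimensional and unstructured, and its wet/dry treatment is more sophisticated.

```python
import numpy as np

G, H_DRY = 9.81, 1e-6          # gravity; wet/dry depth threshold

def flux(h, hu):
    """Physical flux of the 1D shallow water equations."""
    u = hu / h if h > H_DRY else 0.0
    return np.array([hu, hu * u + 0.5 * G * h * h])

def rusanov_step(h, hu, dx, dt):
    """One first-order Godunov-type finite volume step."""
    n = len(h)
    fl = np.zeros((n + 1, 2))
    for i in range(1, n):      # numerical flux at each interior interface
        hl, hr = h[i - 1], h[i]
        ul = hu[i - 1] / hl if hl > H_DRY else 0.0
        ur = hu[i] / hr if hr > H_DRY else 0.0
        s = max(abs(ul) + np.sqrt(G * hl), abs(ur) + np.sqrt(G * hr))
        jump = np.array([hr - hl, hu[i] - hu[i - 1]])
        fl[i] = 0.5 * (flux(hl, hu[i - 1]) + flux(hr, hu[i])) - 0.5 * s * jump
    hn, hun = h.copy(), hu.copy()
    for i in range(1, n - 1):
        hn[i] -= dt / dx * (fl[i + 1][0] - fl[i][0])
        hun[i] -= dt / dx * (fl[i + 1][1] - fl[i][1])
        if hn[i] < H_DRY:      # wet/dry guard: dry cells carry no momentum
            hn[i], hun[i] = 0.0, 0.0
    return hn, hun

# Usage: dam break onto a dry bed
h = np.where(np.arange(200) < 100, 2.0, 0.0)
hu = np.zeros(200)
for _ in range(100):
    h, hu = rusanov_step(h, hu, dx=1.0, dt=0.05)
```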
Self-learning Monte Carlo with deep neural networks
NASA Astrophysics Data System (ADS)
Shen, Huitao; Liu, Junwei; Fu, Liang
2018-05-01
The self-learning Monte Carlo (SLMC) method is a general algorithm to speed up MC simulations. Its efficiency has been demonstrated in various systems by introducing an effective model to propose global moves in the configuration space. In this paper, we show that deep neural networks can be naturally incorporated into SLMC, and without any prior knowledge can learn the original model accurately and efficiently. Demonstrated in quantum impurity models, we reduce the complexity for a local update from O(β²) in the Hirsch-Fye algorithm to O(β ln β), which is a significant speedup especially for systems at low temperatures.
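Stripped of the neural network, the SLMC acceptance step looks as follows: an inner chain samples a cheap effective model, and the endpoint is accepted against the exact weights with the correction A = min(1, [W(x')/W(x)] * [W_eff(x)/W_eff(x')]). The sketch below uses toy one-dimensional log-weights as stand-ins for the expensive model and the learned surrogate.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_w_exact(x):
    """Exact (expensive) log-weight; a toy double-well stands in here."""
    return -(x**2 - 1.0)**2 / 0.5

def log_w_eff(x):
    """Cheap effective log-weight; a Gaussian mixture stands in for a
    trained surrogate of the exact model."""
    return np.logaddexp(-(x - 1.0)**2 / 0.2, -(x + 1.0)**2 / 0.2)

def slmc_step(x, n_inner=20):
    """Propose via an inner Metropolis chain on the effective model, then
    correct the endpoint with the exact/effective weight ratio."""
    y = x
    for _ in range(n_inner):               # cheap global move
        y_new = y + rng.normal(scale=0.5)
        if np.log(rng.random()) < log_w_eff(y_new) - log_w_eff(y):
            y = y_new
    log_a = (log_w_exact(y) - log_w_exact(x)) - (log_w_eff(y) - log_w_eff(x))
    return y if np.log(rng.random()) < log_a else x

x, chain = 0.0, []
for _ in range(2000):
    x = slmc_step(x)
    chain.append(x)                        # samples of the exact distribution
```

The closer the effective model tracks the exact one, the closer the outer acceptance rate is to unity, which is where the learned surrogate earns its speedup.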
The efficiency of geophysical adjoint codes generated by automatic differentiation tools
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Köhl, A.; Stammer, D.
2016-02-01
The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing even large geophysical data assimilation problems to be handled. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
USDA-ARS?s Scientific Manuscript database
Feed efficiency (FE), characterized as the ability to convert feed nutrients into saleable milk or meat, directly affects the profitability of dairy production and is of increasing economic importance in the dairy industry. We conjecture that FE is a complex trait whose variation and relationships or pa...
Development and Application of Agglomerated Multigrid Methods for Complex Geometries
NASA Technical Reports Server (NTRS)
Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.
2010-01-01
We report progress in the development of agglomerated multigrid techniques for fully unstructured grids in three dimensions, building upon two previous studies focused on efficiently solving a model diffusion equation. We demonstrate a robust fully-coarsened agglomerated multigrid technique for 3D complex geometries, incorporating the following key developments: consistent and stable coarse-grid discretizations, a hierarchical agglomeration scheme, and line-agglomeration/relaxation using prismatic-cell discretizations in the highly-stretched grid regions. A significant speed-up in computer time is demonstrated for a model diffusion problem, the Euler equations, and the Reynolds-averaged Navier-Stokes equations for 3D realistic complex geometries.
Options to improve energy efficiency for educational building
NASA Astrophysics Data System (ADS)
Jahan, Mafruha
The cost of energy is a major factor that must be considered for educational facility budget planning purposes. The analysis of energy-related issues and options can be complex and requires significant time and detailed effort. One way to facilitate the inclusion of energy option planning in facility planning efforts is to utilize a tool that allows for quick appraisal of the facility energy profile. Once such an appraisal is accomplished, it is then possible to rank energy improvement options consistently with other facility needs and requirements. After an energy efficiency option has been determined to have meaningful value in comparison with other facility planning options, it is then possible to utilize the initial appraisal as the basis for an expanded consideration of additional facility and energy use detail using the same analytic system used for the initial appraisal. This thesis has developed a methodology and an associated analytic model to assist in these tasks and thereby improve the energy efficiency of educational facilities. A detailed energy efficiency and analysis tool is described that utilizes specific university building characteristics such as size, architecture, envelope, lighting, occupancy, and thermal design, which allows the annual energy consumption to be reduced. Improving the energy efficiency of various aspects of an educational building's energy performance can be complex and can require significant time and experience to make decisions. The approach developed in this thesis initially assesses the energy design for a university building. This initial appraisal is intended to assist administrators in assessing the potential value of energy efficiency options for their particular facility. Subsequently, this scoping design can be extended as another stage of the model by local facility or planning personnel to add more details and engineering aspects to the initial screening model. This approach can assist university planning efforts to identify the most cost effective combinations of energy efficiency strategies. The model analyzes and compares the payback periods of all proposed Energy Performance Measures (EPMs) to determine which has the greatest potential value.
Calibration of HEC-Ras hydrodynamic model using gauged discharge data and flood inundation maps
NASA Astrophysics Data System (ADS)
Tong, Rui; Komma, Jürgen
2017-04-01
Flood estimation is essential for disaster alleviation. Hydrodynamic models are implemented to predict the occurrence and variation of floods at different scales. In practice, the calibration of hydrodynamic models aims to find the best possible parameters to represent the natural flow resistance. In recent years, the calibration of hydrodynamic models has become faster and more accurate, following advances in earth observation products and computer-based optimization techniques. In this study, the Hydrologic Engineering River Analysis System (HEC-Ras) model was set up with a high-resolution digital elevation model from laser scanning for the river Inn in Tyrol, Austria. The 10 largest flood events from 19 hourly discharge gauges, together with flood inundation maps, were selected to calibrate the HEC-Ras model. Manning roughness values and lateral inflow factors were automatically optimized as parameters with the Shuffled Complex with Principal Component Analysis (SP-UCI) algorithm, developed from the Shuffled Complex Evolution (SCE-UA) algorithm. Different objective functions (Nash-Sutcliffe model efficiency coefficient, timing of the peak, peak value and root-mean-square deviation) were used singly or in combination. It was found that the lateral inflow factor was the most sensitive parameter. The SP-UCI algorithm could avoid local optima and achieve efficient and effective parameter estimates in the calibration of the HEC-Ras model using flood extent images. As the results showed, calibration by means of gauged discharge data and flood inundation maps, together with the objective function of the Nash-Sutcliffe model efficiency coefficient, was very robust, yielding more reliable flood simulations that also captured the peak value and its timing.
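Among the objective functions named above, the Nash-Sutcliffe efficiency and the peak errors are simple to state; a minimal sketch follows, assuming paired simulated and observed hourly discharge arrays.

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def peak_errors(sim, obs, dt_hours=1.0):
    """Timing error (hours) and magnitude error of the flood peak."""
    i_sim, i_obs = int(np.argmax(sim)), int(np.argmax(obs))
    return (i_sim - i_obs) * dt_hours, float(sim[i_sim] - obs[i_obs])
```

A multi-objective calibration would combine such terms, for example as a weighted sum, before handing the score to an optimizer such as SP-UCI.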
Development and validation of a two-dimensional fast-response flood estimation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judi, David R; Mcpherson, Timothy N; Burian, Steven J
2009-01-01
A finite difference formulation of the shallow water equations using an upwind differencing method was developed, maintaining computational efficiency and accuracy such that it can be used as a fast-response flood estimation tool. The model was validated using both laboratory controlled experiments and an actual dam breach. Through the laboratory experiments, the model was shown to give good estimations of depth and velocity when compared to the measured data, as well as when compared to a more complex two-dimensional model. Additionally, the model was compared to high water mark data obtained from the failure of the Taum Sauk dam. The simulated inundation extent agreed well with the observed extent, with the most notable differences resulting from the inability to model sediment transport. The results of these validation studies show that a relatively simple numerical scheme used to solve the complete shallow water equations can be used to accurately estimate flood inundation. Future work will focus on further reducing the computation time needed to provide flood inundation estimates for fast-response analyses. This will be accomplished through the efficient use of multi-core, multi-processor computers coupled with an efficient domain-tracking algorithm, as well as an understanding of the impacts of grid resolution on model results.
Complex fuzzy soft expert sets
NASA Astrophysics Data System (ADS)
Selvachandran, Ganeshsree; Hafeed, Nisren A.; Salleh, Abdul Razak
2017-04-01
Complex fuzzy sets and their accompanying theory, although at their infancy, have proven to be superior to classical type-1 fuzzy sets, due to their ability in representing time-periodic problem parameters and capturing the seasonality of the fuzziness that exists in the elements of a set. These are important characteristics that are pervasive in most real world problems. However, there are two major problems that are inherent in complex fuzzy sets: they lack a sufficient parameterization tool and they do not have a mechanism to validate the values assigned to the membership functions of the elements in a set. To overcome these problems, we propose the notion of complex fuzzy soft expert sets, a hybrid model of complex fuzzy sets and soft expert sets. This model incorporates the advantages of complex fuzzy sets and soft sets, besides allowing users to know the opinion of all the experts in a single model without the need for any additional cumbersome operations. As such, this model effectively improves the accuracy of representation of problem parameters that are periodic in nature, besides having a higher level of computational efficiency compared to similar models in the literature.
Modelling DC responses of 3D complex fracture networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beskardes, Gungor Didem; Weiss, Chester Joseph
Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.
Modelling DC responses of 3D complex fracture networks
Beskardes, Gungor Didem; Weiss, Chester Joseph
2018-03-01
Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.
NASA Astrophysics Data System (ADS)
Behrens, Jörg; Hanke, Moritz; Jahns, Thomas
2014-05-01
In this talk we present a way to facilitate efficient use of MPI communication for developers of climate models. Exploitation of the performance potential of today's highly parallel supercomputers with real world simulations is a complex task. This is partly caused by the low level nature of the MPI communication library which is the dominant communication tool at least for inter-node communication. In order to manage the complexity of the task, climate simulations with non-trivial communication patterns often use an internal abstraction layer above MPI without exploiting the benefits of communication aggregation or MPI-datatypes. The solution for the complexity and performance problem we propose is the communication library YAXT. This library is built on top of MPI and takes high level descriptions of arbitrary domain decompositions and automatically derives an efficient collective data exchange. Several exchanges can be aggregated in order to reduce latency costs. Examples are given which demonstrate the simplicity and the performance gains for selected climate applications.
NASA Astrophysics Data System (ADS)
Kuz'min, V. V.; Salmin, V. V.; Salmina, A. B.; Provorov, A. S.
2008-07-01
The general properties of photodissociation of carboxyhemoglobin (HbCO) in buffer solutions of whole human blood are studied by the flash photolysis method on a setup with intersecting beams. It is shown that the efficiency of photoinduced dissociation of the HbCO complex depends virtually linearly on the photolytic irradiation intensity for average power densities not exceeding 45 mW cm-2. The general dissociation of the HbCO complex under native conditions occurs in a narrower range of values of the saturation degree than in model experiments with the hemoglobin solution. The dependence of the pulse photolysis efficiency of HbCO on the photolytic radiation wavelength in the range from 550 to 585 nm has a broad bell shape. The efficiency maximum corresponds to the electronic Q transition (porphyrin π-π* absorption) in HbCO at a wavelength of 570 nm. No dissociation of the complex was observed under the given experimental conditions upon irradiation of solutions by photolytic radiation at wavelengths above 585 nm.
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-11-01
Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces some challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. It is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping, and core allocation. The simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.
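The abstract does not detail the tailor-made operators, so the sketch below shows only the generic pattern such a genetic algorithm follows: a permutation chromosome assigning virtual nodes to physical nodes, an order crossover that preserves permutation validity, and a swap mutation. The cost matrix, population size, and fitness are hypothetical placeholders.

```python
import random

random.seed(0)

def fitness(mapping, cost):
    """Hypothetical objective: total cost of placing virtual node i
    on physical node mapping[i] (lower is better)."""
    return sum(cost[i][p] for i, p in enumerate(mapping))

def crossover(a, b):
    """Order crossover (OX): the child inherits a slice of parent a and
    fills the rest in parent b's order, keeping a valid permutation."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def mutate(m, rate=0.2):
    """Swap mutation also preserves permutation validity."""
    m = m[:]
    if random.random() < rate:
        i, j = random.sample(range(len(m)), 2)
        m[i], m[j] = m[j], m[i]
    return m

n_nodes = 6
cost = [[random.randint(1, 9) for _ in range(n_nodes)] for _ in range(n_nodes)]
pop = [random.sample(range(n_nodes), n_nodes) for _ in range(30)]
for _ in range(100):
    pop.sort(key=lambda m: fitness(m, cost))
    elite = pop[:10]                    # keep the best, breed the rest
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]
best = min(pop, key=lambda m: fitness(m, cost))
```

A real encoding for this problem would also carry genes for link routing, core allocation and spectrum blocks, with operators designed to respect spectrum contiguity constraints.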
Ward, Marie; McDonald, Nick; Morrison, Rabea; Gaynor, Des; Nugent, Tony
2010-02-01
Aircraft maintenance is a highly regulated, safety critical, complex and competitive industry. There is a need to develop innovative solutions to address process efficiency without compromising safety and quality. This paper presents the case that in order to improve a highly complex system such as aircraft maintenance, it is necessary to develop a comprehensive and ecologically valid model of the operational system, which represents not just what is meant to happen, but what normally happens. This model then provides the backdrop against which to change or improve the system. A performance report, the Blocker Report, specific to aircraft maintenance and related to the model was developed gathering data on anything that 'blocks' task or check performance. A Blocker Resolution Process was designed to resolve blockers and improve the current check system. Significant results were obtained for the company in the first trial and implications for safety management systems and hazard identification are discussed. Statement of Relevance: Aircraft maintenance is a safety critical, complex, competitive industry with a need to develop innovative solutions to address process and safety efficiency. This research addresses this through the development of a comprehensive and ecologically valid model of the system linked with a performance reporting and resolution system.
Economic Analysis of Biological Invasions in Forests
Tomas P. Holmes; Julian Aukema; Jeffrey Englin; Robert G. Haight; Kent Kovacs; Brian Leung
2014-01-01
Biological invasions of native forests by nonnative pests result from complex stochastic processes that are difficult to predict. Although economic optimization models describe efficient controls across the stages of an invasion, the ability to calibrate such models is constrained by lack of information on pest population dynamics and consequent economic damages. Here...
Song, Ji Hyun; Kim, Ji Yeon; Piao, Chunxian; Lee, Seonyeong; Kim, Bora; Song, Su Jeong; Choi, Joon Sig; Lee, Minhyung
2016-07-28
In this study, the efficacy of the high-mobility group box-1 box A (HMGB1A)/heparin complex was evaluated for the treatment of acute lung injury (ALI). HMGB1A is an antagonist against wild-type high-mobility group box-1 (wtHMGB1), a pro-inflammatory cytokine that is involved in ALIs. HMGB1A has positive charges and can be captured in the mucus layer after intratracheal administration. To enhance the delivery and therapeutic efficiency of HMGB1A, the HMGB1A/heparin complex was produced using electrostatic interactions, with the expectation that the nano-sized complex with a negative surface charge could efficiently penetrate the mucus layer. Additionally, heparin itself had an anti-inflammatory effect. Complex formation with HMGB1A and heparin was confirmed by atomic force microscopy. The particle size and surface charge of the HMGB1A/heparin complex at a 1:1 weight ratio were 113 nm and -25 mV, respectively. Intratracheal administration of the complex was performed into an ALI animal model. The results showed that the HMGB1A/heparin complex reduced pro-inflammatory cytokines, including tumor necrosis factor-α (TNF-α), interleukin-6 (IL-6), and IL-1β, more effectively than HMGB1A or heparin alone. Hematoxylin and eosin staining confirmed the decreased inflammatory reaction in the lungs after delivery of the HMGB1A/heparin complex. In conclusion, the HMGB1A/heparin complex might be useful to treat ALI. Copyright © 2016 Elsevier B.V. All rights reserved.
Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake
Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.
Efficient and Extensible Quasi-Explicit Modular Nonlinear Multiscale Battery Model: GH-MSMD
Kim, Gi-Heon; Smith, Kandler; Lawrence-Simon, Jake; ...
2017-03-24
Complex physics and long computation time hinder the adoption of computer aided engineering models in the design of large-format battery cells and systems. A modular, efficient battery simulation model -- the multiscale multidomain (MSMD) model -- was previously introduced to aid the scale-up of Li-ion material and electrode designs to complete cell and pack designs, capturing electrochemical interplay with 3-D electronic current pathways and thermal response. Here, this paper enhances the computational efficiency of the MSMD model using a separation of time-scales principle to decompose model field variables. The decomposition provides a quasi-explicit linkage between the multiple length-scale domains and thus reduces time-consuming nested iteration when solving model equations across multiple domains. In addition to particle-, electrode- and cell-length scales treated in the previous work, the present formulation extends to bus bar- and multi-cell module-length scales. We provide example simulations for several variants of GH electrode-domain models.
Energy Efficient Operation of Ammonia Refrigeration Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammed, Abdul Qayyum; Wenning, Thomas J; Sever, Franc
Ammonia refrigeration systems typically offer many energy efficiency opportunities because of their size and complexity. This paper develops a model for simulating single-stage ammonia refrigeration systems, describes common energy saving opportunities, and uses the model to quantify those opportunities. The simulation model uses data that are typically available during site visits to ammonia refrigeration plants and can be calibrated to actual consumption and performance data if available. Annual electricity consumption for a base-case ammonia refrigeration system is simulated. The model is then used to quantify energy savings for six specific energy efficiency opportunities: reduce refrigeration load, increase suction pressure, employ dual suction, decrease minimum head pressure set-point, increase evaporative condenser capacity, and reclaim heat. Methods and considerations for achieving each saving opportunity are discussed. The model captures synergistic effects that result when more than one component or parameter is changed. This methodology represents an effective method to model and quantify common energy saving opportunities in ammonia refrigeration systems. The results indicate the range of savings that might be expected from common energy efficiency opportunities.
Study of ecological compensation in complex river networks based on a mathematical model.
Wang, Xiao; Shen, Chunqi; Wei, Jun; Niu, Yong
2018-05-31
Transboundary water pollution has resulted in increasing conflicts between upstream and downstream administrative districts. Ecological compensation is an efficient means of restricting pollutant discharge and achieving sustainable utilization of water resources. The tri-provincial region of Taihu Basin is a typical river networks area. Pollutant flux across provincial boundaries in the Taihu Basin is hard to determine due to complex hydrologic and hydrodynamic conditions. In this study, ecological compensation estimation for the tri-provincial area based on a mathematical model is investigated for better environmental management. River discharge and water quality are predicted with the one-dimensional mathematical model and validated with field measurements. Different ecological compensation criteria are identified considering the notable regional discrepancy in sewage treatment costs. Finally, the total compensation payment is estimated. Our study indicates that Shanghai should be the receiver of payment from both Jiangsu and Zhejiang in 2013, with 305 million and 300 million CNY, respectively. Zhejiang also contributes more pollutants to Jiangsu, and the compensation to Jiangsu is estimated as 9.3 million CNY. The proposed ecological compensation method provides an efficient way for solving the transboundary conflicts in a complex river networks area and is instructive for future policy-making.
Kernel methods and flexible inference for complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Capobianco, Enrico
2008-07-01
Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.
Efficient physics-based tracking of heart surface motion for beating heart surgery robotic systems.
Bogatyrenko, Evgeniya; Pompey, Pascal; Hanebeck, Uwe D
2011-05-01
Tracking of beating heart motion in a robotic surgery system is required for complex cardiovascular interventions. A heart surface motion tracking method is developed, including a stochastic physics-based heart surface model and an efficient reconstruction algorithm. The algorithm uses the constraints provided by the model, which exploits the physical characteristics of the heart. The main advantage of the model is that it is more realistic than most standard heart models. Additionally, no explicit matching between the measurements and the model is required. The application of meshless methods significantly reduces the complexity of physics-based tracking. Based on the stochastic physical model of the heart surface, this approach considers the motion of the intervention area and is robust to occlusions and reflections. The tracking algorithm is evaluated in simulations and experiments on an artificial heart. Providing higher accuracy than the standard model-based methods, it successfully copes with occlusions and provides high performance even when not all measurements are available. Combining the physical and stochastic description of the heart surface motion ensures physically correct and accurate prediction. Automatic initialization of the physics-based cardiac motion tracking enables system evaluation in a clinical environment.
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
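The exact TEAD score and stopping rule are defined in the paper; the sketch below only illustrates the general hybrid idea in one dimension, combining an exploration term (distance to existing samples) with an exploitation term (disagreement between the surrogate and a first-order Taylor extrapolation). The toy simulator and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulator(x):
    """Stand-in for the expensive groundwater model."""
    return np.sin(3.0 * x) + 0.5 * x**2

def adaptive_design(n_init=4, n_add=10):
    X = np.linspace(0.0, 2.0, n_init)
    Y = simulator(X)
    for _ in range(n_add):
        order = np.argsort(X)
        Xs, Ys = X[order], Y[order]
        slope = np.gradient(Ys, Xs)                 # cheap local derivative estimate
        cand = rng.uniform(0.0, 2.0, 200)           # candidate pool
        near = np.array([np.argmin(np.abs(Xs - c)) for c in cand])
        dist = np.abs(cand - Xs[near])              # exploration term
        taylor = Ys[near] + slope[near] * (cand - Xs[near])
        resid = np.abs(np.interp(cand, Xs, Ys) - taylor)   # exploitation term
        score = dist / (dist.max() + 1e-12) + resid / (resid.max() + 1e-12)
        x_new = cand[np.argmax(score)]              # most informative candidate
        X = np.append(X, x_new)
        Y = np.append(Y, simulator(x_new))
    return X, Y

X, Y = adaptive_design()
```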
An Efficient Pattern Mining Approach for Event Detection in Multivariate Temporal Data
Batal, Iyad; Cooper, Gregory; Fradkin, Dmitriy; Harrison, James; Moerchen, Fabian; Hauskrecht, Milos
2015-01-01
This work proposes a pattern mining approach to learn event detection models from complex multivariate temporal data, such as electronic health records. We present Recent Temporal Pattern mining, a novel approach for efficiently finding predictive patterns for event detection problems. This approach first converts the time series data into time-interval sequences of temporal abstractions. It then constructs more complex time-interval patterns backward in time using temporal operators. We also present the Minimal Predictive Recent Temporal Patterns framework for selecting a small set of predictive and non-spurious patterns. We apply our methods for predicting adverse medical events in real-world clinical data. The results demonstrate the benefits of our methods in learning accurate event detection models, which is a key step for developing intelligent patient monitoring and decision support systems. PMID:26752800
Efficient solvers for coupled models in respiratory mechanics.
Verdugo, Francesc; Roth, Christian J; Yoshihara, Lena; Wall, Wolfgang A
2017-02-01
We present efficient preconditioners for one of the most physiologically relevant pulmonary models currently available. Our underlying motivation is to enable the efficient simulation of such a lung model on high-performance computing platforms in order to assess mechanical ventilation strategies and contributing to design more protective patient-specific ventilation treatments. The system of linear equations to be solved using the proposed preconditioners is essentially the monolithic system arising in fluid-structure interaction (FSI) extended by additional algebraic constraints. The introduction of these constraints leads to a saddle point problem that cannot be solved with usual FSI preconditioners available in the literature. The key ingredient in this work is to use the idea of the semi-implicit method for pressure-linked equations (SIMPLE) for getting rid of the saddle point structure, resulting in a standard FSI problem that can be treated with available techniques. The numerical examples show that the resulting preconditioners approach the optimal performance of multigrid methods, even though the lung model is a complex multiphysics problem. Moreover, the preconditioners are robust enough to deal with physiologically relevant simulations involving complex real-world patient-specific lung geometries. The same approach is applicable to other challenging biomedical applications where coupling between flow and tissue deformations is modeled with additional algebraic constraints. Copyright © 2016 John Wiley & Sons, Ltd.
Kinetic analysis of the effects of target structure on siRNA efficiency
NASA Astrophysics Data System (ADS)
Chen, Jiawen; Zhang, Wenbing
2012-12-01
RNAi efficiency for target cleavage and protein expression is related to the target structure. Considering the RNA-induced silencing complex (RISC) as a multiple turnover enzyme, we investigated the effect of target mRNA structure on siRNA efficiency with kinetic analysis. A 4-step model was used to study the target cleavage kinetic process: hybridization nucleation at an accessible target site, RISC-mRNA hybrid elongation along with mRNA target structure melting, target cleavage, and enzyme reactivation. In this model, the terms accounting for target accessibility, stability, and the seed and nucleation site effects are all included. The results are in good agreement with those of experiments, which report differing conclusions about the effects of structure on siRNA efficiency. It shows that siRNA efficiency is influenced by the combined factors of target accessibility, stability, and seed effects. To study the off-target effects, a simple model of one siRNA binding to two mRNA targets was designed. Using this model, the possibility of diminishing off-target effects by tuning the siRNA concentration was discussed.
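A four-step turnover cycle of this kind translates directly into mass-action ODEs. The sketch below integrates one with scipy; the rate constants and initial concentrations are illustrative, not the paper's fitted values, and the structure-dependent terms are folded into the elongation rate.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative four-step RISC turnover cycle:
# R + M -> C1 (nucleation at an accessible site), C1 -> C2 (elongation
# with target structure melting), C2 -> C3 (cleavage), C3 -> R + P
# (enzyme reactivation, releasing product)
k_nuc, k_elong, k_cleave, k_react = 0.5, 1.0, 2.0, 0.8

def rhs(t, y):
    R, M, C1, C2, C3, P = y
    v1 = k_nuc * R * M
    v2 = k_elong * C1
    v3 = k_cleave * C2
    v4 = k_react * C3
    return [-v1 + v4, -v1, v1 - v2, v2 - v3, v3 - v4, v4]

y0 = [0.1, 1.0, 0.0, 0.0, 0.0, 0.0]   # [RISC, mRNA, C1, C2, C3, cleaved]
sol = solve_ivp(rhs, (0.0, 50.0), y0)
cleaved_fraction = sol.y[5, -1] / y0[1]
```

Lowering k_elong (a stiffer target structure) or k_nuc (a less accessible site) in this toy cycle lowers the cleaved fraction at a fixed time, which is the qualitative behavior the kinetic analysis formalizes.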
Aufderheide, Helge; Rudolf, Lars; Gross, Thilo; Lafferty, Kevin D.
2013-01-01
Recent attempts to predict the response of large food webs to perturbations have revealed that in larger systems increasingly precise information on the elements of the system is required. Thus, the effort needed for good predictions grows quickly with the system's complexity. Here, we show that not all elements need to be measured equally well, suggesting that a more efficient allocation of effort is possible. We develop an iterative technique for determining an efficient measurement strategy. In model food webs, we find that it is most important to precisely measure the mortality and predation rates of long-lived, generalist, top predators. Prioritizing the study of such species will make it easier to understand the response of complex food webs to perturbations.
Optimization of protocol design: a path to efficient, lower cost clinical trial execution
Malikova, Marina A
2016-01-01
Managing clinical trials requires strategic planning and efficient execution. In order to achieve timely delivery of important clinical trials’ outcomes, it is useful to establish standardized trial management guidelines and develop a robust scoring methodology for evaluation of study protocol complexity. This review will explore the challenges clinical teams face in developing protocols to ensure that the right patients are enrolled and the right data are collected to demonstrate that a drug is safe and efficacious, while managing study costs and study complexity based on the proposed comprehensive scoring model. Key factors to consider when developing protocols, and techniques to minimize complexity, will be discussed. A methodology to identify processes at the planning phase, and approaches to increase fiscal return and mitigate fiscal compliance risk for clinical trials, will be addressed. PMID:28031939
Search for Directed Networks by Different Random Walk Strategies
NASA Astrophysics Data System (ADS)
Zhu, Zi-Qi; Jin, Xiao-Ling; Huang, Zhi-Long
2012-03-01
A comparative study is carried out on the efficiency of five different random walk strategies searching on directed networks constructed based on several typical complex networks. Due to the difference in search efficiency of the strategies rooted in network clustering, the clustering coefficient in a random walker's eye on directed networks is defined and computed to be half that of the corresponding undirected networks. The search processes are performed on directed networks based on the Erdős-Rényi model, Watts-Strogatz model, Barabási-Albert model and clustered scale-free network model. It is found that the self-avoiding random walk strategy is the best search strategy for such directed networks. Compared to the unrestricted random walk strategy, path-iteration-avoiding random walks can also make the search process much more efficient. However, no-triangle-loop and no-quadrangle-loop random walks do not improve the search efficiency as expected, which is different from the case on undirected networks, since the clustering coefficient of directed networks is smaller than that of undirected networks.
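A miniature version of the comparison is easy to run; the sketch below contrasts unrestricted and self-avoiding walkers on a random directed graph (networkx assumed), counting steps to reach a target node. The graph model and sizes are arbitrary choices for illustration.

```python
import random
import networkx as nx

random.seed(3)
G = nx.gnp_random_graph(200, 0.05, seed=3, directed=True)

def search_steps(g, source, target, self_avoiding=False, max_steps=10_000):
    """Steps a random walker needs to reach `target` from `source`.
    A self-avoiding walker prefers not-yet-visited out-neighbors."""
    node, visited = source, {source}
    for step in range(1, max_steps + 1):
        nbrs = list(g.successors(node))
        if not nbrs:
            return max_steps                 # dead end counts as failure
        if self_avoiding:
            fresh = [n for n in nbrs if n not in visited]
            node = random.choice(fresh if fresh else nbrs)
        else:
            node = random.choice(nbrs)
        visited.add(node)
        if node == target:
            return step
    return max_steps

pairs = [(random.randrange(200), random.randrange(200)) for _ in range(200)]
mean_rw = sum(search_steps(G, s, t) for s, t in pairs) / len(pairs)
mean_saw = sum(search_steps(G, s, t, self_avoiding=True) for s, t in pairs) / len(pairs)
```

On most draws the self-avoiding walker needs fewer mean steps, consistent with the ranking reported in the abstract.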
Effect of shoulder model complexity in upper-body kinematics analysis of the golf swing.
Bourgain, M; Hybois, S; Thoreux, P; Rouillon, O; Rouch, P; Sauret, C
2018-06-25
The golf swing is a complex full body movement during which the spine and shoulders are highly involved. In order to determine shoulder kinematics during this movement, multibody kinematics optimization (MKO) can be recommended to limit the effect of the soft tissue artifact and to avoid joint dislocations or bone penetration in reconstructed kinematics. Classically, in golf biomechanics research, the shoulder is represented by a 3 degrees-of-freedom model representing the glenohumeral joint. More complex and physiological models are already provided in the scientific literature. Particularly, the model used in this study was a full body model and also described motions of clavicles and scapulae. This study aimed at quantifying the effect of utilizing a more complex and physiological shoulder model when studying the golf swing. Results obtained on 20 golfers showed that a more complex and physiologically-accurate model can more efficiently track experimental markers, which resulted in differences in joint kinematics. Hence, the model with 3 degrees-of-freedom between the humerus and the thorax may be inadequate when combined with MKO and a more physiological model would be beneficial. Finally, results would also be improved through a subject-specific approach for the determination of the segment lengths. Copyright © 2018 Elsevier Ltd. All rights reserved.
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
Ewe, Alexander; Przybylski, Susanne; Burkhardt, Jana; Janke, Andreas; Appelhans, Dietmar; Aigner, Achim
2016-05-28
The delivery of nucleic acids, particularly of small RNA molecules like siRNAs for the induction of RNA interference (RNAi), still represents a major hurdle with regard to their application in vivo. Possible therapeutic applications thus rely on the development of efficient non-viral gene delivery vectors. While low molecular weight polyethylenimines (PEIs) have been successfully explored, the introduction of chemical modifications offers an avenue towards the development of more efficient vectors. In this paper, we describe the synthesis of a novel tyrosine-modified low-molecular weight polyethylenimine (P10Y) for efficient siRNA complexation and delivery. The comparison with the respective parent PEI reveals that knockdown efficacies are considerably enhanced by the tyrosine modification, as determined in different reporter cell lines, without appreciable cytotoxicity. We furthermore identify optimal conditions for complex preparation as well as for storing or lyophilization of the complexes without loss of biological activity. Beyond reporter cell lines, P10Y/siRNA complexes mediate the efficient knockdown of endogenous target genes and, upon knockdown of the anti-apoptotic oncogene survivin, tumor cell inhibitory effects in different carcinoma cell lines. Pushing the system further towards its therapeutic in vivo application, we demonstrate in mice the delivery of intact siRNAs and distinct biodistribution profiles upon systemic (intravenous or intraperitoneal) injection. No adverse effects (hepatotoxicity, immunostimulation/alterations in immunophenotype, weight loss) are observed. More importantly, profound tumor-inhibitory effects in a melanoma xenograft mouse model are observed upon systemic application of P10Y/siRNA complexes for survivin knockdown, indicating the therapeutic efficacy of P10Y/siRNA complexes. Taken together, we (i) establish tyrosine-modified PEI (P10Y) as efficient platform for siRNA delivery in vitro and in vivo, (ii) identify optimal preparation and storage conditions as well as (iii) physicochemical and biological properties of P10Y complexes, and (iv) demonstrate their applicability as siRNA therapeutic in vivo (v) in the absence of adverse effects. Copyright © 2016 Elsevier B.V. All rights reserved.
Computational Modeling of Liquid and Gaseous Control Valves
NASA Technical Reports Server (NTRS)
Daines, Russell; Ahuja, Vineet; Hosangadi, Ashvin; Shipman, Jeremy; Moore, Arden; Sulyma, Peter
2005-01-01
In this paper computational modeling efforts undertaken at NASA Stennis Space Center in support of rocket engine component testing are discussed. Such analyses include structurally complex cryogenic liquid valves and gas valves operating at high pressures and flow rates. Basic modeling and initial successes are documented, and other issues that make valve modeling at SSC somewhat unique are also addressed. These include transient behavior, valve stall, and the determination of flow patterns in LOX valves. Hexahedral structured grids are used for valves that can be simplified through the use of an axisymmetric approximation. Hybrid unstructured methodology is used for structurally complex valves that have disparate length scales and complex flow paths that include strong swirl, local recirculation zones/secondary flow effects. Hexahedral (structured), unstructured, and hybrid meshes are compared for accuracy and computational efficiency. Accuracy is determined using verification and validation techniques.
Intrinsic dimensionality predicts the saliency of natural dynamic scenes.
Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt
2012-06-01
Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.
Using ABAQUS Scripting Interface for Materials Evaluation and Life Prediction
NASA Technical Reports Server (NTRS)
Powers, Lynn M.; Arnold, Steven M.; Baranski, Andrzej
2006-01-01
An ABAQUS script has been written to aid in the evaluation of the mechanical behavior of viscoplastic materials. The purposes of the script are to: handle complex load histories; control load/displacement with alternate stopping criteria; predict failure and life; and verify constitutive models. Material models from the ABAQUS library may be used or the UMAT routine may specify mechanical behavior. User subroutines implemented include: UMAT for the constitutive model; UEXTERNALDB for file manipulation; DISP for boundary conditions; and URDFIL for results processing. Examples presented include load, strain and displacement control tests on a single element model. The tests are creep with a life limiting strain criterion, strain control with a stress limiting cycle and a complex interrupted cyclic relaxation test. The techniques implemented in this paper enable complex load conditions to be solved efficiently with ABAQUS.
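For flavor, a minimal post-processing loop in the Abaqus Python environment is sketched below using the odbAccess module. The job name, step name, and strain criterion are hypothetical, the peak is taken crudely over strain components rather than principal strains, and the script runs only under Abaqus' own Python interpreter (abaqus python), not standalone.

```python
# Run as: abaqus python read_peak_strain.py  (requires an Abaqus installation)
from odbAccess import openOdb

ODB_PATH = 'creep_test.odb'     # hypothetical job output database
STRAIN_LIMIT = 0.02             # assumed life-limiting strain criterion

odb = openOdb(ODB_PATH, readOnly=True)
step = odb.steps['Step-1']      # hypothetical step name
for frame in step.frames:
    field = frame.fieldOutputs['E']                 # total strain field output
    peak = max(max(v.data) for v in field.values)   # crude component-wise peak
    if peak >= STRAIN_LIMIT:
        print('strain limit reached at step time %s' % frame.frameValue)
        break
odb.close()
```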
A program code generator for multiphysics biological simulation using markup languages.
Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi
2012-01-01
To cope with the complexity of biological function simulation models, model representation with description languages is becoming popular. However, simulation software itself becomes complex in these environments; thus, it is difficult to modify the simulation conditions, target computation resources or calculation methods. Complex biological function simulation software involves 1) model equations, 2) boundary conditions and 3) calculation schemes. A model description file is useful for the first point and partly for the second; however, the third point is difficult to handle because various calculation schemes are required for simulation models constructed from two or more elementary models. We introduce a simulation software generation system which uses a description-language-based specification of the coupling calculation scheme together with a cell model description file. By using this software, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.
Mapping monomeric threading to protein-protein structure prediction.
Guerler, Aysam; Govindarajoo, Brandon; Zhang, Yang
2013-03-25
The key step of template-based protein-protein structure prediction is the recognition of complexes from experimental structure libraries that have similar quaternary fold. Maintaining two monomer and dimer structure libraries is however laborious, and inappropriate library construction can degrade template recognition coverage. We propose a novel strategy SPRING to identify complexes by mapping monomeric threading alignments to protein-protein interactions based on the original oligomer entries in the PDB, which does not rely on library construction and increases the efficiency and quality of complex template recognitions. SPRING is tested on 1838 nonhomologous protein complexes which can recognize correct quaternary template structures with a TM score >0.5 in 1115 cases after excluding homologous proteins. The average TM score of the first model is 60% and 17% higher than that by HHsearch and COTH, respectively, while the number of targets with an interface RMSD <2.5 Å by SPRING is 134% and 167% higher than these competing methods. SPRING is controlled with ZDOCK on 77 docking benchmark proteins. Although the relative performance of SPRING and ZDOCK depends on the level of homology filters, a combination of the two methods can result in a significantly higher model quality than ZDOCK at all homology thresholds. These data demonstrate a new efficient approach to quaternary structure recognition that is ready to use for genome-scale modeling of protein-protein interactions due to the high speed and accuracy.
Surfing on Protein Waves: Proteophoresis as a Mechanism for Bacterial Genome Partitioning
NASA Astrophysics Data System (ADS)
Walter, J.-C.; Dorignac, J.; Lorman, V.; Rech, J.; Bouet, J.-Y.; Nollmann, M.; Palmeri, J.; Parmeggiani, A.; Geniet, F.
2017-07-01
Efficient bacterial chromosome segregation typically requires the coordinated action of a three-component machinery, fueled by adenosine triphosphate, called the partition complex. We present a phenomenological model accounting for the dynamic activity of this system that is also relevant for the physics of catalytic particles in active environments. The model is obtained by coupling simple linear reaction-diffusion equations with a proteophoresis, or "volumetric" chemophoresis, force field that arises from protein-protein interactions and provides a physically viable mechanism for complex translocation. This minimal description captures most known experimental observations: dynamic oscillations of complex components, complex separation, and subsequent symmetrical positioning. The predictions of our model are in phenomenological agreement with and provide substantial insight into recent experiments. From a nonlinear physics viewpoint, this system explores the active separation of matter at micrometric scales with a dynamical instability between static positioning and traveling wave regimes triggered by the dynamical spontaneous breaking of rotational symmetry.
Wind tunnel investigation of a high lift system with pneumatic flow control
NASA Astrophysics Data System (ADS)
Victor, Pricop Mihai; Mircea, Boscoianu; Daniel-Eugeniu, Crunteanu
2016-06-01
Next-generation passenger aircraft require more efficient high lift systems under size and mass constraints in order to achieve better fuel efficiency. This can be obtained in various ways: improving or maintaining aerodynamic performance while simplifying the mechanical design of the high lift system by moving to a single slotted flap, maintaining complexity while improving the aerodynamics even further, etc. Laminar wings have less efficient leading edge high lift systems, if any, requiring more performance from the trailing edge flap. Pulsed blowing active flow control (AFC) in the gap of a single-element flap is investigated for a relatively large model. The wind tunnel model, the test campaign, the results, and the conclusions are presented.
NASA Astrophysics Data System (ADS)
Liu, Y.; Zheng, L.; Pau, G. S. H.
2016-12-01
A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there is an increasing need to consider and address uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROM), which serve as computationally efficient surrogates for high-fidelity models (HFM), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of the National Risk Assessment Program (NRAP) and is intended to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM from which the ROM is derived is a multiphase flow and reactive transport model, with a 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation, and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and sparsity is obtained by detecting and including only the non-zero PCE coefficients one at a time, iteratively selecting the most contributing coefficients. The computational complexity of predicting the entire 3-D concentration fields is further mitigated by a dimension reduction procedure, proper orthogonal decomposition (POD). Our numerical results show that utilizing the sparse structure and POD significantly enhances the accuracy and efficiency of the ROMs, laying the basis for further analyses that necessitate a large number of model simulations.
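To make the dimension-reduction step above concrete, here is a minimal sketch of POD computed from a snapshot matrix via the thin SVD. This is the generic textbook construction in Python/NumPy, not the NRAP ROM itself; the array sizes, the random stand-in snapshots, and the 99% energy cutoff are illustrative assumptions.

```python
import numpy as np

# Snapshot matrix: each column is one high-fidelity model output
# (e.g., a flattened 3-D concentration field). Sizes are illustrative.
n_cells, n_runs = 5000, 200
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((n_cells, n_runs))  # stand-in for HFM results

# Center the snapshots, then compute the POD basis via the thin SVD.
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

# Keep enough modes to capture 99% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :k]  # reduced basis, n_cells x k

# A full field is now represented by k coefficients instead of n_cells
# values; the ROM (e.g., a sparse PCE) only has to predict these k numbers.
coeffs = basis.T @ (snapshots[:, :1] - mean_field)
reconstruction = mean_field + basis @ coeffs
print(k, float(np.linalg.norm(reconstruction - snapshots[:, :1])))
```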
Steel, Jason C; Cavanagh, Heather M A; Burton, Mark A; Abu-Asab, Mones S; Tsokos, Maria; Morris, John C; Kalle, Wouter H J
2007-04-01
We aimed to increase the efficiency of adenoviral vectors by limiting adenoviral spread from the target site and reducing unwanted host immune responses to the vector. We complexed adenoviral vectors with DDAB-DOPE liposomes to form adenovirus-liposomal (AL) complexes. AL complexes were delivered by intratumoral injection in an immunocompetent subcutaneous rat tumor model, and the immunogenicity of the AL complexes and the expression efficiency in the tumor and other organs were examined. Animals treated with the AL complexes had significantly lower levels of beta-galactosidase expression in systemic tissues compared to animals treated with the naked adenovirus (NA) (P<0.05). The tumor to non-tumor ratio of beta-galactosidase marker expression was significantly higher for the AL complex treated animals. NA induced significantly higher titers of adenoviral-specific antibodies compared to the AL complexes (P<0.05). The AL complexes provided protection (immunoshielding) to the adenovirus from neutralizing antibody. Forty-seven percent more beta-galactosidase expression was detected following intratumoral injection with AL complexes compared to the NA in animals pre-immunized with adenovirus. Complexing of adenovirus with liposomes provides a simple method to enhance tumor localization of the vector, decrease the immunogenicity of adenovirus, and provide protection of the virus from pre-existing neutralizing antibodies.
NASA Astrophysics Data System (ADS)
Kraus, E. I.; Shabalin, I. I.; Shabalin, T. I.
2018-04-01
The main points in the development of numerical tools for simulating the deformation and failure of complex technical objects under nonstationary conditions of extreme loading are presented. The possibility of extending the dynamic method for constructing difference grids to the 3D case is shown. A 3D realization of the discrete-continuum approach to the deformation and failure of complex technical objects is carried out. The efficiency of the existing software package for 3D modelling is shown.
Unstructured mesh algorithms for aerodynamic calculations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principal advantages of unstructured mesh strategies, as they relate to complex geometries, adaptive meshing capabilities, and parallel processing, are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed. These include mesh generation, mesh adaptivity, solution algorithms, convergence acceleration, and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.
Using circuit theory to model connectivity in ecology, evolution, and conservation.
McRae, Brad H; Dickson, Brett G; Keitt, Timothy H; Shah, Viral B
2008-10-01
Connectivity among populations and habitats is important for a wide range of ecological processes. Understanding, preserving, and restoring connectivity in complex landscapes requires connectivity models and metrics that are reliable, efficient, and process based. We introduce a new class of ecological connectivity models based in electrical circuit theory. Although they have been applied in other disciplines, circuit-theoretic connectivity models are new to ecology. They offer distinct advantages over common analytic connectivity models, including a theoretical basis in random walk theory and an ability to evaluate contributions of multiple dispersal pathways. Resistance, current, and voltage calculated across graphs or raster grids can be related to ecological processes (such as individual movement and gene flow) that occur across large population networks or landscapes. Efficient algorithms can quickly solve networks with millions of nodes, or landscapes with millions of raster cells. Here we review basic circuit theory, discuss relationships between circuit and random walk theories, and describe applications in ecology, evolution, and conservation. We provide examples of how circuit models can be used to predict movement patterns and fates of random walkers in complex landscapes and to identify important habitat patches and movement corridors for conservation planning.
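As a minimal illustration of the circuit-theoretic idea, the sketch below computes the effective resistance between two habitat nodes from the pseudoinverse of the graph Laplacian; lower effective resistance corresponds to higher connectivity through multiple parallel dispersal pathways. The toy conductance matrix is invented for illustration, and production tools use far more scalable solvers than a dense pseudoinverse.

```python
import numpy as np

def effective_resistance(conductance: np.ndarray, i: int, j: int) -> float:
    """Effective resistance between nodes i and j of a resistor network.

    `conductance` is a symmetric matrix of conductances (1/resistance)
    between habitat nodes; zero means no direct connection. Lower effective
    resistance implies higher landscape connectivity between the nodes.
    """
    L = np.diag(conductance.sum(axis=1)) - conductance  # graph Laplacian
    L_pinv = np.linalg.pinv(L)                          # Moore-Penrose pseudoinverse
    return float(L_pinv[i, i] + L_pinv[j, j] - 2.0 * L_pinv[i, j])

# Toy landscape: 4 patches, two parallel paths between patch 0 and patch 3.
conductance = np.array([
    [0.0, 1.0, 1.0, 0.0],
    [1.0, 0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 1.0, 0.0],
])
# Two unit-resistance paths of length 2 in parallel -> 1.0
print(effective_resistance(conductance, 0, 3))
```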
Plant metabolic modeling: achieving new insight into metabolism and metabolic engineering.
Baghalian, Kambiz; Hajirezaei, Mohammad-Reza; Schreiber, Falk
2014-10-01
Models are used to represent aspects of the real world for specific purposes, and mathematical models have opened up new approaches in studying the behavior and complexity of biological systems. However, modeling is often time-consuming and requires significant computational resources for data development, data analysis, and simulation. Computational modeling has been successfully applied as an aid for metabolic engineering in microorganisms. But such model-based approaches have only recently been extended to plant metabolic engineering, mainly due to greater pathway complexity in plants and their highly compartmentalized cellular structure. Recent progress in plant systems biology and bioinformatics has begun to disentangle this complexity and facilitate the creation of efficient plant metabolic models. This review highlights several aspects of plant metabolic modeling in the context of understanding, predicting and modifying complex plant metabolism. We discuss opportunities for engineering photosynthetic carbon metabolism, sucrose synthesis, and the tricarboxylic acid cycle in leaves and oil synthesis in seeds and the application of metabolic modeling to the study of plant acclimation to the environment. The aim of the review is to offer a current perspective for plant biologists without requiring specialized knowledge of bioinformatics or systems biology. © 2014 American Society of Plant Biologists. All rights reserved.
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory-efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
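The sketch below is a simplified, hypothetical reading of the ripple-spreading idea: each node emits a circular ripple whose amplitude decays with distance, and a link forms wherever a ripple still exceeds the receiving node's threshold on arrival. With fixed inputs the topology is uniquely determined, while randomizing the ripple-related parameters recovers the stochastic flavor described in point (ii); the decay law and parameter ranges here are assumptions, not the paper's exact rules.

```python
import numpy as np

def ripple_spreading_network(xy, energy, threshold):
    """Deterministic ripple-spreading network (a simplified sketch).

    Node i links to node j when i's ripple amplitude, modeled here as
    energy[i] / distance, still exceeds j's response threshold on arrival.
    Given fixed inputs, the resulting topology is unique.
    """
    n = len(xy)
    adjacency = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = np.linalg.norm(xy[i] - xy[j])
                adjacency[i, j] = energy[i] / r > threshold[j]
    return adjacency

rng = np.random.default_rng(1)
n = 50
xy = rng.uniform(0.0, 10.0, size=(n, 2))   # node locations
energy = rng.uniform(1.0, 3.0, size=n)     # ripple energies (random init)
threshold = rng.uniform(0.5, 1.0, size=n)  # node thresholds (random init)
A = ripple_spreading_network(xy, energy, threshold)
print("links:", int(A.sum()))
```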
Plant Metabolic Modeling: Achieving New Insight into Metabolism and Metabolic Engineering
Baghalian, Kambiz; Hajirezaei, Mohammad-Reza; Schreiber, Falk
2014-01-01
Models are used to represent aspects of the real world for specific purposes, and mathematical models have opened up new approaches in studying the behavior and complexity of biological systems. However, modeling is often time-consuming and requires significant computational resources for data development, data analysis, and simulation. Computational modeling has been successfully applied as an aid for metabolic engineering in microorganisms. But such model-based approaches have only recently been extended to plant metabolic engineering, mainly due to greater pathway complexity in plants and their highly compartmentalized cellular structure. Recent progress in plant systems biology and bioinformatics has begun to disentangle this complexity and facilitate the creation of efficient plant metabolic models. This review highlights several aspects of plant metabolic modeling in the context of understanding, predicting and modifying complex plant metabolism. We discuss opportunities for engineering photosynthetic carbon metabolism, sucrose synthesis, and the tricarboxylic acid cycle in leaves and oil synthesis in seeds and the application of metabolic modeling to the study of plant acclimation to the environment. The aim of the review is to offer a current perspective for plant biologists without requiring specialized knowledge of bioinformatics or systems biology. PMID:25344492
Modeling of substrate and inhibitor binding to phospholipase A2.
Sessions, R B; Dauber-Osguthorpe, P; Campbell, M M; Osguthorpe, D J
1992-09-01
Molecular graphics and molecular mechanics techniques have been used to study the mode of ligand binding and mechanism of action of the enzyme phospholipase A2. A substrate-enzyme complex was constructed based on the crystal structure of the apoenzyme. The complex was minimized to relieve initial strain, and the structural and energetic features of the resultant complex analyzed in detail, at the molecular and residue level. The minimized complex was then used as a basis for examining the action of the enzyme on modified substrates, binding of inhibitors to the enzyme, and possible reaction intermediate complexes. The model is compatible with the suggested mechanism of hydrolysis and with experimental data about stereoselectivity, efficiency of hydrolysis of modified substrates, and inhibitor potency. In conclusion, the model can be used as a tool in evaluating new ligands as possible substrates and in the rational design of inhibitors, for the therapeutic treatment of diseases such as rheumatoid arthritis, atherosclerosis, and asthma.
Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun
2017-01-01
With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray information and texture features of docked ships and their connected dock regions are indistinguishable, most popular detection methods are limited in their computational efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) was employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. PMID:28640236
Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun
2017-06-22
With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray information and texture features of docked ships and their connected dock regions are indistinguishable, most popular detection methods are limited in their computational efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) was employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency.
Liu, Qi; Yang, Yu; Chen, Chun; Bu, Jiajun; Zhang, Yin; Ye, Xiuzi
2008-03-31
With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences with their secondary structures simultaneously. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are twofold: (1) to present a robust and effective way for RNA structural data compression; (2) to design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool for academic users. Extensive tests have shown that RNACompress is a universally efficient algorithm for the compression of RNA sequences with their secondary structures. RNACompress also serves as a good measure of the informational complexity of RNA secondary structure, which can be used to study the functional activities of RNA molecules.
Liu, Qi; Yang, Yu; Chen, Chun; Bu, Jiajun; Zhang, Yin; Ye, Xiuzi
2008-01-01
Background With the rapid emergence of RNA databases and newly identified non-coding RNAs, an efficient compression algorithm for RNA sequence and structural information is needed for the storage and analysis of such data. Although several algorithms for compressing DNA sequences have been proposed, none of them are suitable for the compression of RNA sequences with their secondary structures simultaneously. This kind of compression not only facilitates the maintenance of RNA data, but also supplies a novel way to measure the informational complexity of RNA structural data, raising the possibility of studying the relationship between the functional activities of RNA structures and their complexities, as well as various structural properties of RNA based on compression. Results RNACompress employs an efficient grammar-based model to compress RNA sequences and their secondary structures. The main goals of this algorithm are twofold: (1) to present a robust and effective way for RNA structural data compression; (2) to design a suitable model to represent RNA secondary structure as well as derive the informational complexity of the structural data based on compression. Our extensive tests have shown that RNACompress achieves a universally better compression ratio compared with other sequence-specific or common text-specific compression algorithms, such as Gencompress, winrar and gzip. Moreover, a test of the activities of distinct GTP-binding RNAs (aptamers) compared with their structural complexity shows that our defined informational complexity can be used to describe how complexity varies with activity. These results lead to an objective means of comparing the functional properties of heteropolymers from the information perspective. Conclusion A universal algorithm for the compression of RNA secondary structure as well as the evaluation of its informational complexity is discussed in this paper. We have developed RNACompress, as a useful tool for academic users. Extensive tests have shown that RNACompress is a universally efficient algorithm for the compression of RNA sequences with their secondary structures. RNACompress also serves as a good measure of the informational complexity of RNA secondary structure, which can be used to study the functional activities of RNA molecules. PMID:18373878
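To make the compression-as-complexity idea concrete, the sketch below scores a sequence/structure pair by its compressed size in bits per base. It deliberately substitutes the general-purpose zlib codec for RNACompress's grammar-based model, so the numbers only illustrate the measurement concept, not the published compression ratios.

```python
import zlib

def compression_complexity(sequence: str, structure: str) -> float:
    """Compression-based informational complexity, in bits per base.

    RNACompress itself uses a grammar-based model; off-the-shelf zlib
    stands in here purely to illustrate the idea that a better-compressed
    structure is, in this sense, less complex.
    """
    payload = (sequence + "\n" + structure).encode("ascii")
    compressed = zlib.compress(payload, level=9)
    return 8.0 * len(compressed) / len(sequence)

# Sequence plus dot-bracket secondary structure (a toy hairpin).
seq = "GGGAAACCC"
db = "(((...)))"
print(round(compression_complexity(seq, db), 2))
```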
Seniuk, Olga F; Gorovoj, Leontiy F; Beketova, Galina V; Savichuk, Hatalia O; Rytik, Petr G; Kucherov, Igor I; Prilutskay, Alla B; Prilutsky, Alexandr I
2011-01-01
The goal of this investigation was to comparatively study the efficiency of traditionally used anti-infective drugs and biopolymer complexes originating from the medicinal mushroom Fomes fomentarius (L.:Fr.) Fr.: 1) water-soluble melanin-glucan complex (MGC; ~80% melanins and ~20% beta-glucans) and 2) insoluble chitin-glucan-melanin complex (ChGMC; ~70% chitin, ~20% beta-glucans, and ~10% melanins). Infectious materials (Helicobacter pylori, Candida albicans, Herpes vulgaris I, and HIV-1(zmb)) were used in pure cultures of in vitro and in vivo models on experimental animals. Comparison studies of fungal biopolymers and effective modern antifungal, antibacterial, and antiviral drugs were used in in vitro models. The comparative clinical efficiency of ChGMC and of etiotropic pharmaceuticals in models of H. pylori, C. albicans, and H. vulgaris I infection contamination was studied. Using in vitro models, it was established that MGC completely depresses growth of C. albicans. MGC had an antimicrobial effect on H. pylori identical to that of erythromycin at all concentrations, and had a stronger action on this bacterium than the other tested antibiotics. The tested MGC simultaneously possesses weak toxicity and high anti-HIV-1 activity in comparison with zidovudine (Retrovir). The obtained results show that CLUDDT therapy in Wistar rats with the application of ChGMC is, on average, 1.35-1.43 times as effective as a traditional one. Considering the absence of toxic effects of MGC and ChGMC on blood cells even at very high concentrations, these complexes may be used as a source of biopolymers for the creation of essentially new agents for wide application in infectious pathology.
Sampling and modeling riparian forest structure and riparian microclimate
Bianca N.I. Eskelson; Paul D. Anderson; Hailemariam Temesgen
2013-01-01
Riparian areas are extremely variable and dynamic, and represent some of the most complex terrestrial ecosystems in the world. The high variability within and among riparian areas poses challenges in developing efficient sampling and modeling approaches that accurately quantify riparian forest structure and riparian microclimate. Data from eight stream reaches that are...
Software Surface Modeling and Grid Generation Steering Committee
NASA Technical Reports Server (NTRS)
Smith, Robert E. (Editor)
1992-01-01
It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on for example, complex geostatistical models and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution), than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
Modaresi, Seyed Mohamad Sadegh; Faramarzi, Mohammad Ali; Soltani, Arash; Baharifar, Hadi; Amani, Amir
2014-01-01
Streptokinase is a potent fibrinolytic agent which is widely used in treatment of deep vein thrombosis (DVT), pulmonary embolism (PE) and acute myocardial infarction (MI). Major limitation of this enzyme is its short biological half-life in the blood stream. Our previous report showed that complexing streptokinase with chitosan could be a solution to overcome this limitation. The aim of this research was to establish an artificial neural networks (ANNs) model for identifying main factors influencing the loading efficiency of streptokinase, as an essential parameter determining efficacy of the enzyme. Three variables, namely, chitosan concentration, buffer pH and enzyme concentration were considered as input values and the loading efficiency was used as output. Subsequently, the experimental data were modeled and the model was validated against a set of unseen data. The developed model indicated chitosan concentration as probably the most important factor, having reverse effect on the loading efficiency. PMID:25587327
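A rough sketch of the modeling setup described above, using scikit-learn in place of whatever ANN toolkit the authors used: a small multilayer perceptron maps the three inputs to loading efficiency, and a one-at-a-time perturbation probes each input's influence. All training values, units, and the network size are invented placeholders, not the study's data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: columns are chitosan concentration (%w/v),
# buffer pH, and enzyme concentration (IU/ml); target is loading
# efficiency (%). All values are invented placeholders.
X = np.array([[0.05, 5.0, 500], [0.10, 5.5, 1000], [0.20, 6.0, 1500],
              [0.05, 6.5, 2000], [0.10, 7.0, 500], [0.20, 5.0, 1000]],
             dtype=float)
y = np.array([62.0, 71.0, 55.0, 48.0, 66.0, 58.0])

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0),
)
model.fit(X, y)

# One-at-a-time probe in the spirit of the paper: perturb each input by
# 10% and watch how the predicted loading efficiency moves.
base = np.array([[0.10, 6.0, 1000.0]])
for k, name in enumerate(["chitosan", "pH", "enzyme"]):
    hi = base.copy()
    hi[0, k] *= 1.1
    delta = (model.predict(hi) - model.predict(base)).item()
    print(f"{name}: {delta:+.2f}")
```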
Coupling HYDRUS-1D Code with PA-DDS Algorithms for Inverse Calibration
NASA Astrophysics Data System (ADS)
Wang, Xiang; Asadzadeh, Masoud; Holländer, Hartmut
2017-04-01
Numerical modelling requires calibration before it can be used to predict future states. A standard approach is inverse calibration, where multi-objective optimization algorithms are generally used to find a solution, e.g. an optimal set of van Genuchten-Mualem (VGM) parameters to predict water fluxes in the vadose zone. We coupled HYDRUS-1D with PA-DDS to add a new, robust function for inverse calibration to the model. The PA-DDS method is a recently developed multi-objective optimization algorithm which combines Dynamically Dimensioned Search (DDS) and the Pareto Archived Evolution Strategy (PAES). The results were compared to a standard method (the Marquardt-Levenberg method) implemented in HYDRUS-1D. Calibration performance was evaluated using observed and simulated soil moisture at two soil layers in southern Abbotsford, British Columbia, Canada, in terms of the root mean squared error (RMSE) and the Nash-Sutcliffe Efficiency (NSE). Results showed low RMSE values of 0.014 and 0.017 and strong NSE values of 0.961 and 0.939. Compared to the results of the Marquardt-Levenberg method, we obtained better calibration results for the deeper soil sensors. The resulting VGM parameters were similar to those of previous studies, and both methods are equally computationally efficient. We argue that a direct implementation of PA-DDS into HYDRUS-1D should reduce the computational effort further. Thus, the PA-DDS method is efficient for calibrating recharge in complex vadose zone models with multiple soil layers and can be a potential tool for the calibration of heat and solute transport. Future work should focus on the effectiveness of PA-DDS for calibrating more complex versions of the model, with more soil layers, and against measured heat and solute transport. Keywords: Recharge, Calibration, HYDRUS-1D, Multi-objective Optimization
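The two performance metrics used above are easy to state precisely; a minimal NumPy version follows, with illustrative soil-moisture values rather than the study's data.

```python
import numpy as np

def rmse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Root mean squared error between observed and simulated soil moisture."""
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; values below 0 mean
    the model is worse than simply predicting the observed mean."""
    return float(1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2))

obs = np.array([0.21, 0.24, 0.30, 0.27, 0.22])  # illustrative soil moisture
sim = np.array([0.22, 0.23, 0.28, 0.28, 0.23])
print(rmse(obs, sim), nse(obs, sim))
```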
NASA Astrophysics Data System (ADS)
Pajusalu, Mihkel; Kunz, Ralf; Rätsep, Margus; Timpmann, Kõu; Köhler, Jürgen; Freiberg, Arvi
2015-11-01
Bacterial light-harvesting pigment-protein complexes are very efficient at converting photons into excitons and transferring them to reaction centers, where the energy is stored in a chemical form. Optical properties of the complexes are known to change significantly in time and also vary from one complex to another; therefore, a detailed understanding of the variations on the level of single complexes and how they accumulate into effects that can be seen on the macroscopic scale is required. While experimental and theoretical methods exist to study the spectral properties of light-harvesting complexes on both individual complex and bulk ensemble levels, they have been developed largely independently of each other. To fill this gap, we simultaneously analyze experimental low-temperature single-complex and bulk ensemble optical spectra of the light-harvesting complex-2 (LH2) chromoproteins from the photosynthetic bacterium Rhodopseudomonas acidophila in order to find a unique theoretical model consistent with both experimental situations. The model, which satisfies most of the observations, combines strong exciton-phonon coupling with significant disorder, characteristic of the proteins. We establish a detailed disorder model that, in addition to containing a C2-symmetrical modulation of the site energies, distinguishes between static intercomplex and slow conformational intracomplex disorders. The model evaluations also verify that, despite best efforts, the single-LH2-complex measurements performed so far may be biased toward complexes with higher Huang-Rhys factors.
Pajusalu, Mihkel; Kunz, Ralf; Rätsep, Margus; Timpmann, Kõu; Köhler, Jürgen; Freiberg, Arvi
2015-01-01
Bacterial light-harvesting pigment-protein complexes are very efficient at converting photons into excitons and transferring them to reaction centers, where the energy is stored in a chemical form. Optical properties of the complexes are known to change significantly in time and also vary from one complex to another; therefore, a detailed understanding of the variations on the level of single complexes and how they accumulate into effects that can be seen on the macroscopic scale is required. While experimental and theoretical methods exist to study the spectral properties of light-harvesting complexes on both individual complex and bulk ensemble levels, they have been developed largely independently of each other. To fill this gap, we simultaneously analyze experimental low-temperature single-complex and bulk ensemble optical spectra of the light-harvesting complex-2 (LH2) chromoproteins from the photosynthetic bacterium Rhodopseudomonas acidophila in order to find a unique theoretical model consistent with both experimental situations. The model, which satisfies most of the observations, combines strong exciton-phonon coupling with significant disorder, characteristic of the proteins. We establish a detailed disorder model that, in addition to containing a C2-symmetrical modulation of the site energies, distinguishes between static intercomplex and slow conformational intracomplex disorders. The model evaluations also verify that, despite best efforts, the single-LH2-complex measurements performed so far may be biased toward complexes with higher Huang-Rhys factors.
Developing an active implementation model for a chronic disease management program.
Smidth, Margrethe; Christensen, Morten Bondo; Olesen, Frede; Vedsted, Peter
2013-04-01
Introduction and diffusion of new disease management programs in healthcare is usually slow, but active theory-driven implementation seems to outperform other implementation strategies. However, we have only scarce evidence on the feasibility and real effect of such strategies in complex primary care settings where municipalities, general practitioners and hospitals should work together. The Central Denmark Region recently implemented a disease management program for chronic obstructive pulmonary disease (COPD) which presented an opportunity to test an active implementation model against the usual implementation model. The aim of the present paper is to describe the development of an active implementation model using the Medical Research Council's model for complex interventions and the Chronic Care Model. We used the Medical Research Council's five-stage model for developing complex interventions to design an implementation model for a disease management program for COPD. First, literature on implementing change in general practice was scrutinised and empirical knowledge was assessed for suitability. In phase I, the intervention was developed; and in phases II and III, it was tested in a block- and cluster-randomised study. In phase IV, we evaluated the feasibility for others to use our active implementation model. The Chronic Care Model was identified as a model for designing efficient implementation elements. These elements were combined into a multifaceted intervention, and a timeline for the trial in a randomised study was decided upon in accordance with the five stages in the Medical Research Council's model; this was captured in a PaTPlot, which allowed us to focus on the structure and the timing of the intervention. The implementation strategies identified as efficient were use of the Breakthrough Series, academic detailing, provision of patient material and meetings between providers. The active implementation model was tested in a randomised trial (results reported elsewhere). The combination of the theoretical model for complex interventions and the Chronic Care Model and the chosen specific implementation strategies proved feasible for a practice-based active implementation model for a chronic-disease-management-program for COPD. Using the Medical Research Council's model added transparency to the design phase which further facilitated the process of implementing the program. http://www.clinicaltrials.gov/(NCT01228708).
Drawert, Brian; Engblom, Stefan; Hellander, Andreas
2012-06-22
Experiments in silico using stochastic reaction-diffusion models have emerged as an important tool in molecular systems biology. Designing computational software for such applications poses several challenges. Firstly, realistic lattice-based modeling for biological applications requires a consistent way of handling complex geometries, including curved inner and outer boundaries. Secondly, spatiotemporal stochastic simulations are computationally expensive due to the fast time scales of individual reaction and diffusion events when compared to the biological phenomena of actual interest. We therefore argue that simulation software needs to be both computationally efficient, employing sophisticated algorithms, and at the same time flexible in order to meet present and future needs of increasingly complex biological modeling. We have developed URDME, a flexible software framework for general stochastic reaction-transport modeling and simulation. URDME uses Unstructured triangular and tetrahedral meshes to resolve general geometries, and relies on the Reaction-Diffusion Master Equation formalism to model the processes under study. An interface to a mature geometry and mesh handling external software (Comsol Multiphysics) provides for a stable and interactive environment for model construction. The core simulation routines are logically separated from the model building interface and written in a low-level language for computational efficiency. The connection to the geometry handling software is realized via a Matlab interface which facilitates script computing, data management, and post-processing. For practitioners, the software therefore behaves much as an interactive Matlab toolbox. At the same time, it is possible to modify and extend URDME with newly developed simulation routines. Since the overall design effectively hides the complexity of managing the geometry and meshes, this means that newly developed methods may be tested in a realistic setting already at an early stage of development. In this paper we demonstrate, in a series of examples with high relevance to the molecular systems biology community, that the proposed software framework is a useful tool for both practitioners and developers of spatial stochastic simulation algorithms. Through the combined efforts of algorithm development and improved modeling accuracy, increasingly complex biological models become feasible to study through computational methods. URDME is freely available at http://www.urdme.org.
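For readers new to the Reaction-Diffusion Master Equation formalism that URDME builds on, here is a bare-bones sketch using Gillespie's direct method on a 1-D chain of voxels: one species hops between neighboring voxels and decays. It is vastly simpler than URDME (no unstructured mesh, one species, one reaction), and all rates and molecule counts are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, d, k = 10, 1.0, 0.1          # voxels, hop rate, decay rate (arbitrary)
state = np.zeros(n_vox, dtype=int)
state[0] = 100                      # all molecules start in the first voxel
t, t_end = 0.0, 5.0

while t < t_end and state.sum() > 0:
    hop_left = d * state.astype(float);  hop_left[0] = 0.0   # no hop off the left end
    hop_right = d * state.astype(float); hop_right[-1] = 0.0 # no hop off the right end
    decay = k * state.astype(float)
    rates = np.concatenate([hop_left, hop_right, decay])
    total = rates.sum()
    t += rng.exponential(1.0 / total)            # Gillespie direct method: next event time
    event = rng.choice(rates.size, p=rates / total)
    voxel = event % n_vox
    if event < n_vox:                            # hop left
        state[voxel] -= 1; state[voxel - 1] += 1
    elif event < 2 * n_vox:                      # hop right
        state[voxel] -= 1; state[voxel + 1] += 1
    else:                                        # decay
        state[voxel] -= 1

print(round(t, 3), state)
```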
Automated dynamic analytical model improvement for damped structures
NASA Technical Reports Server (NTRS)
Fuh, J. S.; Berman, A.
1985-01-01
A method is described to improve a linear nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, without involving eigensolutions or inversion of a large matrix.
Intelligent classifier for dynamic fault patterns based on hidden Markov model
NASA Astrophysics Data System (ADS)
Xu, Bo; Feng, Yuguang; Yu, Jinsong
2006-11-01
It is difficult to build precise mathematical models for complex engineering systems because of their structural complexity and dynamic characteristics. Intelligent fault diagnosis introduces artificial intelligence and works in a different way, without building an analytical mathematical model of the diagnostic object, so it is a practical approach to solving the diagnostic problems of complex systems. This paper presents an intelligent fault diagnosis method: an integrated fault-pattern classifier based on the Hidden Markov Model (HMM). The classifier consists of the dynamic time warping (DTW) algorithm, a self-organizing feature mapping (SOFM) network, and a Hidden Markov Model. First, after the dynamic observation vector in measuring space is processed by DTW, an error vector containing the fault features of the system under test is obtained. Then a SOFM network is used as a feature extractor and vector quantization processor. Finally, fault diagnosis is realized by classifying fault patterns with the Hidden Markov Model classifier. The introduction of dynamic time warping solves the problem of extracting features from the dynamic process vectors of complex systems such as aeroengines, and makes it possible to diagnose complex systems using dynamic process information. Simulation experiments show that the diagnosis model is easy to extend, and that the fault-pattern classifier is efficient and convenient for detecting and diagnosing new faults.
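Of the three components, dynamic time warping is the most self-contained; a textbook implementation follows, with toy signals standing in for real aeroengine process vectors. The SOFM and HMM stages are not sketched here.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping distance between two 1-D sequences.

    Aligns dynamic process vectors of different lengths/speeds before
    feature extraction, as in the fault-pattern classifier described above.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

ref = np.array([0.0, 1.0, 2.0, 1.0, 0.0])             # reference signature
test = np.array([0.0, 0.0, 1.0, 2.0, 2.0, 1.0, 0.0])  # slower test signal
print(dtw_distance(ref, test))  # small despite the different lengths
```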
Advanced laser modeling with BLAZE multiphysics
NASA Astrophysics Data System (ADS)
Palla, Andrew D.; Carroll, David L.; Gray, Michael I.; Suzuki, Lui
2017-01-01
The BLAZE Multiphysics™ software simulation suite was specifically developed to model highly complex multiphysical systems in a computationally efficient and highly scalable manner. These capabilities are of particular use when applied to the complexities associated with high energy laser systems that combine subsonic/transonic/supersonic fluid dynamics, chemically reacting flows, laser electronics, heat transfer, optical physics, and in some cases plasma discharges. In this paper we present detailed cw and pulsed gas laser calculations using the BLAZE model with comparisons to data. Simulations of DPAL, XPAL, ElectricOIL (EOIL), and the optically pumped rare gas laser were found to be in good agreement with experimental data.
The provision of therapy mattresses for pressure ulcer prevention.
Pagnamenta, Fania
2017-03-23
Preventing pressure ulcers is complex and involves skin care, the provision of therapy mattresses, repositioning, the management of incontinence and adequate nutritional support. This article describes a model of therapy mattress provision that is based on non-powered products. Evaluating the efficiency of this model is challenging, due to the complexities of care, but Safety Thermometer data and incident reports offer reassurance that non-powered therapy mattresses can provide adequate pressure ulcer prevention. Therapy mattress provision is only one of the five interventions, and these are described in detail to give readers a fuller picture of the model used at the author's trust.
A continuum theory for multicomponent chromatography modeling.
Pfister, David; Morbidelli, Massimo; Nicoud, Roger-Marc
2016-05-13
A continuum theory is proposed for modeling multicomponent chromatographic systems under linear conditions. The model is based on the description of complex mixtures, possibly involving tens or hundreds of solutes, by a continuum. The present approach is shown to be very efficient when dealing with a large number of similar components presenting close elution behaviors and whose individual analytical characterization is impossible. Moreover, approximating complex mixtures by continuous distributions of solutes reduces the required number of model parameters to the few ones specific to the characterization of the selected continuous distributions. Therefore, in the frame of the continuum theory, the simulation of large multicomponent systems gets simplified and the computational effectiveness of the chromatographic model is thus dramatically improved. Copyright © 2016 Elsevier B.V. All rights reserved.
A Novel Scheme for an Energy Efficient Internet of Things Based on Wireless Sensor Networks.
Rani, Shalli; Talwar, Rajneesh; Malhotra, Jyoteesh; Ahmed, Syed Hassan; Sarkar, Mahasweta; Song, Houbing
2015-11-12
One of the emerging networking standards that bridges the gap between the physical world and the cyber one is the Internet of Things. In the Internet of Things, smart objects communicate with each other, data are gathered, and certain requests of users are satisfied by different queried data. The development of energy efficient schemes for the IoT is a challenging issue: as the IoT becomes more complex due to its large scale, the current techniques of wireless sensor networks cannot be applied directly to it. To achieve a green networked IoT, this paper addresses energy efficiency issues by proposing a novel deployment scheme. This scheme introduces: (1) a hierarchical network design; (2) a model for the energy efficient IoT; (3) a minimum energy consumption transmission algorithm to implement the optimal model. The simulation results show that the new scheme is more energy efficient and flexible than traditional WSN schemes, and consequently it can be implemented for efficient communication in the IoT.
A Novel Scheme for an Energy Efficient Internet of Things Based on Wireless Sensor Networks
Rani, Shalli; Talwar, Rajneesh; Malhotra, Jyoteesh; Ahmed, Syed Hassan; Sarkar, Mahasweta; Song, Houbing
2015-01-01
One of the emerging networking standards that bridges the gap between the physical world and the cyber one is the Internet of Things. In the Internet of Things, smart objects communicate with each other, data are gathered, and certain requests of users are satisfied by different queried data. The development of energy efficient schemes for the IoT is a challenging issue: as the IoT becomes more complex due to its large scale, the current techniques of wireless sensor networks cannot be applied directly to it. To achieve a green networked IoT, this paper addresses energy efficiency issues by proposing a novel deployment scheme. This scheme introduces: (1) a hierarchical network design; (2) a model for the energy efficient IoT; (3) a minimum energy consumption transmission algorithm to implement the optimal model. The simulation results show that the new scheme is more energy efficient and flexible than traditional WSN schemes, and consequently it can be implemented for efficient communication in the IoT. PMID:26569260
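One plausible core of a minimum energy consumption transmission algorithm is a shortest-path computation over energy-weighted links; the sketch below uses networkx for this. The topology, the per-hop energy costs, and the outcome that relaying beats a long direct hop are all illustrative assumptions, not the paper's algorithm.

```python
import networkx as nx

# Toy sensor network: edge weights approximate per-packet transmission
# energy (e.g., growing with distance squared); values are illustrative.
G = nx.Graph()
G.add_weighted_edges_from([
    ("node_a", "node_b", 4.0), ("node_b", "sink", 4.0),  # two medium hops
    ("node_a", "relay", 1.0), ("relay", "sink", 1.0),    # cheaper relay route
    ("node_a", "sink", 16.0),                            # one long direct hop
], weight="energy")

path = nx.shortest_path(G, "node_a", "sink", weight="energy")
cost = nx.shortest_path_length(G, "node_a", "sink", weight="energy")
print(path, cost)  # ['node_a', 'relay', 'sink'] 2.0
```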
Calculations of the binding affinities of protein-protein complexes with the fast multipole method
NASA Astrophysics Data System (ADS)
Kim, Bongkeun; Song, Jiming; Song, Xueyu
2010-09-01
In this paper, we used a coarse-grained model at the residue level to calculate the binding free energies of three protein-protein complexes. General formulations to calculate the electrostatic binding free energy and the van der Waals free energy are presented by solving linearized Poisson-Boltzmann equations using the boundary element method in combination with the fast multipole method. The residue level model with the fast multipole method allows us to efficiently investigate how the mutations on the active site of the protein-protein interface affect the changes in binding affinities of protein complexes. Good correlations between the calculated results and the experimental ones indicate that our model can capture the dominant contributions to the protein-protein interactions. At the same time, additional effects on protein binding due to atomic details are also discussed in the context of the limitations of such a coarse-grained model.
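As a crude stand-in for the linearized Poisson-Boltzmann electrostatics that the paper evaluates with boundary elements plus the fast multipole method, the sketch below sums a pairwise Debye-Hückel (screened Coulomb) energy over residue-level charges. It is O(N^2) rather than FMM-accelerated, omits the dielectric boundary entirely, and uses schematic constants.

```python
import numpy as np

def screened_coulomb_energy(pos, q, kappa=0.1, eps=80.0):
    """Pairwise Debye-Hückel (screened Coulomb) energy of residue charges.

    A crude O(N^2) stand-in for the linearized Poisson-Boltzmann
    electrostatics solved in the paper via the boundary element method
    with fast multipole acceleration; constants and units are schematic.
    """
    energy = 0.0
    for i in range(len(q)):
        for j in range(i + 1, len(q)):
            r = np.linalg.norm(pos[i] - pos[j])
            energy += q[i] * q[j] * np.exp(-kappa * r) / (eps * r)
    return energy

# Interaction energy of a toy two-residue charge pair, 5 units apart.
pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 5.0]])
q = np.array([+1.0, -1.0])
print(screened_coulomb_energy(pos, q))
```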
Pérez-Garrido, Alfonso; Morales Helguera, Aliuska; Abellán Guillén, Adela; Cordeiro, M Natália D S; Garrido Escudero, Amalio
2009-01-15
This paper reports a QSAR study for predicting the complexation of a large and heterogeneous variety of substances (233 organic compounds) with beta-cyclodextrins (beta-CDs). Several different theoretical molecular descriptors, calculated solely from the molecular structure of the compounds under investigation, and an efficient variable selection procedure, like the Genetic Algorithm, led to models with satisfactory global accuracy and predictivity. But the best-final QSAR model is based on Topological descriptors meanwhile offering a reasonable interpretation. This QSAR model was able to explain ca. 84% of the variance in the experimental activity, and displayed very good internal cross-validation statistics and predictivity on external data. It shows that the driving forces for CD complexation are mainly hydrophobic and steric (van der Waals) interactions. Thus, the results of our study provide a valuable tool for future screening and priority testing of beta-CDs guest molecules.
NASA Astrophysics Data System (ADS)
Yoon, J.; Klassert, C. J. A.; Lachaut, T.; Selby, P. D.; Knox, S.; Gorelick, S.; Rajsekhar, D.; Tilmant, A.; Avisse, N.; Harou, J. J.; Gawel, E.; Klauer, B.; Mustafa, D.; Talozi, S.; Sigel, K.
2015-12-01
Our work focuses on development of a multi-agent, hydroeconomic model for purposes of water policy evaluation in Jordan. The model adopts a modular approach, integrating biophysical modules that simulate natural and engineered phenomena with human modules that represent behavior at multiple levels of decision making. The hydrologic modules are developed using spatially-distributed groundwater and surface water models, which are translated into compact simulators for efficient integration into the multi-agent model. For the groundwater model, we adopt a response matrix method approach in which a 3-dimensional MODFLOW model of a complex regional groundwater system is converted into a linear simulator of groundwater response by pre-processing drawdown results from several hundred numerical simulation runs. Surface water models for each major surface water basin in the country are developed in SWAT and similarly translated into simple rainfall-runoff functions for integration with the multi-agent model. The approach balances physically-based, spatially-explicit representation of hydrologic systems with the efficiency required for integration into a complex multi-agent model that is computationally amenable to robust scenario analysis. For the multi-agent model, we explicitly represent human agency at multiple levels of decision making, with agents representing riparian, management, supplier, and water user groups. The agents' decision making models incorporate both rule-based heuristics as well as economic optimization. The model is programmed in Python using Pynsim, a generalizable, open-source object-oriented code framework for modeling network-based water resource systems. The Jordan model is one of the first applications of Pynsim to a real-world water management case study. Preliminary results from a tanker market scenario run through year 2050 are presented in which several salient features of the water system are investigated: competition between urban and private farmer agents, the emergence of a private tanker market, disparities in economic wellbeing to different user groups caused by unique supply conditions, and response of the complex system to various policy interventions.
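The response matrix idea mentioned above reduces to linear superposition of pre-computed unit responses; a minimal sketch follows. The matrix entries, pumping rates, and heads are invented for illustration; in the actual model the matrix is assembled from several hundred MODFLOW simulation runs.

```python
import numpy as np

# Response matrix method in miniature: drawdown responds (approximately)
# linearly to pumping, so a matrix R pre-computed from many MODFLOW runs
# replaces the full groundwater model inside the multi-agent simulation.
# R[i, j] = drawdown at observation point i per unit pumping at well j.
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.6, 0.1],
              [0.0, 0.2, 0.7]])               # illustrative, not calibrated

pumping = np.array([120.0, 80.0, 40.0])       # agents' pumping decisions
baseline_head = np.array([310.0, 305.0, 298.0])

drawdown = R @ pumping                        # superposition of unit responses
print(baseline_head - drawdown)               # predicted heads, no MODFLOW call
```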
Power law-based local search in spider monkey optimisation for lower order system modelling
NASA Astrophysics Data System (ADS)
Sharma, Ajay; Sharma, Harish; Bhargava, Annapurna; Sharma, Nirmala
2017-01-01
Nature-inspired algorithms (NIAs) have shown efficiency in solving many complex real-world optimisation problems. The efficiency of NIAs is measured by their ability to find adequate results within a reasonable amount of time, rather than an ability to guarantee the optimal solution. This paper presents a solution for lower order system modelling using the spider monkey optimisation (SMO) algorithm to obtain a better approximation of lower order systems that preserves almost all of the original higher order system's characteristics. Further, a local search strategy, namely power law-based local search, is incorporated with SMO. The proposed strategy is named power law-based local search in SMO (PLSMO). The efficiency, accuracy and reliability of the proposed algorithm are tested over 20 well-known benchmark functions. Then, the PLSMO algorithm is applied to solve the lower order system modelling problem.
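A hedged sketch of what a power law-based local search might look like follows: candidate steps around the current best solution are drawn from a heavy-tailed power-law distribution, so most probes stay local while occasional long jumps help escape local optima. The objective function, step scaling, and bounds are assumptions; the operator in the paper may differ in its details.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    """Stand-in benchmark objective (minimum at the origin)."""
    return float(np.sum(x ** 2))

def power_law_local_search(best, fitness, alpha=2.0, trials=20, lo=-5.0, hi=5.0):
    """Greedy local search with power-law (Pareto-like) step lengths.

    Inverse-transform sampling of u**(-1/(alpha-1)) yields a heavy-tailed
    step distribution; only improving candidates are accepted.
    """
    x, fx = best.copy(), fitness(best)
    for _ in range(trials):
        u = rng.uniform(1e-6, 1.0, size=best.shape)
        step = u ** (-1.0 / (alpha - 1.0)) * rng.choice([-1.0, 1.0], size=best.shape)
        cand = np.clip(x + 0.01 * step, lo, hi)
        if fitness(cand) < fx:
            x, fx = cand, fitness(cand)
    return x, fx

best = rng.uniform(-5.0, 5.0, size=4)  # pretend this came from the SMO swarm
print(power_law_local_search(best, sphere))
```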
Coherent transport and energy flow patterns in photosynthesis under incoherent excitation.
Pelzer, Kenley M; Can, Tankut; Gray, Stephen K; Morr, Dirk K; Engel, Gregory S
2014-03-13
Long-lived coherences have been observed in photosynthetic complexes after laser excitation, inspiring new theories regarding the extreme quantum efficiency of photosynthetic energy transfer. Whether coherent (ballistic) transport occurs in nature and whether it improves photosynthetic efficiency remain topics of debate. Here, we use a nonequilibrium Green's function analysis to model exciton transport after excitation from an incoherent source (as opposed to coherent laser excitation). We find that even with an incoherent source, the rate of environmental dephasing strongly affects exciton transport efficiency, suggesting that the relationship between dephasing and efficiency is not an artifact of coherent excitation. The Green's function analysis provides a clear view of both the pattern of excitonic fluxes among chromophores and the multidirectionality of energy transfer that is a feature of coherent transport. We see that even in the presence of an incoherent source, transport occurs by qualitatively different mechanisms as dephasing increases. Our approach can be generalized to complex synthetic systems and may provide a new tool for optimizing synthetic light harvesting materials.
ERIC Educational Resources Information Center
Hsieh, Chueh-An; Maier, Kimberly S.
2009-01-01
The capacity of Bayesian methods in estimating complex statistical models is undeniable. Bayesian data analysis is seen as having a range of advantages, such as an intuitive probabilistic interpretation of the parameters of interest, the efficient incorporation of prior information to empirical data analysis, model averaging and model selection.…
Complex networks under dynamic repair model
NASA Astrophysics Data System (ADS)
Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao
2018-01-01
Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.
Asymptotic behavior of solutions of the renormalization group K-epsilon turbulence model
NASA Technical Reports Server (NTRS)
Yakhot, A.; Staroselsky, I.; Orszag, S. A.
1994-01-01
Presently, the only efficient way to calculate turbulent flows in complex geometries of engineering interest is to use Reynolds-averaged Navier-Stokes (RANS) equations. As compared to the original Navier-Stokes problem, these RANS equations possess a much more complicated nonlinear structure and may exhibit far more complex nonlinear behavior. In certain cases, the asymptotic behavior of such models can be studied analytically, which, aside from being an interesting fundamental problem, is important for better understanding of the internal structure of the models as well as for improving their performance. The renormalization group (RNG) K-epsilon turbulence model, derived directly from the incompressible Navier-Stokes equations, is analyzed. It has already been used to calculate a variety of turbulent and transitional flows in complex geometries. For large values of the RNG viscosity parameter, the model may exhibit singular behavior. In the form of the RNG K-epsilon model that avoids the use of explicit wall functions, a = 1, so the RNG viscosity parameter must be smaller than 23.62 to avoid singularities.
NASA Astrophysics Data System (ADS)
Zhang, Chuan-Biao; Ming, Li; Xin, Zhou
2015-12-01
Ensemble simulations, which use multiple short independent trajectories started from dispersed initial conformations rather than the single long trajectory of traditional simulations, are expected to sample complex systems such as biomolecules much more efficiently. Re-weighted ensemble dynamics (RED) is designed to combine these short trajectories to reconstruct the global equilibrium distribution. In the RED, a number of conformational functions, called basis functions, are applied to relate the trajectories to each other; a detailed-balance-based linear equation is then built, whose solution provides the weights of the trajectories in the equilibrium distribution. Thus, the sufficient and efficient selection of basis functions is critical to the practical application of RED. Here, we review and present a few possible ways to construct basis functions for applying the RED in complex molecular systems. In particular, for systems with little a priori knowledge, the root mean squared deviation (RMSD) among conformations can be used to split the whole conformational space into a set of cells, and the RMSD-based cell functions then serve as basis functions. We demonstrate the application of the RED in typical systems, including a two-dimensional toy model, the lattice Potts model, and a short peptide system. The results indicate that the RED with these constructions of basis functions not only samples complex systems more efficiently but also provides a general way to understand the metastable structure of the conformational space. Project supported by the National Natural Science Foundation of China (Grant No. 11175250).
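A minimal sketch of the RMSD-based cell construction described above: the helpers rmsd and cell_basis are hypothetical names, the RMSD is computed without optimal superposition for brevity, and the indicator-function form of the basis is one simple choice among those the abstract allows.

```python
import numpy as np

def rmsd(x, y):
    # RMSD between two conformations given as (n_atoms x 3) arrays;
    # real applications would optimally superpose x onto y first.
    return np.sqrt(np.mean(np.sum((x - y) ** 2, axis=1)))

def cell_basis(frames, centers):
    # Indicator basis functions: frame i belongs to the cell of its
    # nearest reference center, splitting conformational space into cells.
    B = np.zeros((len(frames), len(centers)))
    for i, f in enumerate(frames):
        j = np.argmin([rmsd(f, c) for c in centers])
        B[i, j] = 1.0
    return B
```

The resulting basis matrix is what enters the detailed-balance-based linear equation whose solution gives the trajectory weights.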
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool for informing decision makers about the current and future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that must be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding model behavior, but also helps reduce the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important vs. unimportant input factors.
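As a rough illustration of the variogram concept behind VARS (a sketch only, not the actual VARS algorithm, which samples the response surface far more carefully), one can estimate a directional variogram γ_i(h) = ½E[(y(x + h e_i) − y(x))²] along each input factor and use it as a sensitivity proxy; the function name and parameters below are ours:

```python
import numpy as np

def variogram_sensitivity(f, n_factors, h=0.1, n_base=500, seed=0):
    """Crude per-factor directional variogram at lag h on the unit
    hypercube; larger gamma[i] suggests a more influential factor."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_base, n_factors))
    y = np.apply_along_axis(f, 1, X)
    gamma = np.zeros(n_factors)
    for i in range(n_factors):
        Xh = X.copy()
        Xh[:, i] = np.clip(Xh[:, i] + h, 0.0, 1.0)  # perturb factor i only
        yh = np.apply_along_axis(f, 1, Xh)
        gamma[i] = 0.5 * np.mean((yh - y) ** 2)
    return gamma
```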
Modeling and simulating networks of interdependent protein interactions.
Stöcker, Bianca K; Köster, Johannes; Zamir, Eli; Rahmann, Sven
2018-05-21
Protein interactions are fundamental building blocks of biochemical reaction systems underlying cellular functions. The complexity and functionality of these systems emerge not only from the protein interactions themselves but also from the dependencies between these interactions, as generated by allosteric effects or mutual exclusion due to steric hindrance. Therefore, formal models for integrating and utilizing information about interaction dependencies are of high interest. Here, we describe an approach for endowing protein networks with interaction dependencies using propositional logic, thereby obtaining constrained protein interaction networks ("constrained networks"). The construction of these networks is based on public interaction databases as well as text-mined information about interaction dependencies. We present an efficient data structure and algorithm to simulate protein complex formation in constrained networks. The efficiency of the model allows fast simulation and facilitates the analysis of many proteins in large networks. In addition, this approach enables the simulation of perturbation effects, such as knockout of single or multiple proteins and changes of protein concentrations. We illustrate how our model can be used to analyze a constrained human adhesome protein network, which is responsible for the formation of diverse and dynamic cell-matrix adhesion sites. By comparing protein complex formation under known interaction dependencies versus without dependencies, we investigate how these dependencies shape the resulting repertoire of protein complexes. Furthermore, our model enables investigating how the interplay of network topology with interaction dependencies influences the propagation of perturbation effects across a large biochemical system. Our simulation software CPINSim (for Constrained Protein Interaction Network Simulator) is available under the MIT license at http://github.com/BiancaStoecker/cpinsim and as a Bioconda package (https://bioconda.github.io).
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions of different dimensionality and complexity, in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
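The abstract does not give the exact form of the hybrid score, but its spirit can be sketched as balancing exploration (distance to existing samples) against local nonlinearity, measured by the discrepancy between the surrogate and a first-order Taylor extrapolation; everything below (names, the weight w, the gradient inputs) is an illustrative assumption, not the published TEAD score:

```python
import numpy as np

def tead_like_score(candidates, X_train, y_train, grad_train, surrogate, w=0.5):
    # grad_train[j] approximates the surrogate gradient at X_train[j]
    scores = []
    for x in candidates:
        d = np.linalg.norm(X_train - x, axis=1)
        j = int(np.argmin(d))                   # nearest training sample
        taylor = y_train[j] + grad_train[j] @ (x - X_train[j])
        residual = abs(surrogate(x) - taylor)   # local nonlinearity proxy
        scores.append(w * d[j] + (1.0 - w) * residual)
    return np.array(scores)                     # next sample = argmax
```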
NASA Astrophysics Data System (ADS)
Xu, M.; van Overloop, P. J.; van de Giesen, N. C.
2011-02-01
Model predictive control (MPC) of open channel flow is becoming an important tool in water management. The complexity of the prediction model has a large influence on the MPC application in terms of control effectiveness and computational efficiency. The Saint-Venant equations, called the SV model in this paper, and the Integrator Delay (ID) model are either accurate but computationally costly, or simple but restricted in the flow changes they allow. In this paper, a reduced Saint-Venant (RSV) model is developed by applying a model reduction technique, Proper Orthogonal Decomposition (POD), to the SV equations. The RSV model keeps the main flow dynamics and functions over a large flow range, but is easier to implement in MPC. In the test case of a modeled canal reach, the numbers of states and disturbances in the RSV model are about 45 and 16 times smaller than in the SV model, respectively. The computational time of MPC with the RSV model is significantly reduced, while the controller remains effective. Thus, the RSV model is a promising means of balancing control effectiveness and computational efficiency.
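For readers unfamiliar with POD, the reduction step amounts to an SVD of a snapshot matrix of simulated states; a minimal sketch (function and variable names are ours), assuming snapshots are stored column-wise:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD modes of a (n_states x n_snapshots) matrix; keep the
    smallest number of modes capturing the requested energy fraction."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy)) + 1
    return U[:, :r], mean  # reduced coordinates: a = U_r.T @ (x - mean)
```

Projecting the SV dynamics onto the retained modes is what yields the small RSV state vector used inside the MPC optimization.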
A fast mass spring model solver for high-resolution elastic objects
NASA Astrophysics Data System (ADS)
Zheng, Mianlun; Yuan, Zhiyong; Zhu, Weixu; Zhang, Guian
2017-03-01
Real-time simulation of elastic objects is of great importance for computer graphics and virtual reality applications. The fast mass spring model solver can achieve visually realistic simulation in an efficient way. Unfortunately, this method suffers from resolution limitations and a lack of mechanical realism for a surface geometry model, which greatly restricts its application. To tackle these problems, in this paper we propose a fast mass spring model solver for high-resolution elastic objects. First, we project the complex surface geometry model into a set of uniform grid cells serving as cages through the mean value coordinates method, to reflect its internal structure and mechanical properties. Then, we replace the original Cholesky decomposition method in the fast mass spring model solver with a conjugate gradient method, which makes the fast mass spring model solver more efficient for detailed surface geometry models. Finally, we propose a graphics processing unit accelerated parallel algorithm for the conjugate gradient method. Experimental results show that our method can realize efficient deformation simulation of 3D elastic objects with visual realism and physical fidelity, and it has great potential for applications in computer animation.
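The conjugate gradient substitution mentioned above is standard; a minimal dense-matrix sketch (a matrix-free version would pass a matvec callback instead of A, and real solvers add preconditioning):

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=200):
    """Plain CG for the symmetric positive definite system A x = b
    arising in implicit mass-spring time integration."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Unlike a precomputed Cholesky factor, CG needs only matrix-vector products, which is also why it maps well onto a GPU.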
Understanding GPU Power. A Survey of Profiling, Modeling, and Simulation Methods
Bridges, Robert A.; Imam, Neena; Mintz, Tiffany M.
2016-09-01
Modern graphics processing units (GPUs) have complex architectures that admit exceptional performance and energy efficiency for high-throughput applications. Though GPUs consume large amounts of power, their use for high-throughput applications facilitates state-of-the-art energy efficiency and performance. Consequently, continued development relies on understanding their power consumption. Our work is a survey of GPU power modeling and profiling methods, with increased detail on noteworthy efforts. Moreover, as direct measurement of GPU power is necessary for model evaluation and parameter initiation, internal and external power sensors are discussed. Hardware counters, which are low-level tallies of hardware events, share strong correlation to power use and performance. Statistical correlation between power and performance counters has yielded worthwhile GPU power models, yet the complexity inherent to GPU architectures presents new hurdles for power modeling. Developments and challenges of counter-based GPU power modeling are discussed. Often building on the counter-based models, research efforts in GPU power simulation, which make power predictions from input code and hardware knowledge, provide opportunities for optimization in programming or architectural design. Noteworthy strides in power simulation for GPUs are included, along with their performance or functional simulator counterparts when appropriate. Lastly, possible directions for future research are discussed.
Efficient implementation of the many-body Reactive Bond Order (REBO) potential on GPU
NASA Astrophysics Data System (ADS)
Trędak, Przemysław; Rudnicki, Witold R.; Majewski, Jacek A.
2016-09-01
The second generation Reactive Bond Order (REBO) empirical potential is commonly used to accurately model a wide range of hydrocarbon materials. It is also extensible to other atom types and interactions. The REBO potential assumes a complex multi-body interaction model that is difficult to represent efficiently in the SIMD or SIMT programming model. Hence, despite its importance, no efficient GPGPU implementation has been developed for this potential. Here we present a detailed description of a highly efficient GPGPU implementation of a molecular dynamics algorithm using the REBO potential. The presented algorithm takes advantage of rarely used properties of the SIMT architecture of a modern GPU to solve the difficult synchronization issues that arise in computations of multi-body potentials. Techniques developed for this problem may also be used to achieve efficient solutions to different problems. The performance of the proposed algorithm is assessed using a range of model systems and compared to the highly optimized CPU implementation (both single core and OpenMP) available in the LAMMPS package. These experiments show up to a 6x improvement in force computation time using a single processor of the NVIDIA Tesla K80 compared to a high-end 16-core Intel Xeon processor.
Physiological and Anatomical Visual Analytics (PAVA) Background
The need to efficiently analyze human chemical disposition data from in vivo studies or in silico PBPK modeling efforts, and to see complex disposition data in a logical manner, has created a unique opportunity for visual analytics applied to PAD.
Empirical modeling ENSO dynamics with complex-valued artificial neural networks
NASA Astrophysics Data System (ADS)
Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry
2016-04-01
The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El Niño-Southern Oscillation, ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. An efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, for efficient reduction of observational data sets, we use complex-valued (Hilbert) empirical orthogonal functions, which are appropriate, by their nature, for describing propagating structures, unlike traditional empirical orthogonal functions. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the Jin-Neelin-Ghil ENSO model [1] behavior and real ENSO variability from sea surface temperature anomalies data [2]. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Niño/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
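Hilbert (complex-valued) EOFs can be computed by complexifying each time series with the analytic signal and then taking an SVD; a minimal sketch using scipy (names and the mode-count choice are ours):

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_eofs(field, n_modes=3):
    """Complex (Hilbert) EOFs of a (time x space) anomaly field; the
    analytic signal adds phase, letting modes capture propagation."""
    analytic = hilbert(field, axis=0)        # complexify each time series
    U, s, Vh = np.linalg.svd(analytic, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]       # complex principal components
    eofs = Vh[:n_modes].conj()               # complex spatial patterns
    return pcs, eofs
```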
Route complexity and simulated physical ageing negatively influence wayfinding.
Zijlstra, Emma; Hagedoorn, Mariët; Krijnen, Wim P; van der Schans, Cees P; Mobach, Mark P
2016-09-01
The aim of this age-simulation field experiment was to assess the influence of route complexity and physical ageing on wayfinding. Seventy-five people (aged 18-28) performed a total of 108 wayfinding tasks (i.e., 42 participants performed two wayfinding tasks and 33 performed one), of which 59 tasks were performed wearing gerontologic ageing suits. Outcome variables were wayfinding performance (i.e., efficiency and walking speed) and physiological outcomes (i.e., heart and respiratory rates). Analysis of covariance showed that persons on more complex routes (i.e., with more floor and building changes) walked less efficiently than persons on less complex routes. In addition, simulated elderly participants performed worse in wayfinding than young participants in terms of speed (p < 0.001). Moreover, a linear mixed model showed that simulated elderly persons had higher heart and respiratory rates than young people during a wayfinding task, suggesting that the simulated elderly consumed more energy during this task. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Du, Shihong; Guo, Luo; Wang, Qiao; Qin, Qimin
The extended 9-intersection matrix is used to formalize topological relations between uncertain regions, but it is designed to satisfy requirements at a conceptual level and treats complex regions with broad boundaries (CBBRs) as wholes, without considering their hierarchical structures. In contrast to simple regions with broad boundaries, CBBRs have complex hierarchical structures. Therefore, it is necessary to take this hierarchical structure into account and to represent the topological relations between all regions in CBBRs as a relation matrix, rather than using the extended 9-intersection matrix to determine topological relations. In this study, a tree model is first used to represent the intrinsic configuration of CBBRs hierarchically. Then, reasoning tables are presented for deriving the topological relations between child, parent, and sibling regions from the relations between two given regions in CBBRs. Finally, based on this reasoning, efficient methods are proposed to compute and derive the topological relation matrix. The proposed methods can be incorporated into spatial databases to facilitate geometric-oriented applications.
Vitol, Elina A.; Rozhkova, Elena A.; Rose, Volker; ...
2014-06-06
Temperature-responsive magnetic nanomicelles can serve as thermal energy and cargo carriers with controlled drug release functionality. In view of their potential biomedical applications, understanding the modes of interaction between nanomaterials and living systems and evaluating the efficiency of cargo delivery is of the utmost importance. In this paper, we investigate the interaction between hybrid magnetic nanomicelles engineered for controlled platinum complex drug delivery and a biological system at three fundamental levels: subcellular compartments, a single cell, and the whole living animal. Nanomicelles with a polymeric P(NIPAAm-co-AAm)-b-PCL core-shell were loaded with a hydrophobic Pt(IV) complex and Fe3O4 nanoparticles through self-assembly. The distribution of the platinum complex at the subcellular level is visualized using hard X-ray fluorescence microscopy with an unprecedented level of detail at sub-100 nm spatial resolution. We then study the cytotoxic effects of platinum complex-loaded micelles in vitro on a head and neck cancer cell culture model, SQ20B. Finally, by employing the magnetic functionality of the micelles and additionally loading them with a near-infrared fluorescent dye, we magnetically target them to a tumor site in a live xenografted animal model, which allows their biodistribution to be visualized in vivo.
Raymond L. Czaplewski
1989-01-01
It is difficult to design systems for national and global resource inventory and analysis that efficiently satisfy changing, and increasingly complex objectives. It is proposed that individual inventory, monitoring, modeling, and remote sensing systems be specialized to achieve portions of the objectives. These separate systems can be statistically linked to accomplish...
ERIC Educational Resources Information Center
Darabi, Aubteen; Arrastia-Lloyd, Meagan C.; Nelson, David W.; Liang, Xinya; Farrell, Jennifer
2015-01-01
In order to develop an expert-like mental model of complex systems, causal reasoning is essential. This study examines the differences between forward and backward instructional strategies in terms of efficiency, students' learning and progression of their mental models of the electronic transport chain in an undergraduate metabolism course…
NASA Astrophysics Data System (ADS)
Wray, Timothy J.
Computational fluid dynamics (CFD) is routinely used in performance prediction and design of aircraft, turbomachinery, automobiles, and many other industrial applications. Despite its wide range of use, deficiencies in its prediction accuracy still exist. One critical weakness is the accurate simulation of complex turbulent flows using the Reynolds-Averaged Navier-Stokes equations in conjunction with a turbulence model. The goal of this research has been to develop an eddy viscosity type turbulence model to increase the accuracy of flow simulations for mildly separated flows, flows with rotation and curvature effects, and flows with surface roughness. This is accomplished by developing a new zonal one-equation turbulence model that relies heavily on the flow physics; it is now known in the literature as the Wray-Agarwal one-equation turbulence model. The effectiveness of the new model is demonstrated by comparing its results with those obtained by the industry-standard one-equation Spalart-Allmaras model and the two-equation Shear-Stress-Transport k-ω model, as well as with experimental data. Results for subsonic, transonic, and supersonic flows in and about complex geometries are presented. It is demonstrated that the Wray-Agarwal model can provide industry and CFD researchers an accurate, efficient, and reliable turbulence model for the computation of a large class of complex turbulent flows.
Glynne-Jones, Peter; Mishra, Puja P; Boltryk, Rosemary J; Hill, Martyn
2013-04-01
A finite element based method is presented for calculating the acoustic radiation force on arbitrarily shaped elastic and fluid particles. Importantly for future applications, this development will permit the modeling of acoustic forces on complex structures such as biological cells, and the interactions between them and other bodies. The model is based on a non-viscous approximation, allowing the results from an efficient, numerical, linear scattering model to provide the basis for the second-order forces. Simulation times are of the order of a few seconds for an axi-symmetric structure. The model is verified against a range of existing analytical solutions (typical accuracy better than 0.1%), including those for cylinders, elastic spheres that are of significant size compared to the acoustic wavelength, and spheroidal particles.
Fitting neuron models to spike trains.
Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
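The paper's toolbox is built on the Brian simulator; without reproducing its API, the core idea of fitting a simple spiking model can be sketched generically: simulate a candidate leaky integrate-and-fire neuron on the recorded current and compare its spike times with the recorded ones (all names and parameters below are illustrative):

```python
import numpy as np

def lif_spikes(I, dt, tau, R, v_thresh, v_reset=0.0):
    """Forward-Euler leaky integrate-and-fire driven by current I;
    returns spike times. Fitting = searching (tau, R, v_thresh) to
    maximize coincidence with the recorded spike train."""
    v, spikes = 0.0, []
    for i, inp in enumerate(I):
        v += dt / tau * (-v + R * inp)
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)
```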
Efficient quantum walk on a quantum processor
Qiang, Xiaogang; Loke, Thomas; Montanaro, Ashley; Aungskunsiri, Kanin; Zhou, Xiaoqi; O'Brien, Jeremy L.; Wang, Jingbo B.; Matthews, Jonathan C. F.
2016-01-01
The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an example circulant graph using a two-qubit photonics quantum processor. PMID:27146471
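Why circulant graphs are a natural target can be seen classically: the DFT diagonalizes every circulant matrix, so the walk reduces to elementwise phases in Fourier space. A sketch of simulating the distribution the quantum circuit samples from (for a Hermitian, i.e. undirected, circulant adjacency; names are ours):

```python
import numpy as np

def ctqw_circulant(first_col, psi0, t):
    """psi(t) = exp(-iAt) psi0 for a circulant A given by its first
    column: A acts as circular convolution, diagonal in Fourier space."""
    lam = np.fft.fft(first_col)  # eigenvalues (real for symmetric A)
    return np.fft.ifft(np.exp(-1j * lam * t) * np.fft.fft(psi0))

# Example: cycle graph C_8, walker initially localized on vertex 0
n = 8
col = np.zeros(n); col[1] = col[-1] = 1.0        # adjacency first column
psi0 = np.zeros(n, dtype=complex); psi0[0] = 1.0
prob = np.abs(ctqw_circulant(col, psi0, t=1.0)) ** 2  # sampling distribution
```

Such a state-vector simulation costs O(n log n) in the number of vertices; the classical hardness claimed in the paper concerns circulant circuits whose vertex count grows exponentially with the number of qubits.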
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Gregory; Mistrick, Ph.D., Richard; Lee, Eleanor
2011-01-21
We describe two methods which rely on bidirectional scattering distribution functions (BSDFs) to model the daylighting performance of complex fenestration systems (CFS), enabling greater flexibility and accuracy in evaluating arbitrary assemblies of glazing, shading, and other optically-complex coplanar window systems. Two tools within Radiance enable a) efficient annual performance evaluations of CFS, and b) accurate renderings of CFS despite the loss of spatial resolution associated with low-resolution BSDF datasets for inhomogeneous systems. Validation, accuracy, and limitations of the methods are discussed.
Spectral Collocation Time-Domain Modeling of Diffractive Optical Elements
NASA Astrophysics Data System (ADS)
Hesthaven, J. S.; Dinesen, P. G.; Lynov, J. P.
1999-11-01
A spectral collocation multi-domain scheme is developed for the accurate and efficient time-domain solution of Maxwell's equations within multi-layered diffractive optical elements. Special attention is paid to the modeling of out-of-plane waveguide couplers. Emphasis is given to the proper construction of high-order schemes able to handle very general problems of considerable geometric and material complexity. Central questions regarding efficient absorbing boundary conditions and time-stepping issues are also addressed. The efficacy of the overall scheme for the time-domain modeling of electrically large, and computationally challenging, problems is illustrated by solving a number of plane as well as non-plane waveguide problems.
Ice Accretion Modeling using an Eulerian Approach for Droplet Impingement
NASA Technical Reports Server (NTRS)
Kim, Joe Woong; Garza, Dennis P.; Sankar, Lakshmi N.; Kreeger, Richard E.
2012-01-01
A three-dimensional Eulerian analysis has been developed for modeling droplet impingement on lifting bodies. The Eulerian model solves the conservation equations of mass and momentum to obtain the droplet flow field properties on the same mesh used in CFD simulations. For complex configurations such as a full rotorcraft, the Eulerian approach is more efficient because the Lagrangian approach would require a significant amount of seeding for accurate estimates of collection efficiency. Simulations are performed for various benchmark cases, such as the NACA0012 airfoil, the MS317 airfoil, and an oscillating SC2110 airfoil, to illustrate its use. The present results are compared with results from the Lagrangian approach used in an industry-standard analysis called LEWICE.
Dos Passos Menezes, Paula; Dos Santos, Polliana Barbosa Pereira; Dória, Grace Anne Azevedo; de Sousa, Bruna Maria Hipólito; Serafini, Mairim Russo; Nunes, Paula Santos; Quintans-Júnior, Lucindo José; de Matos, Iara Lisboa; Alves, Péricles Barreto; Bezerra, Daniel Pereira; Mendonça Júnior, Francisco Jaime Bezerra; da Silva, Gabriel Francisco; de Aquino, Thiago Mendonça; de Souza Bento, Edson; Scotti, Marcus Tullius; Scotti, Luciana; de Souza Araujo, Adriano Antunes
2017-02-01
This study evaluated three different methods for the formation of an inclusion complex between alpha- and beta-cyclodextrin (α- and β-CD) and limonene (LIM), with the goal of improving the physicochemical properties of limonene. The samples were prepared through physical mixing (PM), paste complexation (PC), and slurry complexation (SC) methods in the molar ratio of 1:1 (cyclodextrin:limonene). The prepared complexes were evaluated with thermogravimetry/derivative thermogravimetry, infrared spectroscopy, X-ray diffraction, complexation efficiency through gas chromatography/mass spectrometry analyses, molecular modeling, and nuclear magnetic resonance. The results showed that the physical mixing procedure did not produce complexation, but the paste and slurry methods produced inclusion complexes, which demonstrated interactions outside of the cavity of the CDs. However, the paste obtained with β-cyclodextrin did not demonstrate complexation in the gas chromatographic technique because, after extraction, most of the limonene was either surface-adsorbed by β-cyclodextrin or volatilized during the procedure. We conclude that paste complexation and slurry complexation are effective and economical methods to improve the physicochemical character of limonene and could have important applications for pharmacological activity through an increase in solubility.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation, using a database of statistical impedance boundary conditions which incorporates the complexity of building walls into the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields, from which predictions of communications capability may be made.
Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing
NASA Technical Reports Server (NTRS)
Murphy, Patrick C.; Landman, Drew
2015-01-01
Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run-efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.
Solid phase extraction of copper(II) by fixed bed procedure on cation exchange complexing resins.
Pesavento, Maria; Sturini, Michela; D'Agostino, Girolamo; Biesuz, Raffaela
2010-02-19
The efficiency of metal ion recovery by solid phase extraction (SPE) in complexing resin columns is predicted by a simple model based on two parameters reflecting the sorption equilibria and kinetics of the metal ion on the considered resin. The parameter related to the adsorption equilibria was evaluated by the Gibbs-Donnan model, and that related to the kinetics by assuming that ion exchange is the rate-determining step of adsorption. The predicted parameters make it possible to evaluate the breakthrough volume of the considered metal ion, Cu(II), from different kinds of complexing resins and at different conditions, such as acidity and ionic composition. Copyright 2009. Published by Elsevier B.V.
Biocellion: accelerating computer simulation of multicellular biological system models
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-01-01
Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572
Comba, Peter; Wunderlich, Steffen
2010-06-25
When the dichloroiron(II) complex of the tetradentate bispidine ligand L=3,7-dimethyl-9-oxo-2,4-bis(2-pyridyl)-3,7-diazabicyclo[3.3.1]nonane-1,5-dicarboxylate methyl ester is oxidized with H(2)O(2), tBuOOH, or iodosylbenzene, the high-valent Fe=O complex efficiently oxidizes and halogenates cyclohexane. Kinetic D isotope effects and the preference for the abstraction of tertiary over secondary carbon-bound hydrogen atoms (quantified in the halogenation of adamantane) indicate that C-H activation is the rate-determining step. The efficiencies (yields in stoichiometric and turnover numbers in catalytic reactions), product ratios (alcohol vs. bromo- vs. chloroalkane), and kinetic isotope effects depend on the oxidant. These results suggest different pathways with different oxidants, and these may include iron(IV)- and iron(V)-oxo complexes as well as oxygen-based radicals.
Efficient Analysis of Systems Biology Markup Language Models of Cellular Populations Using Arrays.
Watanabe, Leandro; Myers, Chris J
2016-08-19
The Systems Biology Markup Language (SBML) has been widely used for modeling biological systems. Although SBML has been successful in representing a wide variety of biochemical models, the core standard lacks the structure for representing large complex regular systems in a standard way, such as whole-cell and cellular population models. These models require a large number of variables to represent certain aspects of these types of models, such as the chromosome in the whole-cell model and the many identical cell models in a cellular population. While SBML core is not designed to handle these types of models efficiently, the proposed SBML arrays package can represent such regular structures more easily. However, in order to take full advantage of the package, analysis needs to be aware of the arrays structure. When expanding the array constructs within a model, some of the advantages of using arrays are lost. This paper describes a more efficient way to simulate arrayed models. To illustrate the proposed method, this paper uses a population of repressilator and genetic toggle switch circuits as examples. Results show that there are memory benefits using this approach with a modest cost in runtime.
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
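The two-step structure can be sketched abstractly: rank candidate designs with the cheap model, keep a shortlist, and only score those with the expensive nonlinear model. This omits the paper's branch-and-bound pruning and optimality guarantees; all names are illustrative:

```python
def two_step_search(candidates, coarse_score, fine_score, keep=10):
    """Hierarchical model switching: the cheap model prunes the space,
    the expensive model refines. Lower score = better design here."""
    shortlist = sorted(candidates, key=coarse_score)[:keep]
    return min(shortlist, key=fine_score)
```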
NASA Technical Reports Server (NTRS)
Bardina, J. E.
1994-01-01
A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure of primitive variables. This method combines the best features of data management and computational efficiency of space marching procedures with the generality and stability of time dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide the accurate shock capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristic domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparison with experiment and other Navier-Stokes methods. Here, results for adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency gain of at least one order of magnitude over the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.
Parameters estimation for reactive transport: A way to test the validity of a reactive model
NASA Astrophysics Data System (ADS)
Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme
The chemical parameters used in reactive transport models are not known accurately due to the complexity and heterogeneous conditions of a real domain. We present an efficient algorithm for estimating the chemical parameters using a Monte Carlo method. Monte Carlo methods are very robust for the optimisation of the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm is used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three surface complexation models, we show that the proposed adsorption model cannot explain the experimental data.
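A minimal sketch of the Monte Carlo estimation loop, under the assumption (ours) of uniform sampling within parameter bounds and a least-squares misfit; the actual algorithm in the paper may weight or refine samples differently:

```python
import numpy as np

def monte_carlo_fit(model, bounds, data, n_trials=10000, seed=1):
    """Random-search estimation: sample the parameter box, keep the
    parameter set minimizing the misfit to the observations."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T  # bounds = [(lo, hi), ...]
    best_p, best_err = None, np.inf
    for _ in range(n_trials):
        p = lo + rng.random(lo.size) * (hi - lo)
        err = float(np.sum((model(p) - data) ** 2))  # least-squares misfit
        if err < best_err:
            best_p, best_err = p, err
    return best_p, best_err
```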
Review of the "AS-BUILT BIM" Approaches
NASA Astrophysics Data System (ADS)
Hichri, N.; Stefani, C.; De Luca, L.; Veron, P.
2013-02-01
Today, we need 3D models of heritage buildings in order to handle projects of restoration, documentation, and maintenance more efficiently. In this context, developing an effective approach based on a first phase of building survey is a necessary step towards building a semantically enriched digital model. For this purpose, Building Information Modeling is an efficient tool for storing and exchanging knowledge about buildings. In order to create such a model, there are three fundamental steps: acquisition, segmentation, and modeling. For these reasons, it is essential to understand and analyze this entire chain that leads to a well-structured and enriched 3D digital model. This paper surveys and analyzes the existing approaches on these topics and proposes a new approach to semantic structuring that takes into account the complexity of this chain.
NASA Astrophysics Data System (ADS)
Tsaturyan, Arshak; Machida, Yosuke; Akitsu, Takashiro; Gozhikova, Inna; Shcherbakov, Igor
2018-06-01
We report on the synthesis and characterization of binaphthyl-containing Schiff base Ni(II), Cu(II), and Zn(II) complexes as promising photosensitizers for dye-sensitized solar cells (DSSC). Based on theoretical and experimental data, the possibility of their application in DSSC was confirmed. We find that the dye performance of the complexes is governed by their steric and rigid structures, which strongly influence efficiency. The spatial and electronic structures of the complexes were studied by means of quantum chemical modeling using DFT and TD-DFT approaches. The adsorption energies of the complexes on a TiO2 cluster were calculated and appeared to be very close in value. The Zn(II) complex has the largest molar extinction coefficient.
Hybrid estimation of complex systems.
Hofbaur, Michael W; Williams, Brian C
2004-10-01
Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman filter. Multiple model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation, and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with a large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded, but fail-safe, manner.
NASA Astrophysics Data System (ADS)
Lima, Aranildo R.; Hsieh, William W.; Cannon, Alex J.
2017-12-01
In situations where new data arrive continually, online learning algorithms are computationally much less costly than batch learning ones in maintaining the model up-to-date. The extreme learning machine (ELM), a single hidden layer artificial neural network with random weights in the hidden layer, is solved by linear least squares, and has an online learning version, the online sequential ELM (OSELM). As more data become available during online learning, information on the longer time scale becomes available, so ideally the model complexity should be allowed to change, but the number of hidden nodes (HN) remains fixed in OSELM. A variable complexity VC-OSELM algorithm is proposed to dynamically add or remove HN in the OSELM, allowing the model complexity to vary automatically as online learning proceeds. The performance of VC-OSELM was compared with OSELM in daily streamflow predictions at two hydrological stations in British Columbia, Canada, with VC-OSELM significantly outperforming OSELM in mean absolute error, root mean squared error and Nash-Sutcliffe efficiency at both stations.
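The OSELM recursion that VC-OSELM extends is essentially recursive least squares on a random-feature layer; a minimal fixed-complexity sketch (the node add/remove logic of VC-OSELM is omitted, and all names are ours):

```python
import numpy as np

class OSELM:
    """Online sequential extreme learning machine with fixed hidden nodes."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((n_in, n_hidden))  # random, never trained
        self.b = rng.standard_normal(n_hidden)
        self.P = None      # inverse covariance of hidden activations
        self.beta = None   # output weights (the only trained part)

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)

    def init_batch(self, X, y):
        H = self._h(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y

    def update(self, X, y):    # recursive least-squares chunk update
        H = self._h(X)
        K = np.linalg.inv(np.eye(X.shape[0]) + H @ self.P @ H.T)
        self.P = self.P - self.P @ H.T @ K @ H @ self.P
        self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

    def predict(self, X):
        return self._h(X) @ self.beta
```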
Model Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention of representing in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-08-18
Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.
NASA Technical Reports Server (NTRS)
Wagner, Michael Broderick
1987-01-01
The modeled cascade cells offer an alternative to conventional series cascade designs that require a monolithic intercell ohmic contact. Selective electrodes provide a simple means of fabricating three-terminal devices, which can be configured in complementary pairs to circumvent the attendant losses and fabrication complexities of intercell ohmic contacts. Moreover, selective electrodes allow the incorporation of additional layers in the upper subcell, which can improve spectral response and increase radiation tolerance. Realistic simulations of such cells operating under one-sun AM0 conditions show that the seven-layer structure is optimum from the standpoint of beginning-of-life efficiency and radiation tolerance. Projected efficiencies exceed 26 percent. Under higher concentration factors, it should be possible to achieve efficiencies beyond 30 percent. However, simulating operation at high concentration will require a model for resistive losses. Overall, these devices appear to be a promising contender for future space applications.
Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N
2015-04-28
Genetic regulatory networks are key to understanding biochemical systems. The state of a genetic regulatory network under different living environments can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors starting from a random initial state or by considering the entire state space simultaneously, and they cannot identify fixed-length attractors directly; their time complexity increases exponentially with the number and length of the attractors. This study used bounded model checking to quickly locate fixed-length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed-length attractors, which is more suitable for large Boolean networks and networks with numerous attractors. After comparison using the tool BooleNet, empirical experiments involving biochemical systems demonstrated the feasibility and efficiency of our approach.
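For intuition, what the SAT-based method computes can be reproduced by brute force on small networks: enumerate states, follow the synchronous update until a cycle closes, and keep cycles of exactly the requested length (the bounded-model-checking encoding avoids this exponential enumeration; the code below is illustrative only):

```python
from itertools import product

def fixed_length_attractors(update, n, length):
    """All attractors of exactly `length` in a synchronous Boolean
    network with n nodes; update maps a state tuple to the next state."""
    attractors = set()
    for state in product((0, 1), repeat=n):
        s, seen = state, {}
        while s not in seen:
            seen[s] = len(seen)
            s = update(s)
        if len(seen) - seen[s] == length:     # cycle length at re-entry
            cycle, t = [], s
            for _ in range(length):
                cycle.append(t)
                t = update(t)
            k = cycle.index(min(cycle))       # canonical rotation
            attractors.add(tuple(cycle[k:] + cycle[:k]))
    return attractors

# Example: x' = y, y' = x has one attractor of length 2: (0,1) <-> (1,0)
print(fixed_length_attractors(lambda s: (s[1], s[0]), 2, 2))
```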
Crack Damage Detection Method via Multiple Visual Features and Efficient Multi-Task Learning Model.
Wang, Baoxian; Zhao, Weigang; Gao, Po; Zhang, Yufeng; Wang, Zhe
2018-06-02
This paper proposes an effective and efficient model for concrete crack detection. The presented work consists of two modules: multi-view image feature extraction and multi-task crack region detection. Specifically, multiple visual features (such as texture, edge, etc.) of image regions are calculated, which can suppress various background noises (such as illumination, pockmark, stripe, blurring, etc.). With the computed multiple visual features, a novel crack region detector is advocated using a multi-task learning framework, which involves restraining the variability for different crack region features and emphasizing the separability between crack region features and complex background ones. Furthermore, the extreme learning machine is utilized to construct this multi-task learning model, thereby leading to high computing efficiency and good generalization. Experimental results of the practical concrete images demonstrate that the developed algorithm can achieve favorable crack detection performance compared with traditional crack detectors.
NASA Astrophysics Data System (ADS)
Linker, Thomas M.; Lee, Glenn S.; Beekman, Matt
2018-06-01
The semi-analytical methods of thermoelectric energy conversion efficiency calculation based on the cumulative properties approach and the reduced variables approach are compared for 21 high performance thermoelectric materials. Both approaches account for the temperature dependence of the material properties as well as the Thomson effect; thus the predicted conversion efficiencies are generally lower than those based on the conventional thermoelectric figure of merit ZT for nearly all of the materials evaluated. The two methods also predict material energy conversion efficiencies that are in very good agreement with each other, even for large temperature differences (average percent difference of 4%, with a maximum observed deviation of 11%). The tradeoff between obtaining a reliable assessment of a material's potential for thermoelectric applications and the complexity of implementing the three models, as well as the advantages of using more accurate modeling approaches in evaluating new thermoelectric materials, are highlighted.
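For reference, the conventional figure-of-merit estimate that the two semi-analytical methods improve upon is the standard constant-property formula; a short sketch (variable names ours), which ignores the Thomson effect and therefore tends to overestimate efficiency:

```python
import numpy as np

def efficiency_from_zt(zt_avg, t_hot, t_cold):
    """Classic maximum efficiency of a thermoelectric leg from the
    device-averaged figure of merit ZT at the mean temperature:
    eta = (dT/Th) * (sqrt(1+ZT) - 1) / (sqrt(1+ZT) + Tc/Th)."""
    eta_carnot = (t_hot - t_cold) / t_hot
    m = np.sqrt(1.0 + zt_avg)
    return eta_carnot * (m - 1.0) / (m + t_cold / t_hot)

eta = efficiency_from_zt(1.0, 800.0, 300.0)  # e.g. ZT = 1, 300 K to 800 K
```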
Cost efficiency of the non-associative flow rule simulation of an industrial component
NASA Astrophysics Data System (ADS)
Galdos, Lander; de Argandoña, Eneko Saenz; Mendiguren, Joseba
2017-10-01
In the last decade, the metal forming industry has become more and more competitive. In this context, FEM modeling has become a primary source of information for component and process design. Numerous researchers have focused on improving the accuracy of the material models implemented in FEM in order to improve the fidelity of the simulations. Aimed at improving the modeling of anisotropic behavior, the use of non-associative flow rule (NAFR) models has been presented in recent years as an alternative to classic associative flow rule (AFR) models. In this work, the cost efficiency of the chosen flow rule has been numerically analyzed by simulating an industrial drawing operation with two models of the same degree of flexibility: one AFR model and one NAFR model. The study concludes that the flow rule has a negligible influence on the final drawing prediction, which is mainly driven by the model parameter identification procedure. Even though the NAFR formulation is more complex than the AFR one, the present study shows that the total simulation time with explicit FE solvers is reduced without loss of accuracy. Furthermore, NAFR formulations have an advantage over AFR formulations in parameter identification because the formulation decouples the yield stress and the Lankford coefficients.
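For readers outside the plasticity community, the distinction at issue can be stated compactly: with yield function $f(\boldsymbol{\sigma})$ and plastic potential $g(\boldsymbol{\sigma})$, the plastic strain rate is

\[ \dot{\boldsymbol{\varepsilon}}^{p} = \dot{\lambda}\,\frac{\partial g}{\partial \boldsymbol{\sigma}}, \]

where AFR takes $g \equiv f$ (flow normal to the yield surface) while NAFR allows $g \neq f$. This extra freedom is what decouples the yield stress from the Lankford coefficients in the parameter identification mentioned above.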
Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias
2011-01-01
Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI which is in contrast to our results using complex physiological models. Thus, with regards to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase as well as the HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster than real time multiscale cardiac simulations on these systems using hybrid programming models.
Subband Image Coding with Jointly Optimized Quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1995-01-01
An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.
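As a concrete, hedged illustration of the multistage residual idea (not the paper's jointly optimized, entropy-constrained design), the NumPy sketch below cascades three uniform codebooks, each quantizing the residual left by the previous stage, which is how such structures trade rate and complexity per stage.

```python
# Multistage residual quantization sketch: each stage quantizes the
# residual of the previous one with a small codebook. Uniform codebooks
# are used for illustration; the paper designs them jointly with
# entropy coders, which this sketch does not attempt.
import numpy as np

def quantize(x, codebook):
    idx = np.argmin(np.abs(x[:, None] - codebook[None, :]), axis=1)
    return codebook[idx]

x = np.random.default_rng(1).normal(size=1000)   # stand-in subband samples
stages = [np.linspace(-3, 3, 8),                 # coarse first stage
          np.linspace(-1, 1, 8),                 # finer residual stages
          np.linspace(-0.3, 0.3, 8)]

residual, recon = x.copy(), np.zeros_like(x)
for codebook in stages:
    q = quantize(residual, codebook)
    recon += q
    residual -= q

print("MSE after 3 stages:", np.mean((x - recon) ** 2))
```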
Techniques and resources for storm-scale numerical weather prediction
NASA Technical Reports Server (NTRS)
Droegemeier, Kelvin; Grell, Georg; Doyle, James; Soong, Su-Tzai; Skamarock, William; Bacon, David; Staniforth, Andrew; Crook, Andrew; Wilhelmson, Robert
1993-01-01
The topics discussed include the following: multiscale application of the 5th-generation PSU/NCAR mesoscale model, the coupling of nonhydrostatic atmospheric and hydrostatic ocean models for air-sea interaction studies; a numerical simulation of cloud formation over complex topography; adaptive grid simulations of convection; an unstructured grid, nonhydrostatic meso/cloud scale model; efficient mesoscale modeling for multiple scales using variable resolution; initialization of cloud-scale models with Doppler radar data; and making effective use of future computing architectures, networks, and visualization software.
Pathak, Rajesh Kumar; Gupta, Sanjay Mohan; Gaur, Vikram Singh; Pandey, Dinesh
2015-01-01
In recent years, rapid developments in several omics platforms and next generation sequencing technology have generated a huge amount of biological data about plants. Systems biology aims to develop and use well-organized and efficient algorithms, data structures, visualization, and communication tools for the integration of these biological data with the goal of computational modeling and simulation. It studies crop plant systems by systematically perturbing them; checking the gene, protein, and informational pathway responses; integrating these data; and finally formulating mathematical models that describe the structure of the system and its responses to individual perturbations. Consequently, systems biology approaches, such as integrative and predictive ones, hold immense potential for understanding the molecular mechanisms of agriculturally important complex traits linked to agricultural productivity. This has led to the identification of some key genes and proteins involved in networks of pathways underlying input use efficiency, biotic and abiotic stress resistance, photosynthesis efficiency, root, stem and leaf architecture, and nutrient mobilization. The developments in the above fields have made it possible to design smart crops with superior agronomic traits through genetic manipulation of key candidate genes. PMID:26484978
Calibration of 3D ALE finite element model from experiments on friction stir welding of lap joints
NASA Astrophysics Data System (ADS)
Fourment, Lionel; Gastebois, Sabrina; Dubourg, Laurent
2016-10-01
In order to support the design of a process as complex as Friction Stir Welding (FSW) for the aeronautic industry, numerical simulation software requires (1) an efficient and accurate Finite Element (F.E.) formulation that allows welding defects to be predicted, (2) proper modeling of the thermo-mechanical complexity of the FSW process and (3) calibration of the F.E. model against accurate measurements from FSW experiments. This work uses a parallel ALE formulation developed in the Forge® F.E. code to model the different possible defects (flashes and worm holes), while pin and shoulder threads are modeled by a new friction law at the tool/material interface. The FSW experiments use a complex tool with a scrolled shoulder, instrumented to provide sensitive thermal data close to the joint. Calibration of unknown material thermal coefficients, constitutive equation parameters and the friction model from measured forces, torques and temperatures is carried out using two F.E. models, Eulerian and ALE, reaching a satisfactory agreement assessed by the proper sensitivity of the simulation to process parameters.
Using the Multilayer Free-Surface Flow Model to Solve Wave Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prokof’ev, V. A., E-mail: ProkofyevVA@vniig.ru
2017-01-15
A method is presented for changing over from a single-layer shallow-water model to a multilayer model with a hydrostatic pressure profile and, then, to a multilayer model with a nonhydrostatic pressure profile. The method does not require complex procedures for solving the discrete Poisson equation and features high computational efficiency. Results of validating the algorithm against experimental data that are critical for the numerical dissipation of the scheme are presented. Examples are considered.
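As background, the single-layer starting point of such methods is the classical shallow-water system; in 1D, with depth $h$, velocity $u$ and bed elevation $z_b$,

\[ \partial_t h + \partial_x (hu) = 0, \qquad \partial_t (hu) + \partial_x\!\left(hu^2 + \tfrac{1}{2} g h^2\right) = -g h\, \partial_x z_b. \]

The multilayer extensions stack such layers and, in the nonhydrostatic variant, correct the pressure away from the hydrostatic profile, which is where a discrete Poisson-type problem would normally appear.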
DOE Office of Scientific and Technical Information (OSTI.GOV)
P. H. Titus, S. Avasaralla, A. Brooks, R. Hatcher
2010-09-22
The National Spherical Torus Experiment (NSTX) project is planning upgrades to the toroidal field, plasma current and pulse length. This involves the replacement of the center-stack, including the inner legs of the TF, OH, and inner PF coils. A second neutral beam will also be added. The increased performance of the upgrade requires qualification of the remaining components, including the vessel, passive plates, and divertor, for higher disruption loads. The hardware needing qualification is more complex than is typically accessible by large-scale electromagnetic (EM) simulations of the plasma disruptions. The usual method is to include simplified representations of components in the large EM models and attempt to extract forces to apply to more detailed models. This paper describes a more efficient approach of combining comprehensive modeling of the plasma and tokamak conducting structures, using the 2D OPERA code, with much more detailed treatment of individual components using ANSYS electromagnetic (EM) and mechanical analysis. This captures local eddy currents and resulting loads in complex details, and allows efficient non-linear and dynamic structural analyses.
Huang, Like; Xu, Jie; Sun, Xiaoxiang; Du, Yangyang; Cai, Hongkun; Ni, Jian; Li, Juan; Hu, Ziyang; Zhang, Jianjun
2016-04-20
Currently, the most efficient perovskite solar cells (PVKSCs) with a p-i-n structure require both electron transport layers (ETLs) and hole transport layers (HTLs) to help collect photogenerated electrons and holes for high performance. The ETL-free planar PVKSC is a relatively new and simple structured solar cell that dispenses with the complex, high-temperature ETL (such as compact and mesoporous TiO2). Here, we demonstrate the critical role of high perovskite coverage in efficient ETL-free PVKSCs from an energy band and equivalent circuit model perspective. From an electrical point of view, we confirmed that low perovskite coverage does cause localized short circuits in the device. With coverage optimization, a planar p-i-n(++) device with a power conversion efficiency of over 11% was achieved, implying that the ETL may not be necessary for an efficient device as long as the perovskite coverage approaches 100%.
NASA Astrophysics Data System (ADS)
Plattner, A.; Maurer, H. R.; Vorloeper, J.; Dahmen, W.
2010-08-01
Despite the ever-increasing power of modern computers, realistic modelling of complex 3-D earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modelling approaches includes either finite difference or non-adaptive finite element algorithms and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behaviour of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modelled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet-based approach that is applicable to a large range of problems, also including nonlinear problems. In comparison with earlier applications of adaptive solvers to geophysical problems we employ here a new adaptive scheme whose core ingredients arose from a rigorous analysis of the overall asymptotically optimal computational complexity, including in particular, an optimal work/accuracy rate. Our adaptive wavelet algorithm offers several attractive features: (i) for a given subsurface model, it allows the forward modelling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient and (iii) the modelling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving 3-D geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh that best fits subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectric modelling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with high spatial variability of electrical conductivities. The linear dependence of the modelling error on the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
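The adaptive solver itself cannot be captured in a snippet, but the sparsity mechanism it exploits can: keep only the wavelet coefficients above a tolerance, so the number of retained degrees of freedom tracks the model's spatial variability. Below is a minimal, hedged 1D illustration with the PyWavelets package, using an assumed db4 wavelet and a made-up piecewise-constant profile; it is conceptually related to, not an implementation of, the paper's 3-D scheme.

```python
# 1D illustration of wavelet sparsity: threshold small detail
# coefficients and reconstruct. Piecewise-smooth signals need few
# coefficients, mirroring the "quasi minimal degrees of freedom" idea.
import numpy as np
import pywt

x = np.linspace(0, 1, 1024)
f = np.where(x < 0.5, 1.0, 2.0) + 0.1 * np.sin(8 * np.pi * x)

coeffs = pywt.wavedec(f, 'db4', level=6)
kept = [coeffs[0]] + [pywt.threshold(c, 1e-3, mode='hard')
                      for c in coeffs[1:]]
f_rec = pywt.waverec(kept, 'db4')

n_kept = sum(int(np.count_nonzero(c)) for c in kept)
print(f"nonzero coefficients: {n_kept} / {sum(c.size for c in coeffs)}")
print("max reconstruction error:", np.max(np.abs(f_rec[:f.size] - f)))
```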
Interactive Visualization of Complex Seismic Data and Models Using Bokeh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Chengping; Ammon, Charles J.; Maceira, Monica
Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide an effective and efficient approach to explore large data sets and models.
Interactive Visualization of Complex Seismic Data and Models Using Bokeh
Chai, Chengping; Ammon, Charles J.; Maceira, Monica; ...
2018-02-14
Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide an effective and efficient approach to explore large data sets and models.
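A minimal example of the kind of Bokeh figure the paper describes is sketched below; the trace is synthetic and the output file name is hypothetical, and the paper's actual viewers add linked widgets and selection callbacks not shown here.

```python
# Minimal Bokeh plot in the spirit of the paper's seismogram viewer:
# an interactive line plot written to a standalone HTML page.
import numpy as np
from bokeh.plotting import figure, output_file, show

t = np.linspace(0, 60, 3000)                             # seconds
trace = np.exp(-0.05 * t) * np.sin(2 * np.pi * 0.5 * t)  # synthetic trace

output_file("seismogram.html")                           # hypothetical name
p = figure(title="Synthetic vertical-component trace",
           x_axis_label="time (s)", y_axis_label="amplitude",
           tools="pan,wheel_zoom,box_zoom,reset,save")
p.line(t, trace, line_width=1)
show(p)                                                  # opens the browser
```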
R&D 100, 2016: Pyomo 4.0 – Python Optimization Modeling Objects
Hart, William; Laird, Carl; Siirola, John
2018-06-13
Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.
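A tiny example of the algebraic modeling style described is given below, sketching a two-variable LP; the GLPK solver is an assumption, and any installed LP solver would do.

```python
# Minimal Pyomo model: declare variables, an objective, constraints,
# then hand the model to a solver. Numbers are illustrative only.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, maximize, SolverFactory)

m = ConcreteModel()
m.x = Var(domain=NonNegativeReals)
m.y = Var(domain=NonNegativeReals)
m.profit = Objective(expr=3 * m.x + 2 * m.y, sense=maximize)
m.cap1 = Constraint(expr=m.x + m.y <= 4)
m.cap2 = Constraint(expr=m.x + 3 * m.y <= 6)

SolverFactory('glpk').solve(m)   # assumes GLPK is installed
print(m.x(), m.y())              # optimal point of the small LP
```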
Electrical and thermal modeling of a large-format lithium titanate oxide battery system.
DOT National Transportation Integrated Search
2015-04-01
The future of mass transportation is clearly moving towards the increased efficiency of hybrid and electric vehicles. Electrical energy storage is a key component in most of these advanced vehicles, with the system complexity and vehicle cost shift...
Le Meur, Nolwenn; Gentleman, Robert
2008-01-01
Background Synthetic lethality defines a genetic interaction where the combination of mutations in two or more genes leads to cell death. The implications of synthetic lethal screens have been discussed in the context of drug development as synthetic lethal pairs could be used to selectively kill cancer cells, but leave normal cells relatively unharmed. A challenge is to assess genome-wide experimental data and integrate the results to better understand the underlying biological processes. We propose statistical and computational tools that can be used to find relationships between synthetic lethality and cellular organizational units. Results In Saccharomyces cerevisiae, we identified multi-protein complexes and pairs of multi-protein complexes that share an unusually high number of synthetic genetic interactions. As previously predicted, we found that synthetic lethality can arise from subunits of an essential multi-protein complex or between pairs of multi-protein complexes. Finally, using multi-protein complexes allowed us to take into account the pleiotropic nature of the gene products. Conclusions Modeling synthetic lethality using current estimates of the yeast interactome is an efficient approach to disentangle some of the complex molecular interactions that drive a cell. Our model in conjunction with applied statistical methods and computational methods provides new tools to better characterize synthetic genetic interactions. PMID:18789146
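One standard statistical ingredient for deciding that a pair of complexes shares "an unusually high number" of interactions is a hypergeometric enrichment test. The sketch below uses made-up counts and represents only one piece of the kind of analysis the authors describe.

```python
# Hypergeometric tail test: is the number of synthetic lethal (SL)
# pairs spanning complexes A and B larger than chance would suggest?
# All counts are hypothetical placeholders.
from scipy.stats import hypergeom

M = 200_000   # all gene pairs in the tested universe
n = 4_000     # SL pairs observed genome-wide within that universe
N = 150       # gene pairs spanning complexes A and B
k = 12        # SL pairs observed among those 150

p_value = hypergeom.sf(k - 1, M, n, N)   # P(X >= k), sampling w/o replacement
print(f"enrichment p-value: {p_value:.2e}")
```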
Efficient Analysis of Complex Structures
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.
2000-01-01
The various accomplishments achieved during this project are: (1) a survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially on equivalent continuum models (Appendix A); (2) application of NNs and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B); (3) development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars and ribs, with calculation of a variety of test cases and comparison with measurements or FEA results (Appendix C); (4) basic work on using second-order sensitivities to simulate wing modal response, discussion of sensitivity evaluation approaches, and some results (Appendix D); (5) establishment of a general methodology for simulating modal responses by direct application of NNs and by sensitivity techniques, in a design space composed of a number of design points, with the two methods compared through examples (Appendix E); (6) establishment of a general methodology for efficient analysis of complex wing structures by indirect application of NNs, the NN-aided Equivalent Plate Analysis, with the networks trained over several design spaces applicable to the actual design of complex wings (Appendix F).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhleh, Luay
I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of HGT is that of distinguishing between these two events on the one hand and other events that have similar "effects." I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic as well as biological data. (3) Software development. I proposed the final outcome to be a suite of software tools which implements the mathematical models as well as the algorithms developed.
Application of constraint-based satellite mission planning model in forest fire monitoring
NASA Astrophysics Data System (ADS)
Guo, Bingjun; Wang, Hongfei; Wu, Peng
2017-10-01
In this paper, a constraint-based satellite mission planning model is established based on the idea of constraint satisfaction. It includes target, request, observation, satellite, payload and other elements, linked by constraints. The optimization goal of the model is to make full use of time and resources and to improve the efficiency of target observation. A greedy algorithm is used to solve the model and produce the observation plan and the data transmission plan. Two simulation experiments are designed and carried out: routine monitoring of global forest fires and emergency monitoring of forest fires in Australia. The simulation results show that the model and algorithm perform well and that the model has good emergency response capability. With this model, efficient and reasonable plans can be worked out to meet users' needs in complex cases with multiple payloads, multiple targets and variable priorities.
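A minimal sketch of a greedy pass consistent with the abstract's description (priority-sorted requests, accept a request if its imaging window is conflict-free) is given below; slewing, energy, payload selection and data transmission constraints are all omitted, and the time windows are hypothetical.

```python
# Greedy observation planning sketch for a single hypothetical payload.
def greedy_plan(requests):
    """requests: list of (priority, start, end) tuples."""
    plan = []
    for prio, start, end in sorted(requests, key=lambda r: -r[0]):
        # Accept only if the window does not overlap any accepted one.
        if all(end <= s or start >= e for _, s, e in plan):
            plan.append((prio, start, end))
    return sorted(plan, key=lambda r: r[1])

reqs = [(5, 0, 10), (9, 8, 15), (7, 20, 30), (6, 14, 22)]
print(greedy_plan(reqs))   # keeps (9, 8, 15) and (7, 20, 30)
```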
Modelling the molecular mechanisms of aging
Mc Auley, Mark T.; Guimera, Alvaro Martinez; Hodgson, David; Mcdonald, Neil; Mooney, Kathleen M.; Morgan, Amy E.
2017-01-01
The aging process is driven at the cellular level by random molecular damage that slowly accumulates with age. Although cells possess mechanisms to repair or remove damage, they are not 100% efficient and their efficiency declines with age. There are many molecular mechanisms involved and exogenous factors such as stress also contribute to the aging process. The complexity of the aging process has stimulated the use of computational modelling in order to increase our understanding of the system, test hypotheses and make testable predictions. As many different mechanisms are involved, a wide range of models have been developed. This paper gives an overview of the types of models that have been developed, the range of tools used, modelling standards and discusses many specific examples of models that have been grouped according to the main mechanisms that they address. We conclude by discussing the opportunities and challenges for future modelling in this field. PMID:28096317
NASA Astrophysics Data System (ADS)
Voronin, Alexander; Vasilchenko, Ann; Khoperskov, Alexander
2018-03-01
The project of restoring small watercourses in the northern part of the Volga-Akhtuba floodplain is considered, with the aim of increasing the watering of the territory during small and medium floods. The irregular topography, the complex structure of the floodplain valley consisting of a large number of small watercourses, and the presence of urbanized and agricultural areas require careful preliminary analysis of the hydrological safety and efficiency of geographically distributed project activities. Using digital terrain and watercourse-structure models of the floodplain and a hydrodynamic flood model, the hydrological safety and efficiency of several project implementation strategies have been analyzed. The objective function values have been obtained from hydrodynamic calculations of floodplain flooding for virtual digital terrain models simulating alternatives for the geographically distributed project activities. The comparative efficiency of several empirical strategies for the geographically distributed project activities, as well as a two-stage exact solution method for the optimization problem, has been studied.
Why Bother to Calibrate? Model Consistency and the Value of Prior Information
NASA Astrophysics Data System (ADS)
Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal
2015-04-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
Why Bother and Calibrate? Model Consistency and the Value of Prior Information.
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.
2014-12-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.
NASA Astrophysics Data System (ADS)
Najafi, Amir Abbas; Pourahmadi, Zahra
2016-04-01
Selecting the optimal combination of assets in a portfolio is one of the most important decisions in investment management. As investment is a long term activity, treating portfolio optimization as a single-period problem may forfeit opportunities that could be exploited over a longer horizon. Hence, we extend the problem from a single-period to a multi-period model. We include trading costs and uncertain conditions in this model, which makes it more realistic and more complex, and we therefore propose an efficient heuristic method to tackle the problem. The efficiency of the method is examined and compared with the results of rolling single-period optimization and of the buy-and-hold method, which shows the superiority of the proposed method.
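To make the role of trading costs concrete, a toy multi-period loop with proportional costs and a no-trade band is sketched below. This is not the authors' heuristic; all returns, weights and cost figures are synthetic placeholders.

```python
# Toy multi-period rebalancing with proportional trading costs:
# rebalance to the target weights only when drift exceeds a band,
# paying a cost proportional to turnover.
import numpy as np

rng = np.random.default_rng(2)
T = 12
returns = rng.normal(0.01, 0.05, size=(T, 3))   # monthly asset returns
target = np.array([0.5, 0.3, 0.2])              # desired weights
cost_rate, band = 0.002, 0.05                   # 20 bp cost, 5% band

w, wealth = target.copy(), 1.0
for r in returns:
    wealth *= 1.0 + float(w @ r)                       # period return
    w = w * (1 + r) / float(np.sum(w * (1 + r)))       # weights drift
    turnover = float(np.abs(w - target).sum())
    if turnover > band:                                # rebalance if far off
        wealth *= 1.0 - cost_rate * turnover
        w = target.copy()
print("final wealth:", round(wealth, 4))
```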
NASA Technical Reports Server (NTRS)
Millar, T. J.; Defrees, D. J.; Mclean, A. D.; Herbst, E.
1988-01-01
The approach of Bates to the determination of neutral product branching ratios in ion-electron dissociative recombination reactions has been utilized in conjunction with quantum chemical techniques to redetermine branching ratios for a wide variety of important reactions of this class in dense interstellar clouds. The branching ratios have then been used in a pseudo time-dependent model calculation of the gas phase chemistry of a dark cloud resembling TMC-1 and the results compared with an analogous model containing previously used branching ratios. In general, the changes in branching ratios lead to stronger effects on calculated molecular abundances at steady state than at earlier times and often lead to reductions in the calculated abundances of complex molecules. However, at the so-called 'early time' when complex molecule synthesis is most efficient, the abundances of complex molecules are hardly affected by the newly used branching ratios.
Improving Distributed Diagnosis Through Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2011-01-01
Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.
NASA Technical Reports Server (NTRS)
Fridlind, Ann; Seifert, Axel; Ackerman, Andrew; Jensen, Eric
2004-01-01
Numerical models that resolve cloud particles into discrete mass size distributions on an Eulerian grid provide a uniquely powerful means of studying the closely coupled interaction of aerosols, cloud microphysics, and transport that determine cloud properties and evolution. However, such models require many experimentally derived parameterizations in order to properly represent the complex interactions of droplets within turbulent flow. Many of these parameterizations remain poorly quantified, and the numerical methods of solving the equations for temporal evolution of the mass size distribution can also vary considerably in terms of efficiency and accuracy. In this work, we compare results from two size-resolved microphysics models that employ various widely-used parameterizations and numerical solution methods for several aspects of stochastic collection.
Electrostatically Accelerated Coupled Binding and Folding of Intrinsically Disordered Proteins
Ganguly, Debabani; Otieno, Steve; Waddell, Brett; Iconaru, Luigi; Kriwacki, Richard W.; Chen, Jianhan
2012-01-01
Intrinsically disordered proteins (IDPs) are now recognized to be prevalent in biology, and many potential functional benefits have been discussed. However, the frequent requirement of peptide folding in specific interactions of IDPs could impose a kinetic bottleneck, which could be overcome only by efficient folding upon encounter. Intriguingly, existing kinetic data suggest that specific binding of IDPs is generally no slower than that of globular proteins. Here, we exploited the cell cycle regulator p27Kip1 (p27) as a model system to understand how IDPs might achieve efficient folding upon encounter for facile recognition. Combining experiments and coarse-grained modeling, we demonstrate that long-range electrostatic interactions between enriched charges on p27 and near its binding site on cyclin A not only enhance the encounter rate (i.e., electrostatic steering), but also promote folding-competent topologies in the encounter complexes, allowing rapid subsequent formation of short-range native interactions en route to the specific complex. In contrast, nonspecific hydrophobic interactions, while hardly affecting the encounter rate, can significantly reduce the efficiency of folding upon encounter and lead to slower binding kinetics. Further analysis of charge distributions in a set of known IDP complexes reveals that, although IDP binding sites tend to be more hydrophobic compared to the rest of the target surface, their vicinities are frequently enriched with charges to complement those on IDPs. This observation suggests that electrostatically accelerated encounter and induced folding might represent a prevalent mechanism for promoting facile IDP recognition. PMID:22721951
Nonisothermal glass molding for the cost-efficient production of precision freeform optics
NASA Astrophysics Data System (ADS)
Vu, Anh-Tuan; Kreilkamp, Holger; Dambon, Olaf; Klocke, Fritz
2016-07-01
Glass molding has become a key replication-based technology to satisfy the intensively growing demand for complex precision optics in today's photonic market. However, the state-of-the-art replicative technologies are still limited, mainly due to their insufficiency to meet the requirements of mass production. This paper introduces a newly developed nonisothermal glass molding process in which a complex-shaped optic is produced in a very short process cycle. The innovative molding technology promises cost-efficient production because of increased mold lifetime, lower energy consumption, and high throughput from a fast process chain. At the early stage of process development, the research focuses on integrating finite element simulation into the process chain to reduce time and labor-intensive cost. By virtue of numerical modeling, defects including chill ripples and glass sticking in the nonisothermal molding process can be predicted and their consequent effects avoided. In addition, the influences of process parameters and glass preforms on surface quality, form accuracy, and residual stress are discussed. A series of experiments was carried out to validate the simulation results. The successful modeling therefore provides a systematic strategy for glass preform design, mold compensation, and optimization of the process parameters. In conclusion, the integration of simulation into the entire nonisothermal glass molding process chain will significantly increase manufacturing efficiency and reduce the time-to-market for mass production of complex precision yet low-cost glass optics.
Three-dimensional tracking for efficient fire fighting in complex situations
NASA Astrophysics Data System (ADS)
Akhloufi, Moulay; Rossi, Lucile
2009-05-01
Each year, hundreds of millions of hectares of forest burn, causing human and economic losses. For efficient fire fighting, personnel on the ground need tools for predicting fire front propagation. In this work, we present a new technique for automatically tracking fire spread in three-dimensional space. The proposed approach uses a stereo system to extract a 3D shape from fire images. A new segmentation technique is proposed that permits the extraction of fire regions in complex unstructured scenes. It works in the visible spectrum and combines information extracted from the YUV and RGB color spaces. Unlike other techniques, our algorithm does not require previous knowledge about the scene. The resulting fire regions are classified into homogeneous zones using clustering techniques. Contours are then extracted, and a feature detection algorithm is used to detect interest points such as local maxima and corners. Points extracted from the stereo images are then used to compute the 3D shape of the fire front, and the resulting data permit reconstruction of the fire volume. The final model is used to compute important spatial and temporal fire characteristics such as spread dynamics, local orientation, and heading direction. Tests conducted on the ground show the efficiency of the proposed scheme, which is being integrated with a mathematical fire spread model in order to predict and anticipate fire behaviour during fire fighting. Also of interest to fire-fighters is the proposed automatic segmentation technique, which can be used for early detection of fire in complex scenes.
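The color-space combination can be illustrated with simple per-pixel rules, as in the hedged sketch below; the thresholds are illustrative guesses, not the published algorithm.

```python
# Per-pixel fire segmentation sketch combining RGB and YUV cues.
# Thresholds are hypothetical placeholders.
import numpy as np

def fire_mask(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Classic RGB ordering cue for flames: red dominant, blue weakest.
    rgb_rule = (r > 0.5) & (r >= g) & (g > b)
    # YUV cue: flames are bright (high Y) and red-shifted (high V).
    y = 0.299 * r + 0.587 * g + 0.114 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    yuv_rule = (y > 0.4) & (v > 0.05)
    return rgb_rule & yuv_rule

img = np.random.default_rng(3).random((4, 4, 3))  # stand-in image
print(fire_mask(img))
```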
Developing an active implementation model for a chronic disease management program
Smidth, Margrethe; Christensen, Morten Bondo; Olesen, Frede; Vedsted, Peter
2013-01-01
Background Introduction and diffusion of new disease management programs in healthcare is usually slow, but active theory-driven implementation seems to outperform other implementation strategies. However, we have only scarce evidence on the feasibility and real effect of such strategies in complex primary care settings where municipalities, general practitioners and hospitals should work together. The Central Denmark Region recently implemented a disease management program for chronic obstructive pulmonary disease (COPD) which presented an opportunity to test an active implementation model against the usual implementation model. The aim of the present paper is to describe the development of an active implementation model using the Medical Research Council’s model for complex interventions and the Chronic Care Model. Methods We used the Medical Research Council’s five-stage model for developing complex interventions to design an implementation model for a disease management program for COPD. First, literature on implementing change in general practice was scrutinised and empirical knowledge was assessed for suitability. In phase I, the intervention was developed; and in phases II and III, it was tested in a block- and cluster-randomised study. In phase IV, we evaluated the feasibility for others to use our active implementation model. Results The Chronic Care Model was identified as a model for designing efficient implementation elements. These elements were combined into a multifaceted intervention, and a timeline for the trial in a randomised study was decided upon in accordance with the five stages in the Medical Research Council’s model; this was captured in a PaTPlot, which allowed us to focus on the structure and the timing of the intervention. The implementation strategies identified as efficient were use of the Breakthrough Series, academic detailing, provision of patient material and meetings between providers. The active implementation model was tested in a randomised trial (results reported elsewhere). Conclusion The combination of the theoretical model for complex interventions and the Chronic Care Model and the chosen specific implementation strategies proved feasible for a practice-based active implementation model for a chronic-disease-management-program for COPD. Using the Medical Research Council’s model added transparency to the design phase which further facilitated the process of implementing the program. Trial registration: http://www.clinicaltrials.gov/(NCT01228708). PMID:23882169
3D Numerical simulation of bed morphological responses to complex in-stream structures
NASA Astrophysics Data System (ADS)
Xu, Y.; Liu, X.
2017-12-01
In-stream structures are widely used in stream restoration for both hydraulic and ecological purposes. The geometries of the structures are usually designed to be extremely complex and irregular, so as to provide nature-like physical habitat. The aim of this study is to develop a numerical model to accurately predict the bed-load transport and the morphological changes caused by complex in-stream structures. The model is developed on the OpenFOAM platform. In the hydrodynamics part, it utilizes different turbulence models to capture the detailed turbulence information near the in-stream structures. The technique of the immersed boundary method (IBM) is efficiently implemented in the model to describe the movable bed and the rigid solid body of in-stream structures. With IBM, the difficulty of mesh generation on the complex geometry is greatly alleviated, and the bed surface deformation can be coupled into the flow system. This morphodynamics model is first validated on simple structures, such as the scour morphology around a log-vane structure. It is then applied to a more complex structure, engineered log jams (ELJ), which consist of multiple logs piled together. The numerical results, including turbulence flow information and bed morphological responses, are evaluated against experimental measurements under the exact same flow conditions.
Anharmonic Vibrational Spectroscopy on Transition Metal Complexes
NASA Astrophysics Data System (ADS)
Latouche, Camille; Bloino, Julien; Barone, Vincenzo
2014-06-01
Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems. The systematic interpretation of experimental data and the full characterization of complex molecules can then be facilitated. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double harmonic approximation, so that more details become available. However, routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been carried out on organic molecules. Nevertheless, benchmarks on organometallic or inorganic metal complexes at this level are strongly lacking, despite the interest of these systems owing to their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications to systems of direct technological or biological interest.
Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures
NASA Astrophysics Data System (ADS)
Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi
2017-04-01
Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
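A single-thread 2D sketch of the underlying simulated annealing loop is given below, driven here by a two-point function along one axis only; the paper uses four descriptors in 3D and parallelizes the interchange moves, none of which this sketch attempts.

```python
# Simulated annealing reconstruction sketch: swap solid/pore pixels
# (preserving porosity) to match a reference two-point function.
import numpy as np

rng = np.random.default_rng(4)
target = rng.random((32, 32)) < 0.3               # reference binary medium

def s2(img, max_r=8):
    """Two-point probability function along x for lags 0..max_r-1."""
    return np.array([np.mean(img & np.roll(img, r, axis=1))
                     for r in range(max_r)])

ref = s2(target)
img = rng.permutation(target.ravel()).reshape(target.shape)  # same porosity
energy = float(np.sum((s2(img) - ref) ** 2))

T = 1e-4
for _ in range(20000):
    i1 = tuple(rng.integers(0, 32, 2))
    i2 = tuple(rng.integers(0, 32, 2))
    if img[i1] == img[i2]:
        continue                                  # not a phase-exchanging swap
    img[i1], img[i2] = img[i2], img[i1]
    e_new = float(np.sum((s2(img) - ref) ** 2))
    if e_new < energy or rng.random() < np.exp(-(e_new - energy) / T):
        energy = e_new                            # accept the move
    else:
        img[i1], img[i2] = img[i2], img[i1]       # revert the move
    T *= 0.9997                                   # cool slowly
print("final energy:", energy)
```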
NASA Astrophysics Data System (ADS)
Moore, J. K.
2016-02-01
The efficiency of the biological pump is influenced by complex interactions between chemical, biological, and physical processes. The efficiency of export out of surface waters and down through the water column to the deep ocean has been linked to a number of factors including biota community composition, production of mineral ballast components, physical aggregation and disaggregation processes, and ocean oxygen concentrations. I will examine spatial patterns in the export ratio and the efficiency of the biological pump at the global scale using the Community Earth System Model (CESM). There are strong spatial variations in the export efficiency as simulated by the CESM, which are strongly correlated with new nutrient inputs to the euphotic zone and their impacts on phytoplankton community structure. I will compare CESM simulations that include dynamic, variable export ratios driven by the phytoplankton community structure, with simulations that impose a near-constant export ratio to examine the effects of export efficiency on nutrient and surface chlorophyll distributions. The model predicted export ratios will also be compared with recent satellite-based estimates.
Heh, Ding Yu; Tan, Eng Leong
2011-04-12
This paper presents the modeling of hemoglobin at optical frequency (250 nm - 1000 nm) using the unconditionally stable fundamental alternating-direction-implicit finite-difference time-domain (FADI-FDTD) method. An accurate model based on complex conjugate pole-residue pairs is proposed to model the complex permittivity of hemoglobin at optical frequency. Two hemoglobin concentrations at 15 g/dL and 33 g/dL are considered. The model is then incorporated into the FADI-FDTD method for solving electromagnetic problems involving interaction of light with hemoglobin. The computation of transmission and reflection coefficients of a half space hemoglobin medium using the FADI-FDTD validates the accuracy of our model and method. The specific absorption rate (SAR) distribution of human capillary at optical frequency is also shown. While maintaining accuracy, the unconditionally stable FADI-FDTD method exhibits high efficiency in modeling hemoglobin.
Heh, Ding Yu; Tan, Eng Leong
2011-01-01
This paper presents the modeling of hemoglobin at optical frequency (250 nm – 1000 nm) using the unconditionally stable fundamental alternating-direction-implicit finite-difference time-domain (FADI-FDTD) method. An accurate model based on complex conjugate pole-residue pairs is proposed to model the complex permittivity of hemoglobin at optical frequency. Two hemoglobin concentrations at 15 g/dL and 33 g/dL are considered. The model is then incorporated into the FADI-FDTD method for solving electromagnetic problems involving interaction of light with hemoglobin. The computation of transmission and reflection coefficients of a half space hemoglobin medium using the FADI-FDTD validates the accuracy of our model and method. The specific absorption rate (SAR) distribution of human capillary at optical frequency is also shown. While maintaining accuracy, the unconditionally stable FADI-FDTD method exhibits high efficiency in modeling hemoglobin. PMID:21559129
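The complex-conjugate pole-residue form referred to in both records models the relative permittivity as

\[ \varepsilon_r(\omega) = \varepsilon_\infty + \sum_{p=1}^{P}\left( \frac{c_p}{j\omega - a_p} + \frac{c_p^{*}}{j\omega - a_p^{*}} \right), \]

with poles $a_p$ and residues $c_p$ fitted to measured hemoglobin data; each conjugate pair maps to a simple recursive update in the FDTD time stepping, which is what keeps the method efficient.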
Adaptive Wavelet Modeling of Geophysical Data
NASA Astrophysics Data System (ADS)
Plattner, A.; Maurer, H.; Dahmen, W.; Vorloeper, J.
2009-12-01
Despite the ever-increasing power of modern computers, realistic modeling of complex three-dimensional Earth models is still a challenging task and requires substantial computing resources. The overwhelming majority of current geophysical modeling approaches includes either finite difference or non-adaptive finite element algorithms, and variants thereof. These numerical methods usually require the subsurface to be discretized with a fine mesh to accurately capture the behavior of the physical fields. However, this may result in excessive memory consumption and computing times. A common feature of most of these algorithms is that the modeled data discretizations are independent of the model complexity, which may be wasteful when there are only minor to moderate spatial variations in the subsurface parameters. Recent developments in the theory of adaptive numerical solvers have the potential to overcome this problem. Here, we consider an adaptive wavelet based approach that is applicable to a large scope of problems, also including nonlinear problems. To the best of our knowledge such algorithms have not yet been applied in geophysics. Adaptive wavelet algorithms offer several attractive features: (i) for a given subsurface model, they allow the forward modeling domain to be discretized with a quasi minimal number of degrees of freedom, (ii) sparsity of the associated system matrices is guaranteed, which makes the algorithm memory efficient, and (iii) the modeling accuracy scales linearly with computing time. We have implemented the adaptive wavelet algorithm for solving three-dimensional geoelectric problems. To test its performance, numerical experiments were conducted with a series of conductivity models exhibiting varying degrees of structural complexity. Results were compared with a non-adaptive finite element algorithm, which incorporates an unstructured mesh to best fit subsurface boundaries. Such algorithms represent the current state-of-the-art in geoelectrical modeling. An analysis of the numerical accuracy as a function of the number of degrees of freedom revealed that the adaptive wavelet algorithm outperforms the finite element solver for simple and moderately complex models, whereas the results become comparable for models with spatially highly variable electrical conductivities. The linear dependency between the modeling error and the computing time proved to be model-independent. This feature will allow very efficient computations using large-scale models as soon as our experimental code is optimized in terms of its implementation.
Efficiency of endoscopy units can be improved with use of discrete event simulation modeling.
Sauer, Bryan G; Singh, Kanwar P; Wagner, Barry L; Vanden Hoek, Matthew S; Twilley, Katherine; Cohn, Steven M; Shami, Vanessa M; Wang, Andrew Y
2016-11-01
Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience.
Efficiency of endoscopy units can be improved with use of discrete event simulation modeling
Sauer, Bryan G.; Singh, Kanwar P.; Wagner, Barry L.; Vanden Hoek, Matthew S.; Twilley, Katherine; Cohn, Steven M.; Shami, Vanessa M.; Wang, Andrew Y.
2016-01-01
Background and study aims: The projected increased demand for health services obligates healthcare organizations to operate efficiently. Discrete event simulation (DES) is a modeling method that allows for optimization of systems through virtual testing of different configurations before implementation. The objective of this study was to identify strategies to improve the daily efficiencies of an endoscopy center with the use of DES. Methods: We built a DES model of a five procedure room endoscopy unit at a tertiary-care university medical center. After validating the baseline model, we tested alternate configurations to run the endoscopy suite and evaluated outcomes associated with each change. The main outcome measures included adequate number of preparation and recovery rooms, blocked inflow, delay times, blocked outflows, and patient cycle time. Results: Based on a sensitivity analysis, the adequate number of preparation rooms is eight and recovery rooms is nine for a five procedure room unit (total 3.4 preparation and recovery rooms per procedure room). Simple changes to procedure scheduling and patient arrival times led to a modest improvement in efficiency. Increasing the preparation/recovery rooms based on the sensitivity analysis led to significant improvements in efficiency. Conclusions: By applying tools such as DES, we can model changes in an environment with complex interactions and find ways to improve the medical care we provide. DES is applicable to any endoscopy unit and would be particularly valuable to those who are trying to improve on the efficiency of care and patient experience. PMID:27853739
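A minimal discrete event simulation in the same spirit can be written with the SimPy package, as sketched below; the room counts follow the study's sensitivity analysis (8 preparation, 5 procedure, 9 recovery rooms), while the arrival and service times are placeholders rather than the paper's calibrated values.

```python
# SimPy sketch of the modeled patient flow: prep -> procedure -> recovery.
import random
import simpy

random.seed(0)
env = simpy.Environment()
prep = simpy.Resource(env, capacity=8)    # from the sensitivity analysis
proc = simpy.Resource(env, capacity=5)    # five procedure rooms
recov = simpy.Resource(env, capacity=9)   # from the sensitivity analysis

def patient(env, name):
    arrive = env.now
    with prep.request() as req:
        yield req
        yield env.timeout(random.expovariate(1 / 20))   # prep ~20 min
    with proc.request() as req:
        yield req
        yield env.timeout(random.expovariate(1 / 30))   # procedure ~30 min
    with recov.request() as req:
        yield req
        yield env.timeout(random.expovariate(1 / 25))   # recovery ~25 min
    print(f"{name} cycle time: {env.now - arrive:.0f} min")

def arrivals(env):
    for i in range(30):
        env.process(patient(env, f"patient-{i}"))
        yield env.timeout(10)                           # one every 10 min

env.process(arrivals(env))
env.run()
```

Note this sketch releases each room as soon as its service ends; modeling the blocked inflows and outflows the study measures would require holding a room until the next one is free.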
NASA Astrophysics Data System (ADS)
Mekanik, Abolghasem; Soleimani, Mohsen
2007-11-01
The effect of wind on natural-draught cooling towers involves very complex physics. The fluid flow and temperature distribution around and in a single tower and in two adjacent (tandem and side-by-side) dry-cooling towers under cross-wind are studied numerically in the present work. Cross-wind can significantly reduce the cooling efficiency of natural-draught dry-cooling towers, and adjacent towers can affect the cooling efficiency of both. In this paper we present a computational model involving more than 750,000 finite volume cells under precisely defined boundary conditions. Since the flow is turbulent, the standard k-ε turbulence model is used. The numerical results are used to estimate the heat transfer between the radiators of the tower and the surrounding air. The numerical simulation explains the main reason for the decline in the thermodynamic performance of dry-cooling towers under cross-wind. In this paper, incompressible fluid flow is simulated, and the flow is assumed steady and three-dimensional.
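For reference, the standard k-ε model mentioned closes the Reynolds-averaged equations with an eddy viscosity

\[ \nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad C_\mu = 0.09, \]

where transport equations for the turbulent kinetic energy $k$ and its dissipation rate $\varepsilon$ are solved alongside the mean flow.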
Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with the simplified spherical harmonics approximation (SPN). In the XFEM scheme of the SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and it is then used to construct the enriched basis functions of the finite element scheme. Therefore, the finite element calculation can be carried out without time-consuming internal boundary mesh generation. Moreover, the overly fine mesh that would otherwise be required to conform to the complex tissue boundary, at excess time cost, can be avoided. XFEM facilitates application to tissues with complex internal structure and improves computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with the standard finite element method and the classical Monte Carlo (MC) method, the validation results show the merits and potential of the XFEM for optical imaging. PMID:23227108
Atkinson, Jo-An; Page, Andrew; Wells, Robert; Milat, Andrew; Wilson, Andrew
2015-03-03
In the design of public health policy, a broader understanding of risk factors for disease across the life course, and an increasing awareness of the social determinants of health, has led to the development of more comprehensive, cross-sectoral strategies to tackle complex problems. However, comprehensive strategies may not represent the most efficient or effective approach to reducing disease burden at the population level. Rather, they may act to spread finite resources less intensively over a greater number of programs and initiatives, diluting the potential impact of the investment. While analytic tools are available that use research evidence to help identify and prioritise disease risk factors for public health action, they are inadequate to support more targeted and effective policy responses for complex public health problems. This paper discusses the limitations of analytic tools that are commonly used to support evidence-informed policy decisions for complex problems. It proposes an alternative policy analysis tool which can integrate diverse evidence sources and provide a platform for virtual testing of policy alternatives in order to design solutions that are efficient, effective, and equitable. The case of suicide prevention in Australia is presented to demonstrate the limitations of current tools to adequately inform prevention policy and to illustrate the utility of the new policy analysis tool. Contrary to popular belief, a systems approach takes a step beyond comprehensive thinking and seeks to identify where best to target public health action and resources for optimal impact. It is concerned primarily with what can reasonably be left out of strategies for prevention and can be used to explore where disinvestment may occur without adversely affecting population health (or equity). Simulation modelling used for policy analysis offers promise in being able to better operationalise research evidence to support decision making for complex problems, improve targeting of public health policy, and offers a foundation for strengthening relationships between policy makers, stakeholders, and researchers.
Efficient approach to obtain free energy gradient using QM/MM MD simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asada, Toshio; Koseki, Shiro; The Research Institute for Molecular Electronic Devices
2015-12-31
The efficient computational approach denoted the charge and atom dipole response kernel (CDRK) model, which considers polarization effects of the quantum mechanical (QM) region, is described using the charge response and atom dipole response kernels for free energy gradient (FEG) calculations in the quantum mechanical/molecular mechanical (QM/MM) method. The CDRK model can reasonably reproduce the energies and energy gradients of QM and MM atoms obtained by expensive QM/MM calculations in a drastically reduced computational time. The model is applied to the acylation reaction in the hydrated trypsin-BPTI complex to optimize the reaction path on the free energy surface by means of FEG and the nudged elastic band (NEB) method.
Crook, Jeremy Micah; Wallace, Gordon; Tomaskovic-Crook, Eva
2015-03-01
There is an urgent need for new and advanced approaches to modeling the pathological mechanisms of complex human neurological disorders. This is underscored by the decline in pharmaceutical research and development efficiency resulting in a relative decrease in new drug launches in the last several decades. Induced pluripotent stem cells represent a new tool to overcome many of the shortcomings of conventional methods, enabling live human neural cell modeling of complex conditions relating to aberrant neurodevelopment, such as schizophrenia, epilepsy and autism as well as age-associated neurodegeneration. This review considers the current status of induced pluripotent stem cell-based modeling of neurological disorders, canvassing proven and putative advantages, current constraints, and future prospects of next-generation culture systems for biomedical research and translation.
Science of the science, drug discovery and artificial neural networks.
Patel, Jigneshkumar
2013-03-01
The drug discovery process often encounters complex problems that may be difficult to solve by human intelligence. Artificial Neural Networks (ANNs) are one of the Artificial Intelligence (AI) technologies used for solving such complex problems. ANNs are widely used for primary virtual screening of compounds, quantitative structure-activity relationship studies, receptor modeling, formulation development, pharmacokinetics, and all other processes involving complex mathematical modeling. Despite having such advanced technologies and sufficient understanding of biological systems, drug discovery is still a lengthy, expensive, difficult and inefficient process with a low rate of new successful therapeutic discovery. In this paper, the author discusses drug discovery science and ANNs from a very basic angle, which may be helpful in understanding the application of ANNs to drug discovery for improving efficiency.
Zhang, Liang; Zhang, Song; Maezawa, Izumi; Trushin, Sergey; Minhas, Paras; Pinto, Matthew; Jin, Lee-Way; Prasain, Keshar; Nguyen, Thi D.T.; Yamazaki, Yu; Kanekiyo, Takahisa; Bu, Guojun; Gateno, Benjamin; Chang, Kyeong-Ok; Nath, Karl A.; Nemutlu, Emirhan; Dzeja, Petras; Pang, Yuan-Ping; Hua, Duy H.; Trushina, Eugenia
2015-01-01
Development of therapeutic strategies to prevent Alzheimer's disease (AD) is of great importance. We show that mild inhibition of mitochondrial complex I with the small molecule CP2 reduces levels of amyloid beta and phospho-Tau and averts cognitive decline in three animal models of familial AD. Low-mass molecular dynamics simulations and biochemical studies confirmed that CP2 competes with flavin mononucleotide for binding to the redox center of complex I, leading to an elevated AMP/ATP ratio and activation of AMP-activated protein kinase in neurons and mouse brain without inducing oxidative damage or inflammation. Furthermore, modulation of complex I activity augmented mitochondrial bioenergetics, increasing the coupling efficiency of the respiratory chain and neuronal resistance to stress. Concomitant reduction of glycogen synthase kinase 3β activity and restoration of axonal trafficking resulted in elevated levels of neurotrophic factors and synaptic proteins in adult AD mice. Our results suggest that metabolic reprogramming induced by modulation of mitochondrial complex I activity represents a promising therapeutic strategy for AD. PMID:26086035
An efficient nonviral gene-delivery vector based on hyperbranched cationic glycogen derivatives.
Liang, Xuan; Ren, Xianyue; Liu, Zhenzhen; Liu, Yingliang; Wang, Jue; Wang, Jingnan; Zhang, Li-Ming; Deng, David Yb; Quan, Daping; Yang, Liqun
2014-01-01
The purpose of this study was to synthesize and evaluate hyperbranched cationic glycogen derivatives as an efficient nonviral gene-delivery vector. A series of hyperbranched cationic glycogen derivatives conjugated with 3-(dimethylamino)-1-propylamine (DMAPA-Glyp) and 1-(2-aminoethyl) piperazine (AEPZ-Glyp) residues were synthesized and characterized by Fourier-transform infrared and hydrogen-1 nuclear magnetic resonance spectroscopy. Their buffer capacity was assessed by acid-base titration in aqueous NaCl solution. Plasmid deoxyribonucleic acid (pDNA) condensation ability and protection against DNase I degradation of the glycogen derivatives were assessed using agarose gel electrophoresis. The zeta potentials and particle sizes of the glycogen derivative/pDNA complexes were measured, and images of the complexes were observed using atomic force microscopy. Blood compatibility and cytotoxicity were evaluated by hemolysis assay and MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) assay, respectively. pDNA transfection efficiency mediated by the cationic glycogen derivatives was evaluated by flow cytometry and fluorescence microscopy in the 293T (human embryonic kidney) and the CNE2 (human nasopharyngeal carcinoma) cell lines. In vivo delivery of pDNA in model animals (Sprague Dawley rats) was evaluated to identify the safety and transfection efficiency. The hyperbranched cationic glycogen derivatives conjugated with DMAPA and AEPZ residues were synthesized. They exhibited better blood compatibility and lower cytotoxicity when compared to branched polyethyleneimine (bPEI). They were able to bind and condense pDNA to form complexes of 100-250 nm in size. The transfection efficiency of the DMAPA-Glyp/pDNA complexes was higher than that of the AEPZ-Glyp/pDNA complexes in both the 293T and CNE2 cells, and almost equal to that of bPEI. Furthermore, pDNA could be more safely delivered to the blood vessels in the brain tissue of Sprague Dawley rats by the DMAPA-Glyp derivatives, and then expressed as green fluorescent protein, compared with the control group. The hyperbranched cationic glycogen derivatives, especially the DMAPA-Glyp derivatives, showed high gene-transfection efficiency, good blood compatibility, and low cytotoxicity when transfected in vitro and in vivo, indicating their potential as novel nonviral gene vectors.
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models; however, it is well known that available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For faults that are not modeled with these data, especially small-scale faults or faults approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing fault points in poorly sampled areas not only allows fault models to be constructed efficiently, but also reduces manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
Bhatti, A Aziz
2009-12-01
This study proposes an efficient and improved model of a direct-storage bidirectional memory, the improved bidirectional associative memory (IBAM), and emphasises the use of nanotechnology for efficient implementation of such large-scale neural network structures at considerably lower cost, reduced complexity, and less implementation area. This memory model directly stores the X and Y associated sets of M bipolar binary vectors in the form of (M×N_x) and (M×N_y) memory matrices, requires O(N), or about 30%, of the interconnections with weight strengths ranging between +/-1, and is computationally very efficient compared to sequential, intraconnected and other bidirectional associative memory (BAM) models of outer-product type that require O(N^2) complex interconnections with weight strengths ranging between +/-M. It is shown that it is functionally equivalent to and possesses all attributes of a BAM of outer-product type, and yet it is a structurally simple and robust, very large scale integration (VLSI)-, optically- and nanotechnology-realisable, modular and expandable neural network bidirectional associative memory model in which the addition or deletion of a pair of vectors does not require changes in the strength of interconnections of the entire memory matrix. The analysis of the retrieval process, signal-to-noise ratio, storage capacity and stability of the proposed model as well as of the traditional BAM has been carried out. Constraints on and characteristics of unipolar and bipolar binaries for improved storage and retrieval are discussed. The simulation results show that it has log_e N times higher storage capacity, superior performance, and faster convergence and retrieval time compared to traditional sequential and intraconnected bidirectional memories.
Antigen Potency and Maximal Efficacy Reveal a Mechanism of Efficient T Cell Activation
Wheeler, Richard J.; Zhang, Hao; Cordoba, Shaun-Paul; Peng, Yan-Chun; Chen, Ji-Li; Cerundolo, Vincenzo; Dong, Tao; Coombs, Daniel; van der Merwe, P. Anton
2014-01-01
T cell activation, a critical event in adaptive immune responses, follows productive interactions between T cell receptors (TCRs) and antigens, in the form of peptide-bound major histocompatibility complexes (pMHCs) on the surfaces of antigen-presenting-cells. Upon activation, T cells can lyse infected cells, secrete cytokines, such as interferon-γ (IFN-γ), and perform other effector functions with various efficiencies that directly depend on the binding parameters of the TCR-pMHC complex. The mechanism that relates binding parameters to the efficiency of activation of the T cell remains controversial; some studies suggest that the dissociation constant (KD) determines the response (the “affinity model”), whereas others suggest that the off-rate (koff) is critical (the “productive hit rate model”). Here, we used mathematical modeling to show that antigen potency, as determined by the EC50, the functional correlate that is used to support KD-based models, could not be used to discriminate between the affinity and productive hit rate models. Our theoretical work showed that both models predicted a correlation between antigen potency and KD, but only the productive hit rate model predicted a correlation between maximal efficacy (Emax) and koff. We confirmed the predictions made by the productive hit rate model in experiments with cytotoxic T cell clones and a panel of pMHC variants. Therefore, we suggest that the activity of an antigen is determined by both its potency and maximal efficacy. We discuss the implications of our findings to the practical evaluation of T cell activation, for example in adoptive immunotherapies, and relate our work to the pharmacological theory of dose-response. PMID:21653229
NASA Astrophysics Data System (ADS)
Kanta, L.; Giacomoni, M.; Shafiee, M. E.; Berglund, E.
2014-12-01
The sustainability of water resources is threatened by urbanization, as increasing demands deplete water availability, and changes to the landscape alter runoff and the flow regime of receiving water bodies. Utility managers typically manage urban water resources through the use of centralized solutions, such as large reservoirs, which may be limited in their ability to balance the needs of urbanization and ecological systems. Decentralized technologies, on the other hand, may improve the health of the water resources system and deliver urban water services. For example, low impact development technologies, such as rainwater harvesting, and water-efficient technologies, such as low-flow faucets and toilets, may be adopted by households to retain rainwater and reduce demands, offsetting the need for new centralized infrastructure. Decentralized technologies may create new complexities in infrastructure and water management, as decentralization depends on community behavior and participation beyond traditional water resources planning. Messages about water shortages and water quality from peers and the water utility managers can influence the adoption of new technologies. As a result, feedbacks between consumers and water resources emerge, creating a complex system. This research develops a framework to simulate the diffusion of water-efficient innovations and the sustainability of urban water resources, by coupling models of households in a community, hydrologic models of a water resources system, and a cellular automata model of land use change. Agent-based models are developed to simulate the land use and water demand decisions of individual households, and behavioral rules are encoded to simulate communication with other agents and adoption of decentralized technologies, using a model of the diffusion of innovation. The framework is applied for an illustrative case study to simulate water resources sustainability over a long-term planning horizon.
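A toy version of the diffusion-of-innovation mechanism described above can be written in a few lines: households adopt a water-efficient technology once the adopting fraction of their contacts, boosted by utility messaging, crosses a personal threshold. The network, thresholds, and messaging strength below are invented for illustration and are not the study's model.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
threshold = rng.uniform(0.05, 0.5, n)               # per-household adoption threshold
adopted = np.zeros(n, dtype=bool)
adopted[rng.choice(n, 5, replace=False)] = True     # seed early adopters
contacts = [rng.choice(n, 8, replace=False) for _ in range(n)]
messaging = 0.05                                    # utility water-shortage messaging

for step in range(30):
    # fraction of each household's contacts that have already adopted
    peer_fraction = np.array([adopted[c].mean() for c in contacts])
    adopted |= (peer_fraction + messaging) >= threshold

print(f"adoption after 30 steps: {adopted.mean():.0%}")
```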
Modeling Non-homologous End Joining
NASA Technical Reports Server (NTRS)
Li, Yongfeng
2013-01-01
Non-homologous end joining (NHEJ) is the dominant DNA double strand break (DSB) repair pathway and involves several NHEJ proteins such as Ku, DNA-PKcs, XRCC4, Ligase IV and so on. Once DSBs are generated, Ku is first recruited to the DNA end, followed by other NHEJ proteins for DNA end processing and ligation. Because of the direct ligation of break ends without the need for a homologous template, NHEJ turns out to be an error-prone but efficient repair pathway. Several mechanisms have been proposed for how the efficiency of NHEJ repair is affected. The type of DNA damage is an important factor in NHEJ repair. For instance, the length of the DNA fragment may determine the recruitment efficiency of NHEJ proteins such as Ku [1], and the complexity of the DNA breaks [2] may account for the choice of NHEJ proteins and the subpathway of NHEJ repair. On the other hand, the chromatin structure also plays a role in the accessibility of NHEJ proteins to the DNA damage site. In this talk, mathematical models of NHEJ, consisting of series of biochemical reactions that comply with the laws of chemical kinetics (e.g., mass action), will be introduced. By mathematical and numerical analysis and parameter estimation, the models are able to capture the qualitative biological features and show good agreement with experimental data. In conclusion, from the viewpoint of modeling, how the NHEJ proteins are recruited will first be discussed to connect the classical sequential model [4] and the recently proposed two-phase model [5]. Then, how the NHEJ repair pathway is affected by the length of the DNA fragment [6], the complexity of the DNA damage [7] and the chromatin structure [8] will be addressed.
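To make the mass-action style of model concrete, here is a minimal sketch of sequential recruitment (Ku binds the break first, then DNA-PKcs, then ligation) as an ODE system; the species, rate constants, and initial values are illustrative placeholders rather than the talk's fitted models.

```python
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.5, 0.2, 0.05            # binding and ligation rates (arbitrary units)

def nhej(t, y):
    dsb, ku, dsb_ku, pkcs, synaptic, repaired = y
    v1 = k1 * dsb * ku                 # Ku binds a free break end
    v2 = k2 * dsb_ku * pkcs            # DNA-PKcs joins the Ku-bound end
    v3 = k3 * synaptic                 # end processing and ligation
    return [-v1, -v1, v1 - v2, -v2, v2 - v3, v3]

y0 = [10.0, 50.0, 0.0, 30.0, 0.0, 0.0]  # initial DSBs and free proteins
sol = solve_ivp(nhej, (0.0, 200.0), y0)
print(f"repaired breaks at t=200: {sol.y[5, -1]:.2f} of {y0[0]:.0f}")
```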
Schoolmaster, Donald; Stagg, Camille L.
2018-01-01
A trade-off between competitive ability and stress tolerance has been hypothesized and empirically supported to explain the zonation of species across stress gradients for a number of systems. Since stress often reduces plant productivity, one might expect a pattern of decreasing productivity across the zones of the stress gradient. However, this pattern is often not observed in coastal wetlands that show patterns of zonation along a salinity gradient. To address the potentially complex relationship between stress, zonation, and productivity in coastal wetlands, we developed a model of plant biomass as a function of resource competition and salinity stress. Analysis of the model confirms the conventional wisdom that a trade-off between competitive ability and stress tolerance is a necessary condition for zonation. It also suggests that a negative relationship between salinity and production can be overcome if (1) the supply of the limiting resource increases with greater salinity stress or (2) nutrient use efficiency increases with increasing salinity. We fit the equilibrium solution of the dynamic model to data from Louisiana coastal wetlands to test its ability to explain patterns of production across the landscape gradient and derive predictions that could be tested with independent data. We found support for a number of the model predictions, including patterns of decreasing competitive ability and increasing nutrient use efficiency across a gradient from freshwater to saline wetlands. In addition to providing a quantitative framework to support the mechanistic hypotheses of zonation, these results suggest that this simple model is a useful platform to further build upon, simulate and test mechanistic hypotheses of more complex patterns and phenomena in coastal wetlands.
Mutti, Francesco G.; Pievo, Roberta; Sgobba, Maila; Gullotti, Michele; Santagostini, Laura
2008-01-01
The biomimetic catalytic oxidations by the dinuclear and trinuclear copper(II) complexes of two catechols, namely D-(+)-catechin and L-(−)-epicatechin, to give the corresponding quinones are reported. The unstable quinones were trapped by the nucleophilic reagent 3-methyl-2-benzothiazolinone hydrazone (MBTH), and the molar absorptivities of the different quinones have been calculated. The catalytic efficiency is moderate, as inferred from the kinetic constants, but the complexes exhibit significant enantio-differentiating ability towards the catechols, although for the dinuclear complexes this enantio-differentiating ability is lower. In all cases, the preferred enantiomeric substrate is D-(+)-catechin over the other catechol, because of the spatial disposition of this substrate. PMID:18825268
CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
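The scan-then-minimize structure of such an algorithm is easy to sketch. CCOMP itself is a published Python code; the toy 2x2 matrix function, grid, and unconstrained Nelder-Mead refinement below are our own simplifications, not the released implementation (which uses bound-constrained minimization).

```python
import numpy as np
from scipy.optimize import minimize

def A(z):
    # toy 2x2 determinantal system; det A(z) = (z^2 - 1)(z + 2) - 0.09
    return np.array([[z**2 - 1.0, 0.3], [0.3, z + 2.0]])

def min_mod_eig(x):
    z = complex(x[0], x[1])
    return np.abs(np.linalg.eigvals(A(z))).min()

# coarse scan of the prescribed domain for candidate points
re = np.linspace(-3, 3, 61)
im = np.linspace(-3, 3, 61)
vals = np.array([[min_mod_eig((r, i)) for r in re] for i in im])
ci, cj = np.unravel_index(np.argmin(vals), vals.shape)

# local refinement from the best candidate
res = minimize(min_mod_eig, x0=[re[cj], im[ci]], method="Nelder-Mead")
print(f"root near z = {res.x[0]:.4f} {res.x[1]:+.4f}j, min |eig| = {res.fun:.2e}")
```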
Towards quantification of vibronic coupling in photosynthetic antenna complexes
NASA Astrophysics Data System (ADS)
Singh, V. P.; Westberg, M.; Wang, C.; Dahlberg, P. D.; Gellen, T.; Gardiner, A. T.; Cogdell, R. J.; Engel, G. S.
2015-06-01
Photosynthetic antenna complexes harvest sunlight and efficiently transport energy to the reaction center, where charge separation powers biochemical energy storage. The discovery of long-lived quantum coherence during energy transfer has sparked discussion of the role of quantum coherence in energy transfer efficiency. Early works assigned the observed coherences to electronic states, and theoretical studies showed that electronic coherences could affect energy transfer efficiency—by either enhancing or suppressing transfer. However, the nature of these coherences has been fiercely debated, as coherences only report the energy gap between the states that generate the coherence signals. Recent works have suggested that the coherences observed in photosynthetic antenna complexes arise either from vibrational wave packets on the ground state or, alternatively, from mixed electronic and vibrational states. Understanding the origin of coherences is important for designing molecules for efficient light harvesting. Here, we report a direct experimental observation from a mutant of LH2, which does not have B800 chromophores, that distinguishes between electronic, vibrational, and vibronic coherence. We also present a minimal theoretical model to characterize the coherences both in the two limiting cases of purely vibrational and purely electronic coherence and in the intermediate, vibronic regime.
Efficient Recreation of t(11;22) EWSR1-FLI1+ in Human Stem Cells Using CRISPR/Cas9.
Torres-Ruiz, Raul; Martinez-Lage, Marta; Martin, Maria C; Garcia, Aida; Bueno, Clara; Castaño, Julio; Ramirez, Juan C; Menendez, Pablo; Cigudosa, Juan C; Rodriguez-Perales, Sandra
2017-05-09
Efficient methodologies for recreating cancer-associated chromosome translocations are in high demand as tools for investigating how such events initiate cancer. The CRISPR/Cas9 system has been used to reconstruct the genetics of these complex rearrangements at native loci while maintaining the architecture and regulatory elements. However, the CRISPR system remains inefficient in human stem cells. Here, we compared three strategies aimed at enhancing the efficiency of the CRISPR-mediated t(11;22) translocation in human stem cells, including mesenchymal and induced pluripotent stem cells: (1) using end-joining DNA processing factors involved in repair mechanisms, or (2) ssODNs to guide the ligation of the double-strand break ends generated by CRISPR/Cas9; and (3) all-in-one plasmid or ribonucleoprotein complex-based approaches. We report that the generation of targeted t(11;22) is significantly increased by using a combination of ribonucleoprotein complexes and ssODNs. The CRISPR/Cas9-mediated generation of targeted t(11;22) in human stem cells opens up new avenues in modeling Ewing sarcoma.
Enhanced photocurrent production by bio-dyes of photosynthetic macromolecules on designed TiO2 film
Yu, Daoyong; Wang, Mengfei; Zhu, Guoliang; Ge, Baosheng; Liu, Shuang; Huang, Fang
2015-01-01
The macromolecular pigment-protein complex has the merit of high efficiency for light-energy capture and transfer after long-term photosynthetic evolution. Here, bio-dyes of A. platensis photosystem I (PSI) and spinach light-harvesting complex II (LHCII) are spontaneously sensitized on three types of designed TiO2 films, to assess the effects of the pigment-protein complex on the performance of bio-dye sensitized solar cells (SSC). Adsorption models of the bio-dyes are proposed based on the 3D structures of PSI and LHCII and the size of particles and inner pores in the TiO2 film. PSI shows its merit of high efficiency for captured-energy transfer, charge separation and transfer in the electron transfer chain (ETC), and electron injection from FB to the TiO2 conduction band. After optimization, the best short-circuit current (JSC) and photoelectric conversion efficiency (η) of PSI-SSC and LHCII-SSC are 1.31 mA cm^-2 and 0.47%, and 1.51 mA cm^-2 and 0.52%, respectively. The potential for further improvement of this PSI-based SSC is significant and could lead to better utilization of solar energy. PMID:25790735
Approaches and possible improvements in the area of multibody dynamics modeling
NASA Technical Reports Server (NTRS)
Lips, K. W.; Singh, R.
1987-01-01
A wide-ranging look is taken at issues involved in the dynamic modeling of complex, multibodied orbiting space systems. Capabilities and limitations of two major codes (DISCOS, TREETOPS) are assessed and possible extensions to the CONTOPS software are outlined. In addition, recommendations are made concerning the direction future development should take in order to achieve higher-fidelity, more computationally efficient multibody software solutions.
Optimizing complex phenotypes through model-guided multiplex genome engineering
Kuznetsov, Gleb; Goodman, Daniel B.; Filsinger, Gabriel T.; ...
2017-05-25
Here, we present a method for identifying genomic modifications that optimize a complex phenotype through multiplex genome engineering and predictive modeling. We apply our method to identify six single nucleotide mutations that recover 59% of the fitness defect exhibited by the 63-codon E. coli strain C321.ΔA. By introducing targeted combinations of changes in multiplex we generate rich genotypic and phenotypic diversity and characterize clones using whole-genome sequencing and doubling time measurements. Regularized multivariate linear regression accurately quantifies individual allelic effects and overcomes bias from hitchhiking mutations and context-dependence of genome editing efficiency that would confound other strategies.
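The regression step lends itself to a compact illustration: a ridge-penalized least-squares fit of per-allele effects from clone genotypes and doubling times. The synthetic data and penalty below are placeholders; the paper's actual regularization choices are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clones, n_alleles = 60, 6
X = rng.integers(0, 2, size=(n_clones, n_alleles)).astype(float)  # genotypes
true_effect = np.array([-0.30, -0.15, 0.0, 0.0, -0.05, 0.10])
y = X @ true_effect + rng.normal(0, 0.05, n_clones)   # doubling-time change

lam = 1.0   # ridge penalty: shrinks spurious effects of hitchhiking alleles
beta = np.linalg.solve(X.T @ X + lam * np.eye(n_alleles), X.T @ y)
for i, b in enumerate(beta):
    print(f"allele {i}: estimated {b:+.3f} (true {true_effect[i]:+.2f})")
```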
Shekhawat, Lalita Kanwar; Sarkar, Jayati; Gupta, Rachit; Hadpe, Sandeep; Rathore, Anurag S
2018-02-10
Centrifugation continues to be one of the most commonly used unit operations for achieving efficient harvest of the product from mammalian cell culture broth during production of therapeutic monoclonal antibodies (mAbs). Since mammalian cells are known to be shear sensitive, optimal performance of the centrifuge requires a balance between productivity and shear. In this study, Computational Fluid Dynamics (CFD) has been successfully used as a tool to facilitate efficient optimization. A multiphase Eulerian-Eulerian model coupled with the Gidaspow drag model, along with an Eulerian-Eulerian k-ε mixture turbulence model, has been used to quantify the complex hydrodynamics of the centrifuge and thus evaluate the turbulent stresses generated by the centrifugal forces. An empirical model has been developed by statistical analysis of experimentally observed cell lysis data as a function of turbulent stresses. An operating window that offers the optimal balance between high productivity, high separation efficiency, and low cell damage has been identified by use of CFD modeling.
Structure of a BMI-1-Ring1B Polycomb Group Ubiquitin Ligase Complex
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Z.; Cao, R.; Wang, M.
2006-01-01
Polycomb group (PcG) proteins Bmi-1 and Ring1B are core subunits of the PRC1 complex, which plays important roles in the regulation of Hox gene expression, X-chromosome inactivation, tumorigenesis and stem cell self-renewal. The RING finger protein Ring1B is an E3 ligase that participates in the ubiquitination of lysine 119 of histone H2A, and the binding of Bmi-1 stimulates the E3 ligase activity. We have mapped the regions of Bmi-1 and Ring1B required for efficient ubiquitin transfer and determined a 2.5 Å structure of the Bmi-1-Ring1B core domain complex. The structure reveals that Ring1B 'hugs' Bmi-1 through extensive RING domain contacts and its N-terminal tail wraps around Bmi-1. The two regions of interaction have a synergistic effect on the E3 ligase activity. Our analyses suggest a model where the Bmi-1-Ring1B complex stabilizes the interaction between the E2 enzyme and the nucleosomal substrate to allow efficient ubiquitin transfer.
NASA Astrophysics Data System (ADS)
Choi, Eunsong
Computer simulations are an integral part of research in modern condensed matter physics; they serve as a direct bridge between theory and experiment by systematically applying a microscopic model to a collection of particles that effectively imitates a macroscopic system. In this thesis, we study two very different condensed matter systems, namely complex fluids and frustrated magnets, primarily by simulating the classical dynamics of each system. In the first part of the thesis, we focus on ionic liquids (ILs) and polymers, two complementary classes of materials that can be combined to provide various unique properties. The properties of polymer/IL systems, such as conductivity, viscosity, and miscibility, can be fine-tuned by choosing an appropriate combination of cations, anions, and polymers. However, designing a system that meets a specific need requires a concrete understanding of the physics and chemistry that dictate the complex interplay between polymers and ionic liquids. In this regard, molecular dynamics (MD) simulation is an efficient tool that provides a molecular-level picture of such complex systems. We study the behavior of poly(ethylene oxide) (PEO) and imidazolium-based ionic liquids using MD simulations and statistical mechanics. We also discuss our efforts to develop reliable and efficient classical force fields for PEO and the ionic liquids. The second part is devoted to studies of geometrically frustrated magnets. In particular, a microscopic model that gives rise to the incommensurate spiral magnetic ordering observed in a pyrochlore antiferromagnet is investigated. The model is validated via a comparison of the spin-wave spectra with neutron scattering data. Since the standard Holstein-Primakoff method is difficult to employ in such a complex ground-state structure with a large unit cell, we carry out classical spin dynamics simulations to compute spin-wave spectra directly from the Fourier transform of spin trajectories. We conclude the study by showing an excellent agreement between the simulation and the experiment.
Ikkersheim, David; Tanke, Marit; van Schooten, Gwendy; de Bresser, Niels; Fleuren, Hein
2013-06-16
The majority of curative health care is organized in hospitals. As in most other countries, the current 94 hospital locations in the Netherlands offer almost all treatments, ranging from rather basic to very complex care. Recent studies show that concentration of care can lead to substantial quality improvements for complex conditions and that dispersion of care for chronic conditions may increase quality of care. In previous studies on the allocation of hospital infrastructure, the allocation is usually based only on accessibility and/or efficiency of hospital care. In this paper, we explore the possibilities of including a quality function in the objective function, to give global directions to what the 'optimal' hospital infrastructure would be in the Dutch context. To create optimal societal value we have used a mathematical mixed integer programming (MIP) model that balances quality, efficiency and accessibility of care for 30 ICD-9 diagnosis groups. Typical aspects that are taken into account are the volume-outcome relationship, the maximum accepted travel times for diagnosis groups that may need emergency treatment, and the minimum use of facilities. The optimal number of hospital locations per diagnosis group varies from 12-14 locations for diagnosis groups with a strong volume-outcome relationship, such as neoplasms, to 150 locations for chronic diagnosis groups such as diabetes and chronic obstructive pulmonary disease (COPD). In conclusion, our study shows a new approach for allocating hospital infrastructure over a country or region that includes quality of care in relation to volume per provider, and that can be used in various countries or regions. In addition, our model shows that within the Dutch context chronic care may be too concentrated and complex and/or acute care may be too dispersed. Our approach can relatively easily be adapted to other countries or regions and is very suitable for performing 'what-if' analyses.
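A heavily stripped-down sketch of such an allocation MIP, written with the PuLP modeling library, is shown below: binary variables choose which locations offer a diagnosis group, regions are assigned to one location each, and a minimum-volume constraint stands in for the volume-outcome relationship. All sets, demands, distances, and thresholds are invented for illustration.

```python
import random
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

hospitals = ["H1", "H2", "H3"]
regions = ["R1", "R2", "R3", "R4"]
demand = {"R1": 120, "R2": 80, "R3": 60, "R4": 140}      # patients per year
random.seed(0)
dist = {(r, h): random.randint(5, 80) for r in regions for h in hospitals}  # km
MIN_VOLUME = 100   # annual volume required where the service is offered

prob = LpProblem("hospital_allocation", LpMinimize)
open_ = {h: LpVariable(f"open_{h}", cat=LpBinary) for h in hospitals}
x = {(r, h): LpVariable(f"x_{r}_{h}", cat=LpBinary) for r in regions for h in hospitals}

# objective: total patient travel burden
prob += lpSum(demand[r] * dist[r, h] * x[r, h] for r in regions for h in hospitals)
for r in regions:
    prob += lpSum(x[r, h] for h in hospitals) == 1       # each region served once
for h in hospitals:
    prob += lpSum(demand[r] * x[r, h] for r in regions) >= MIN_VOLUME * open_[h]
    for r in regions:
        prob += x[r, h] <= open_[h]                      # only open sites serve

prob.solve()
print("open locations:", [h for h in hospitals if open_[h].value() > 0.5])
```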
Ziraksaz, Zarrintaj; Nomani, Alireza; Ruponen, Marika; Soleimani, Masoud; Tabbakhian, Majid; Haririan, Ismaeil
2013-01-23
Interaction of cell-surface glycosaminoglycans (GAGs) with non-viral vectors seems to be an important factor that modifies the intracellular destination of gene complexes. The intracellular kinetics of polyamidoamine (PAMAM) dendrimer as a non-viral vector in cellular uptake, intranuclear delivery and transgene expression of plasmid DNA with regard to cell-surface GAGs has not been investigated until now. The physicochemical properties of the PAMAM-pDNA complexes were characterized by photon correlation spectroscopy, atomic force microscopy, zeta-potential measurement and agarose gel electrophoresis. The transfection efficiency and toxicity of the complexes at different nitrogen-to-phosphate (N:P) ratios were determined using various in vitro cell models such as human embryonic kidney cells, Chinese hamster ovary (CHO) cells and CHO mutants lacking cell-surface GAGs or heparan sulphate proteoglycans (HSPGs). Cellular uptake, nuclear uptake and transfection efficiency of the complexes were determined using flow cytometry and optimized cell-nuclei isolation with quantitative real-time PCR and luciferase assays. Physicochemical studies showed that the PAMAM dendrimer binds pDNA efficiently, forms small complexes with high positive zeta potential and transfects cells properly at N:P ratios of around 5 and higher. Cytotoxicity could be a problem at N:P ratios higher than 10. GAG elimination caused nearly one order of magnitude higher pDNA nuclear uptake and more than 2.6-fold higher transfection efficiency than in CHO parent cells. However, neither the AUC of nuclear uptake nor the AUC of transfection was significantly affected by elimination of cell-surface HSPGs alone, and interesting data on the effect of GAGs on intranuclear pDNA using PAMAM as the delivery vector are reported in this study. The presented data show that the rate-limiting step of PAMAM-pDNA complex transfection is located after delivery to the cell nucleus, and that GAGs act as an inhibitor of the intranuclear delivery step while slightly promoting transgene expression.
Biocellion: accelerating computer simulation of multicellular biological system models.
Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya
2014-11-01
Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information.
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and an internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates, efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates.
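For readers unfamiliar with RC, the basic single-covariate version (not the paper's interaction, NBRC, or LRC machinery) is easy to sketch: a calibration model E[X|W] is fitted in the internal sub-study, and the outcome model is then fitted on the calibrated covariate. The simulated data below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)
n_main, n_sub = 1000, 200
x_main = rng.normal(0, 1, n_main)
w_main = x_main + rng.normal(0, 0.8, n_main)      # error-prone measure (classical ME)
y_main = 0.5 * x_main + rng.normal(0, 0.5, n_main)

x_sub = rng.normal(0, 1, n_sub)                   # sub-study has both W and X
w_sub = x_sub + rng.normal(0, 0.8, n_sub)

# calibration model E[X|W] fitted in the sub-study
lam, b0 = np.polyfit(w_sub, x_sub, 1)             # slope, intercept
x_hat = b0 + lam * w_main                         # calibrated covariate

naive = np.polyfit(w_main, y_main, 1)[0]          # attenuated slope
rc = np.polyfit(x_hat, y_main, 1)[0]              # regression-calibrated slope
print(f"naive slope {naive:.3f}, RC slope {rc:.3f}, true 0.5")
```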
Liu, Jianhui; Jiang, Weina
2012-08-28
Coordination of the pyridyl-attached diiron azadithiolate hexacarbonyl complexes (2 and 3) through the pyridyl nitrogen to the Re of 1,10-phenanthroline rhenium (5a) and 2,9-diphenyl-1,10-phenanthroline rhenium (5b) forms the novel [Re-Fe] complexes 7a, 7b and 8, respectively. Under visible light illumination, using triethylamine as a sacrificial electron donor and the [Re-Fe] type complexes (7a, 7b or 8) as catalysts, remarkably increased efficiency was observed for photoinduced hydrogen production, with a turnover number reaching 11.8 for complex 7a and 8.75 for 7b. To the best of our knowledge, these are the best values reported so far among [Re-Fe] photocatalysts. In contrast to the parent molecules, the turnover number for the intermolecular combination of complexes 6a and 2 was 5.23, and that for 6b and 2 was 3.8, while no H2 was detected from 8a and 3 under the same experimental conditions. Obviously, the intramolecular combination of rhenium(I) and [2Fe2S] as a catalyst is promising for efficient H2 evolution, and it is better than the intermolecular multi-component system.
Rapid Global Fitting of Large Fluorescence Lifetime Imaging Microscopy Datasets
Warren, Sean C.; Margineanu, Anca; Alibhai, Dominic; Kelly, Douglas J.; Talbot, Clifford; Alexandrov, Yuriy; Munro, Ian; Katan, Matilda
2013-01-01
Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster Resonant Energy Transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment. PMID:23940626
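The variable-projection idea is compact enough to sketch: the shared lifetimes are the only nonlinear parameters, and the per-pixel amplitudes are eliminated inside the objective by an exact linear solve. The toy bi-exponential data below omit the IRF, background, and repetitive-excitation corrections that FLIMfit handles.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 10, 256)                      # time bins (ns)
rng = np.random.default_rng(1)
amps = rng.uniform(0.2, 1.0, size=(50, 2))       # 50 pixels x 2 species
data = amps @ np.exp(-t[None, :] / np.array([0.8, 3.0])[:, None])
data += rng.normal(0, 0.01, data.shape)          # synthetic noisy decays

def residual(log_tau):
    tau = np.exp(log_tau)                        # keeps lifetimes positive
    basis = np.exp(-t[None, :] / tau[:, None])   # shared 2 x n_bins design
    # per-pixel amplitudes solved exactly (the variable projection step)
    a, *_ = np.linalg.lstsq(basis.T, data.T, rcond=None)
    return np.sum((data - a.T @ basis) ** 2)

res = minimize(residual, x0=np.log([0.5, 5.0]), method="Nelder-Mead")
print("fitted global lifetimes (ns):", np.round(np.exp(res.x), 3))   # ~[0.8, 3.0]
```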
NASA Astrophysics Data System (ADS)
Sizov, Gennadi Y.
In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
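The outer optimization loop of such a design procedure can be illustrated with SciPy's differential evolution driving a toy objective; in the dissertation the objective is the finite element solver itself, whereas the analytic two-variable surrogate below is entirely invented.

```python
import numpy as np
from scipy.optimize import differential_evolution

def design_cost(x):
    magnet_thickness, slot_depth = x                   # mm, mm (toy variables)
    loss = 50.0 / magnet_thickness + 0.8 * slot_depth  # invented loss surrogate
    magnet_volume = 4.0 * magnet_thickness             # invented material proxy
    return loss + 0.5 * magnet_volume                  # weighted-sum scalarization

bounds = [(2.0, 10.0), (10.0, 40.0)]                   # geometric limits
result = differential_evolution(design_cost, bounds, seed=3)
print("design:", np.round(result.x, 3), "cost:", round(result.fun, 3))
```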
A computable expression of closure to efficient causation.
Mossio, Matteo; Longo, Giuseppe; Stewart, John
2009-04-07
In this paper, we propose a mathematical expression of closure to efficient causation in terms of the lambda-calculus; we argue that this opens up the perspective of developing principled computer simulations of systems closed to efficient causation in an appropriate programming language. An important implication of our formulation is that, by exhibiting an expression in the lambda-calculus, which is a paradigmatic formalism for computability and programming, we show that there are no conceptual or principled problems in realizing a computer simulation or model of closure to efficient causation. We conclude with a brief discussion of whether closure to efficient causation captures all relevant properties of living systems. We suggest that it might not, and that more complex definitions could indeed create some crucial obstacles to computability.
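The paper's construction is not reproduced here, but the flavor of expressing self-referential ("closed") definitions in the lambda-calculus can be shown with the strict fixed-point (Z) combinator, written with Python lambdas as a stand-in for lambda-calculus notation; this is an illustrative analogy only.

```python
# Strict fixed-point (Z) combinator: Z = \f.(\x.f(\v.x x v))(\x.f(\v.x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A definition that refers to itself without naming itself:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```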
Identification of cascade water tanks using a PWARX model
NASA Astrophysics Data System (ADS)
Mattsson, Per; Zachariah, Dave; Stoica, Petre
2018-06-01
In this paper we consider the identification of a discrete-time nonlinear dynamical model for a cascade water tank process. The proposed method starts with a nominal linear dynamical model of the system, and proceeds to model its prediction errors using a model that is piecewise affine in the data. As data is observed, the nominal model is refined into a piecewise ARX model which can capture a wide range of nonlinearities, such as the saturation in the cascade tanks. The proposed method uses a likelihood-based methodology which adaptively penalizes model complexity and directly leads to a computationally efficient implementation.
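A bare-bones version of the two-stage idea can be sketched as follows: fit a nominal ARX model by least squares, then fit affine corrections to its prediction errors in hand-chosen regions of the regressor space to mimic the tank saturation. The simulated data and fixed switching threshold are our simplifications; the paper instead learns the partition with a likelihood-based complexity penalty.

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.uniform(0, 2, 500)                       # input flow
y = np.zeros(500)
for k in range(1, 500):                          # toy tank with hard saturation
    y[k] = min(0.9 * y[k - 1] + 0.5 * u[k - 1], 3.0) + rng.normal(0, 0.01)

phi = np.column_stack([y[:-1], u[:-1]])          # regressor [y(k-1), u(k-1)]
theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)   # nominal ARX fit
err = y[1:] - phi @ theta                        # nominal prediction errors

# piecewise-affine error model: separate affine fits below/above saturation
for name, mask in [("unsaturated", y[:-1] < 2.5), ("saturated", y[:-1] >= 2.5)]:
    Z = np.column_stack([phi[mask], np.ones(mask.sum())])
    w, *_ = np.linalg.lstsq(Z, err[mask], rcond=None)
    print(name, "affine correction:", np.round(w, 4))
```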
A Quadtree Organization Construction and Scheduling Method for Urban 3D Model Based on Weight
NASA Astrophysics Data System (ADS)
Yao, C.; Peng, G.; Song, Y.; Duan, M.
2017-09-01
The increase in urban 3D model precision and data quantity puts forward higher requirements for real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering effect and efficiency. This paper takes the complexity of urban models into account and proposes a weight-based quadtree construction and scheduling rendering method for urban 3D models. The urban 3D model is divided into different rendering weights according to certain rules, and quadtree construction and scheduled rendering are performed according to these rendering weights. An algorithm is also proposed for extracting bounding boxes based on model drawing primitives to generate LOD models automatically. Using the algorithm proposed in this paper, a 3D urban planning and management software package was developed; practice has shown that the algorithm is efficient and feasible, with the render frame rate of both big and small scenes stable at around 25 frames per second.
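The weight-driven organization can be sketched with a small recursive structure: tiles subdivide while their accumulated model weight exceeds a budget, and leaf tiles are scheduled heaviest-first. The bounds, weights, and uniform weight split below are illustrative assumptions, not the paper's rules.

```python
from dataclasses import dataclass, field

@dataclass
class QuadNode:
    x: float
    y: float
    size: float
    weight: float
    children: list = field(default_factory=list)

    def build(self, budget=10.0):
        # subdivide while this tile carries more model weight than the budget
        if self.weight > budget and self.size > 1.0:
            half = self.size / 2
            for dx in (0, 1):
                for dy in (0, 1):
                    child = QuadNode(self.x + dx * half, self.y + dy * half,
                                     half, self.weight / 4)   # uniform split
                    child.build(budget)
                    self.children.append(child)

    def schedule(self):
        # return leaf tiles, heaviest (most complex) first
        if not self.children:
            return [self]
        leaves = [leaf for c in self.children for leaf in c.schedule()]
        return sorted(leaves, key=lambda n: n.weight, reverse=True)

root = QuadNode(0.0, 0.0, size=8.0, weight=37.0)
root.build()
print([(n.x, n.y, round(n.weight, 2)) for n in root.schedule()[:4]])
```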
Fitting Neuron Models to Spike Trains
Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
NASA Astrophysics Data System (ADS)
Wu, Leyuan
2018-01-01
We present a brief review of gravity forward algorithms in Cartesian coordinate system, including both space-domain and Fourier-domain approaches, after which we introduce a truly general and efficient algorithm, namely the convolution-type Gauss fast Fourier transform (Conv-Gauss-FFT) algorithm, for 2D and 3D modeling of gravity potential and its derivatives due to sources with arbitrary geometry and arbitrary density distribution which are defined either by discrete or by continuous functions. The Conv-Gauss-FFT algorithm is based on the combined use of a hybrid rectangle-Gaussian grid and the fast Fourier transform (FFT) algorithm. Since the gravity forward problem in Cartesian coordinate system can be expressed as continuous convolution-type integrals, we first approximate the continuous convolution by a weighted sum of a series of shifted discrete convolutions, and then each shifted discrete convolution, which is essentially a Toeplitz system, is calculated efficiently and accurately by combining circulant embedding with the FFT algorithm. Synthetic and real model tests show that the Conv-Gauss-FFT algorithm can obtain high-precision forward results very efficiently for almost any practical model, and it works especially well for complex 3D models when gravity fields on large 3D regular grids are needed.
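The FFT building block of the algorithm is straightforward to illustrate: the sketch below evaluates the vertical gravity of a synthetic 2D surface-density grid as a zero-padded (circulant-embedded) FFT convolution with a point-mass kernel. Only the discrete-convolution step is shown; the Gaussian-quadrature weighting that defines Conv-Gauss-FFT is omitted, and all grid parameters are invented.

```python
import numpy as np
from scipy.signal import fftconvolve

G = 6.674e-11                        # gravitational constant (SI)
n, dx, z = 128, 100.0, 50.0          # grid size, spacing (m), observation height (m)
rng = np.random.default_rng(4)
sigma = rng.uniform(0, 500, (n, n))  # synthetic surface density (kg/m^2)

# point-mass kernel: vertical gravity at height z for every lateral offset
offsets = np.arange(-n + 1, n) * dx
X, Y = np.meshgrid(offsets, offsets, indexing="ij")
kernel = G * z / (X**2 + Y**2 + z**2) ** 1.5

# zero-padded FFT convolution (linear convolution via circulant embedding)
gz = fftconvolve(sigma * dx * dx, kernel, mode="valid")   # (n, n) field
print("gz grid:", gz.shape, f"max gz: {gz.max():.3e} m/s^2")
```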
Virtual sensor models for real-time applications
NASA Astrophysics Data System (ADS)
Hirsenkorn, Nils; Hanke, Timo; Rauch, Andreas; Dehlink, Bernhard; Rasshofer, Ralph; Biebl, Erwin
2016-09-01
The increased complexity and criticality of future driver assistance systems demand extensive testing and validation. As a supplement to road tests, driving simulations offer various benefits. For driver assistance functions, the perception of the sensors is crucial; therefore, the sensors also have to be modeled. In this contribution, a statistical, data-driven sensor model is described. The state-space-based method is capable of modeling various types of behavior. The modeling of the position estimation of an automotive radar system, including autocorrelations, is presented. To render the model real-time capable, an efficient implementation is presented.
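As a minimal example of the statistical, state-space flavor of such a model, the sketch below generates a radar range error as a first-order autoregressive (AR(1)) process, which reproduces the error autocorrelation that a white-noise sensor model would miss. The coefficient and noise level are placeholders, not identified values.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
truth = 0.2 * np.arange(n)                     # true target range (m), per step

a, sigma = 0.95, 0.05                          # AR(1) coefficient, drive noise
err = np.zeros(n)
for k in range(1, n):
    err[k] = a * err[k - 1] + rng.normal(0, sigma)   # correlated error state

measured = truth + err                         # virtual radar range output
lag1 = np.corrcoef(err[:-1], err[1:])[0, 1]
print(f"simulated lag-1 error autocorrelation: {lag1:.2f} (model value: {a})")
```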
Three-Dimensional High Fidelity Progressive Failure Damage Modeling of NCF Composites
NASA Technical Reports Server (NTRS)
Aitharaju, Venkat; Aashat, Satvir; Kia, Hamid G.; Satyanarayana, Arunkumar; Bogert, Philip B.
2017-01-01
Performance prediction of off-axis laminates is of significant interest in designing composite structures for energy absorption. Phenomenological models available in most of the commercial programs, where the fiber and resin properties are smeared, are very efficient for large scale structural analysis, but lack the ability to model the complex nonlinear behavior of the resin and fail to capture the complex load transfer mechanisms between the fiber and the resin matrix. On the other hand, high fidelity mesoscale models, where the fiber tows and matrix regions are explicitly modeled, have the ability to account for the complex behavior in each of the constituents of the composite. However, creating a finite element model of a larger scale composite component could be very time consuming and computationally very expensive. In the present study, a three-dimensional mesoscale model of non-crimp composite laminates was developed for various laminate schemes. The resin material was modeled as an elastic-plastic material with nonlinear hardening. The fiber tows were modeled with an orthotropic material model with brittle failure. In parallel, new stress based failure criteria combined with several damage evolution laws for matrix stresses were proposed for a phenomenological model. The results from both the mesoscale and phenomenological models were compared with the experiments for a variety of off-axis laminates.
Inactivation of urease by catechol: Kinetics and structure.
Mazzei, Luca; Cianci, Michele; Musiani, Francesco; Lente, Gábor; Palombo, Marta; Ciurli, Stefano
2017-01-01
Urease is a Ni(II)-containing enzyme that catalyzes the hydrolysis of urea to yield ammonia and carbamate at a rate 10^15 times higher than that of the uncatalyzed reaction. Urease is a virulence factor of several human pathogens, in addition to decreasing the efficiency of soil organic nitrogen fertilization. Therefore, efficient urease inhibitors are actively sought. In this study, we describe a molecular characterization of the interaction of ureases from Sporosarcina pasteurii (SPU) and Canavalia ensiformis (jack bean, JBU) with catechol, a model polyphenol. In particular, catechol irreversibly inactivates both SPU and JBU with a complex radical-based autocatalytic multistep mechanism. The crystal structure of the SPU-catechol complex, determined at 1.50 Å resolution, reveals the structural details of the enzyme inhibition. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Harder, S. R.; Roulet, N. T.; Strachan, I. B.; Crill, P. M.; Persson, A.; Pelletier, L.; Watt, C.
2014-12-01
Various microforms, created by spatially differential thawing of permafrost, make up the subarctic heterogeneous Stordalen peatland complex (68°22'N, 19°03'E), near Abisko, Sweden. This results in significantly different peatland vegetation communities across short distances, as well as differences in wetness, temperature and peat substrates. We have been measuring the spatially integrated CO2, heat and water vapour fluxes from this peatland complex using eddy covariance (EC) and the CO2 exchange from specific plant communities within the EC tower footprint since spring 2008. With these data we are examining whether it is possible to derive the spatially integrated ecosystem-wide fluxes from community-level simple light use efficiency (LUE) and ecosystem respiration (ER) models. These models have been developed using several years of continuous autochamber flux measurements for the three major plant functional types (PFTs) as well as knowledge of the spatial variability of the vegetation, water table and active layer depths. LIDAR was used to produce a 1 m resolution digital elevation model of the complex, and the spatial distribution of PFTs was obtained from concurrent high-resolution digital colour air photography trained from vegetation surveys. Continuous water table depths have been measured for four years at over 40 locations in the complex, and peat temperatures and active layer depths are surveyed every 10 days at more than 100 locations. The EC footprint is calculated for every half-hour and the PFT-based models are run with the corresponding environmental variables weighted for the PFTs within the EC footprint. Our results show that the Sphagnum, palsa, and sedge PFTs have distinctly different LUE models, and that the tower fluxes are dominated by a blend of the Sphagnum and palsa PFTs. We also see distinctly different energy partitioning between the fetches containing intact palsa and those with thawed palsa: the evaporative efficiency is higher and the Bowen ratio lower for the thawed palsa fetches.
Health economics, equity, and efficiency: are we almost there?
Ferraz, Marcos Bosi
2015-01-01
Health care is a highly complex, dynamic, and creative sector of the economy. While health economics has to continue its efforts to improve its methods and tools to better inform decisions, the application needs to be aligned with the insights and models of other social sciences disciplines. Decisions may be guided by four concept models based on ethical and distributive justice: libertarian, communitarian, egalitarian, and utilitarian. The societal agreement on one model or a defined mix of models is critical to avoid inequity and unfair decisions in a public and/or private insurance-based health care system. The excess use of methods and tools without fully defining the basic goals and philosophical principles of the health care system and without evaluating the fitness of these measures to reaching these goals may not contribute to an efficient improvement of population health. PMID:25709481
Verification of Functional Fault Models and the Use of Resource Efficient Verification Tools
NASA Technical Reports Server (NTRS)
Bis, Rachael; Maul, William A.
2015-01-01
Functional fault models (FFMs) are a directed graph representation of the failure effect propagation paths within a system's physical architecture and are used to support development and real-time diagnostics of complex systems. Verification of these models is required to confirm that the FFMs are correctly built and accurately represent the underlying physical system. However, a manual, comprehensive verification process applied to the FFMs was found to be error-prone, owing to the intensive and customized process necessary to verify each individual component model, and to require a burdensome level of resources. To address this problem, automated verification tools have been developed and utilized to mitigate these key pitfalls. This paper discusses the verification of the FFMs and presents the tools that were developed to make the verification process more efficient and effective.
An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).
Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling
2018-04-17
Aimed at the low energy consumption required for a Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of the block sparsity degree, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have a low computational complexity, so they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. Therefore, the proposed compressive image coding is a potential energy-efficient scheme for Green IoT.
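A hedged sketch of the encode/decode split described above: the fragment measures one block with a random matrix and reconstructs it with the closed-form linear-MMSE projection. The block covariance model, noise level, and matrix sizes are our assumptions, not the authors' trained quantities.

```python
# Block-wise CS measurement plus a linear (projection-matrix) decoder.
import numpy as np

rng = np.random.default_rng(1)
B, m = 64, 24                                   # 8x8 block, 24 measurements
Phi = rng.standard_normal((m, B)) / np.sqrt(m)  # measurement matrix

# Assumed covariance for image blocks (smooth AR(1)-style correlation).
idx = np.arange(B)
Sigma_x = 0.95 ** np.abs(idx[:, None] - idx[None, :])
N = 1e-3 * np.eye(m)                            # measurement-noise covariance

# Linear MMSE decoder: x_hat = P y, P = Sigma_x Phi^T (Phi Sigma_x Phi^T + N)^-1
P = Sigma_x @ Phi.T @ np.linalg.inv(Phi @ Sigma_x @ Phi.T + N)

x = rng.multivariate_normal(np.zeros(B), Sigma_x)  # synthetic block
y = Phi @ x                                        # cheap encoder
x_hat = P @ y                                      # decoding is one mat-vec
```

Both stages are a single matrix-vector product, which is the source of the low energy cost the paper emphasizes.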
Gui, Jiang; Andrew, Angeline S.; Andrews, Peter; Nelson, Heather M.; Kelsey, Karl T.; Karagas, Margaret R.; Moore, Jason H.
2010-01-01
Epistasis, or gene-gene interaction, is a fundamental component of the genetic architecture of complex traits such as disease susceptibility. Multifactor dimensionality reduction (MDR) was developed as a nonparametric and model-free method to detect epistasis when there are no significant marginal genetic effects. However, in many studies of complex disease, other covariates such as age of onset and smoking status can have a strong main effect and may potentially interfere with MDR's ability to achieve its goal. In this paper, we present a simple and computationally efficient sampling method to adjust for covariate effects in MDR. We use simulation to show that, after adjustment, MDR has sufficient power to detect true gene-gene interactions. We also compare our method with the state-of-the-art technique in covariate adjustment. The results suggest that our proposed method performs similarly but is more computationally efficient. We then apply this new method to an analysis of a population-based bladder cancer study in New Hampshire. PMID:20924193
Epidemics in Complex Networks: The Diversity of Hubs
NASA Astrophysics Data System (ADS)
Kitsak, Maksim; Gallos, Lazaros K.; Havlin, Shlomo; Stanley, H. Eugene; Makse, Hernan A.
2009-03-01
Many complex systems are believed to be vulnerable to the spread of viruses and information owing to their high level of interconnectivity. Even viruses of low contagiousness easily proliferate across the Internet. Rumors, fads, and innovative ideas are prone to efficient spreading in various social systems. Another commonly accepted standpoint is the importance of the most connected elements (hubs) in the spreading processes. We address the following questions. Do all hubs conduct epidemics in the same manner? How does epidemic spread depend on the structure of the network? What is the most efficient way to spread information over the system? We analyze several large-scale systems in the framework of the susceptible/infective/removed (SIR) disease spread model, which can also be mapped to the problem of rumor or fad spreading. We show that hubs are often ineffective in the transmission of virus or information owing to the highly heterogeneous topology of most networks. We also propose a new tool to evaluate the efficiency of nodes in spreading virus or information.
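A toy version of this experiment is easy to reproduce. The sketch below (our illustration; networkx and every parameter are arbitrary choices) seeds an SIR outbreak at each of the highest-degree hubs of a scale-free graph and compares final outbreak sizes.

```python
# Discrete-time SIR on a network: infect neighbors w.p. beta, then recover.
import networkx as nx
import numpy as np

def sir_outbreak(G, seed, beta=0.1, rng=np.random.default_rng(0)):
    """Return the final number of removed nodes for an outbreak seeded at `seed`."""
    status = {v: 'S' for v in G}
    status[seed] = 'I'
    infected = {seed}
    while infected:
        new = set()
        for u in infected:
            for v in G[u]:
                if status[v] == 'S' and rng.random() < beta:
                    status[v] = 'I'
                    new.add(v)
            status[u] = 'R'
        infected = new
    return sum(1 for s in status.values() if s == 'R')

G = nx.barabasi_albert_graph(2000, 3)
hubs = sorted(G, key=G.degree, reverse=True)[:5]
print([sir_outbreak(G, h) for h in hubs])  # similar-degree hubs can spread very differently
```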
NASA Astrophysics Data System (ADS)
Vlasayevsky, Stanislav; Klimash, Stepan; Klimash, Vladimir
2017-10-01
A set of mathematical modules was developed in MATLAB for evaluating energy performance in studies of electrical systems and complexes. The SimPowerSystems electrotechnical library of MATLAB lacks measuring modules for the energy coefficients that characterize power quality and the energy efficiency of electrical apparatus. The presented modules are designed to calculate energy coefficients characterizing power quality (current distortion and voltage distortion) and energy efficiency indicators (power factor and efficiency). The methods and principles used to build the modules are described. Detailed schemes of the modules, built from elements of the Simulink library, are presented; consequently, these modules are compatible with mathematical models of electrical systems and complexes in MATLAB. The results of testing the developed modules and of their verification against schemes that have analytical expressions for the energy indicators are also presented.
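For readers outside MATLAB, the quantities such modules measure are simple to state. The Python sketch below (our own; the 50 Hz fundamental and all names are assumptions) computes total harmonic distortion and true power factor from sampled voltage and current.

```python
# Power-quality indicators from sampled waveforms.
import numpy as np

def thd(signal, fs, f0=50.0, n_harmonics=20):
    """THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]  # nearest-bin amplitude
    harmonics = np.sqrt(sum(amp(k * f0) ** 2 for k in range(2, n_harmonics + 1)))
    return harmonics / amp(f0)

def power_factor(v, i):
    """True power factor: average power over apparent (RMS) power."""
    p = np.mean(v * i)
    s = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))
    return p / s
```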
Smad Acetylation: A New Level of Regulation in TGF-Beta Signaling
2007-07-01
of Smad2 and Smad3, resulting in their oligomerization with the common mediator Smad4 (10-11). This Smad2/Smad3/Smad4 complex can then translocate... Smad2 and Smad3, enabling oligomerization with Smad4 and translocation of the entire Smad complex into the nucleus. Once in the nucleus, the... performed prior to DOD funding determined that Smad2, but not Smad3, is efficiently acetylated in a p300-dependent manner in both in vivo and in vitro models
Complex absorbing potential based Lorentzian fitting scheme and time dependent quantum transport.
Xie, Hang; Kwok, Yanho; Jiang, Feng; Zheng, Xiao; Chen, GuanHua
2014-10-28
Based on the complex absorbing potential (CAP) method, a Lorentzian expansion scheme is developed to express the self-energy. The CAP-based Lorentzian expansion of the self-energy is employed to solve efficiently the Liouville-von Neumann equation for the one-electron density matrix. The resulting method is applicable to both tight-binding and first-principles models and is used to simulate the transient currents through graphene nanoribbons and a benzene molecule sandwiched between two carbon-atom chains.
Theoretical and software considerations for nonlinear dynamic analysis
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1983-01-01
In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general purpose structural software system is presented.
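The algebra behind substructuring is static condensation: the interior unknowns of each substructure are eliminated onto its boundary, and only the reduced boundary system propagates to the next level. A minimal NumPy illustration of one condensation step follows (the static, Guyan-type case only; the partitioning and names are ours).

```python
# Static condensation of K u = f onto the retained (boundary) DOFs.
import numpy as np

def condense(K, f, master):
    """master: indices of boundary DOFs kept at the next substructure level."""
    n = K.shape[0]
    slave = np.setdiff1d(np.arange(n), master)   # interior DOFs to eliminate
    Kmm = K[np.ix_(master, master)]
    Kms = K[np.ix_(master, slave)]
    Ksm = K[np.ix_(slave, master)]
    Kss = K[np.ix_(slave, slave)]
    K_red = Kmm - Kms @ np.linalg.solve(Kss, Ksm)
    f_red = f[master] - Kms @ np.linalg.solve(Kss, f[slave])
    return K_red, f_red
```

For dynamics the mass matrix must be reduced as well, which is where the paper's extension of substructuring beyond the static case comes in.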
Abroudi, Ali; Samarasinghe, Sandhya; Kulasiri, Don
2017-09-21
Few models of the mammalian cell cycle system exist, owing to its complexity. Some models are too complex and hard to understand, while others are too simple and not comprehensive enough. Moreover, some essential aspects, such as the response of the G1-S and G2-M checkpoints to DNA damage as well as growth factor signalling, have not been investigated from a systems point of view in current mammalian cell cycle models. To address these issues, we bring a holistic perspective to the cell cycle by mathematically modelling it as a complex system consisting of important sub-systems that interact with each other. This retains the functionality of the system and provides a clearer interpretation of the processes within it, while reducing the complexity of comprehending these processes. To achieve this, we first update a published ODE mathematical model of the cell cycle with current knowledge. Then the part of the mathematical model relevant to each sub-system is shown separately, in conjunction with a diagram of the sub-system, as part of this representation. The model sub-systems are Growth Factor, DNA damage, G1-S, and G2-M checkpoint signalling. To further simplify the model and better explore the function of the sub-systems, they are further divided into modules. Here we also add important new modules: a Chk-related rapid cell cycle arrest module; p53 modules expanded to integrate seamlessly with the rapid arrest module; tyrosine phosphatase modules that activate Cyc_Cdk complexes and play a crucial role in rapid and delayed arrest at both G1-S and G2-M; a tyrosine kinase module that is important for inactivating nuclear transport of CycB_cdk1 through Wee1 to resist M-phase entry; a Plk1-related module that is crucial in activating tyrosine phosphatases and inactivating tyrosine kinase; and an APC-related module to show the steps in CycB degradation. This multi-level systems approach incorporating all known aspects of the cell cycle allowed us to (i) study, through dynamic simulation of an ODE model, comprehensive details of cell cycle dynamics under normal and DNA damage conditions, revealing the role and value of the added new modules and elements; (ii) assess, through a global sensitivity analysis, the most influential sub-systems, modules and parameters on system responses such as the G1-S and G2-M transitions; and (iii) probe deeply into the relationship between DNA damage and cell cycle progression and test the biological evidence that G1-S is relatively inefficient in arresting damaged cells compared with the G2-M checkpoint. To perform the sensitivity analysis, a Self-Organizing Map with Correlation Coefficient Analysis (SOMCCA) is developed, which shows that the Growth Factor and G1-S Checkpoint sub-systems, and 13 parameters in the modules within them, are crucial for the G1-S and G2-M transitions. To study the relative efficiency of the DNA damage checkpoints, a Checkpoint Efficiency Evaluator (CEE) is developed based on perturbation studies and statistical Type II error. Accordingly, the cell cycle is about 96% efficient in arresting damaged cells, with the G2-M checkpoint being more efficient than G1-S. Further, both checkpoint systems are near perfect (98.6%) in passing healthy cells. Thus this study has shown the efficacy of the proposed systems approach for gaining a better understanding of different aspects of the mammalian cell cycle system, separately and as an integrated system, which will also be useful in investigating targeted therapy in future cancer treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.
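The ranking step of such a sensitivity analysis can be previewed with a far simpler perturbation scan. The sketch below is not the paper's SOMCCA machinery: it perturbs each parameter of an invented two-species stand-in model by 10% and reports normalized output sensitivities.

```python
# One-at-a-time parameter sensitivity scan on a toy ODE model.
import numpy as np
from scipy.integrate import solve_ivp

def toy(t, y, k_syn, k_deg, k_act):
    cyc, cdk = y                         # stand-ins for a cyclin and active Cdk
    return [k_syn - k_deg * cyc - k_act * cyc * cdk,
            k_act * cyc * cdk - k_deg * cdk]

def output(params):
    sol = solve_ivp(toy, (0, 50), [1.0, 0.1], args=tuple(params))
    return sol.y[1, -1]                  # e.g. final active-Cdk level

base = np.array([1.0, 0.2, 0.5])
y0 = output(base)
for i, name in enumerate(['k_syn', 'k_deg', 'k_act']):
    p = base.copy()
    p[i] *= 1.1                          # +10% perturbation
    print(name, (output(p) - y0) / (0.1 * y0))   # normalized sensitivity
```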
Rule-based modeling and simulations of the inner kinetochore structure.
Tschernyschkow, Sergej; Herda, Sabine; Gruenert, Gerd; Döring, Volker; Görlich, Dennis; Hofmeister, Antje; Hoischen, Christian; Dittrich, Peter; Diekmann, Stephan; Ibrahim, Bashar
2013-09-01
Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variety of protein complexes. Available classical modeling approaches are often insufficient for the detailed analysis of very large and complex networks. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins. Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimate of the probability distribution of the inner kinetochore 3D architecture, and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA and an H3-containing nucleosome occurs efficiently only at the higher protein concentrations realized during S-phase, but perhaps not in G1. Above a certain nucleosome distance the protein bridge barely formed, pointing towards the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts. Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores. Copyright © 2013 Elsevier Ltd. All rights reserved.
Yin, Weiwei; Garimalla, Swetha; Moreno, Alberto; Galinski, Mary R; Styczynski, Mark P
2015-08-28
There are increasing efforts to bring high-throughput systems biology techniques to bear on complex animal model systems, often with a goal of learning about underlying regulatory network structures (e.g., gene regulatory networks). However, complex animal model systems typically have significant limitations on cohort sizes, number of samples, and the ability to perform follow-up and validation experiments. These constraints are particularly problematic for many current network learning approaches, which require large numbers of samples and may predict many more regulatory relationships than actually exist. Here, we test the idea that by leveraging the accuracy and efficiency of classifiers, we can construct high-quality networks that capture important interactions between variables in datasets with few samples. We start from a previously-developed tree-like Bayesian classifier and generalize its network learning approach to allow for arbitrary depth and complexity of tree-like networks. Using four diverse sample networks, we demonstrate that this approach performs consistently better at low sample sizes than the Sparse Candidate Algorithm, a representative approach for comparison because it is known to generate Bayesian networks with high positive predictive value. We develop and demonstrate a resampling-based approach to enable the identification of a viable root for the learned tree-like network, important for cases where the root of a network is not known a priori. We also develop and demonstrate an integrated resampling-based approach to the reduction of variable space for the learning of the network. Finally, we demonstrate the utility of this approach via the analysis of a transcriptional dataset of a malaria challenge in a non-human primate model system, Macaca mulatta, suggesting the potential to capture indicators of the earliest stages of cellular differentiation during leukopoiesis. We demonstrate that by starting from effective and efficient approaches for creating classifiers, we can identify interesting tree-like network structures with significant ability to capture the relationships in the training data. This approach represents a promising strategy for inferring networks with high positive predictive value under the constraint of small numbers of samples, meeting a need that will only continue to grow as more high-throughput studies are applied to complex model systems.
Imagery Teaches Elementary Economics Schema Efficiently.
ERIC Educational Resources Information Center
McKenzie, Gary R.
In a complex domain such as economics, elementary school students' knowledge of formal systems beyond their immediate experience is often too incomplete, superficial, and disorganized to function as schema or model. However, visual imagery is a good technique for teaching young children a network of 10 to 20 propositions and the relationships…
A model for Entropy Production, Entropy Decrease and Action Minimization in Self-Organization
NASA Astrophysics Data System (ADS)
Georgiev, Georgi; Chatterjee, Atanu; Vu, Thanh; Iannacchione, Germano
In self-organization, energy gradients across complex systems lead to changes in the structure of those systems, decreasing their internal entropy to ensure the most efficient energy transport and therefore maximum entropy production in the surroundings. This approach stems from fundamental variational principles in physics, such as the principle of least action. It is coupled to the total energy flowing through a system, which leads to increased action efficiency. We compare energy transport through a fluid cell that has random motion of its molecules with a cell that can form convection cells. We examine the signs of the change of entropy and the action needed for the motion inside those systems. The system in which convective motion occurs reduces the time for energy transmission compared with random motion. For more complex systems, those convection cells form a network of transport channels consistent with the equations of motion in this geometry. Such transport networks are an essential feature of complex systems in biology, ecology, economy and society.
Multiplexed in vivo His-tagging of enzyme pathways for in vitro single-pot multienzyme catalysis.
Wang, Harris H; Huang, Po-Yi; Xu, George; Haas, Wilhelm; Marblestone, Adam; Li, Jun; Gygi, Steven P; Forster, Anthony C; Jewett, Michael C; Church, George M
2012-02-17
Protein pathways are dynamic and highly coordinated spatially and temporally, capable of performing a diverse range of complex chemistries and enzymatic reactions with precision and at high efficiency. Biotechnology aims to harvest these natural systems to construct more advanced in vitro reactions, capable of new chemistries and operating at high yield. Here, we present an efficient Multiplex Automated Genome Engineering (MAGE) strategy to simultaneously modify and co-purify large protein complexes and pathways from the model organism Escherichia coli to reconstitute functional synthetic proteomes in vitro. By application of over 110 MAGE cycles, we successfully inserted hexa-histidine sequences into 38 essential genes in vivo that encode for the entire translation machinery. Streamlined co-purification and reconstitution of the translation protein complex enabled protein synthesis in vitro. Our approach can be applied to a growing area of applications in in vitro one-pot multienzyme catalysis (MEC) to manipulate or enhance in vitro pathways such as natural product or carbohydrate biosynthesis.
Recent experience in simultaneous control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Ramaker, R.; Milman, M.
1989-01-01
To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structure optimization alone or control optimization alone. Examples include: a larger design parameter space, optimization that may combine continuous and combinatoric variables, and a combined objective function that may be nonconvex. Future extensions to large-order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Some areas requiring more efficient tools than are currently available include multiobjective criteria and nonconvex optimization. Efficient techniques to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters of both the model space and the design space, need to be developed.
Model-order reduction of lumped parameter systems via fractional calculus
NASA Astrophysics Data System (ADS)
Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio
2018-04-01
This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.
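To make the fractional operator concrete, here is a small Grünwald-Letnikov discretization, the kind of definition on which such reduced-order fractional models rest. The order and test signal are arbitrary illustrations of ours.

```python
# Grünwald-Letnikov approximation of the fractional derivative D^alpha f.
import numpy as np

def gl_weights(alpha, n):
    """w_k = (-1)^k * C(alpha, k), generated by the standard recurrence."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def fractional_derivative(f, h, alpha):
    """D^alpha f on a uniform grid with spacing h (lower terminal at 0)."""
    w = gl_weights(alpha, len(f))
    return np.array([np.dot(w[:k + 1], f[k::-1]) for k in range(len(f))]) / h ** alpha

t = np.linspace(0, 2, 201)
d_half = fractional_derivative(t, t[1] - t[0], 0.5)   # D^0.5 of f(t) = t
# Analytically D^0.5 t = 2*sqrt(t/pi); the sketch converges to it as h -> 0.
```

The full memory in the sum (every point depends on the whole history) is exactly the nonlocality that lets a compact fractional equation stand in for many integer-order states.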
Tamaki, Yusuke; Morimoto, Tatsuki; Koike, Kazuhide; Ishitani, Osamu
2012-09-25
Previously undescribed supramolecules constructed with various ratios of two kinds of Ru(II) complexes, a photosensitizer and a catalyst, were synthesized. These complexes can photocatalyze the reduction of CO2 to formic acid with high selectivity and durability using a wide range of wavelengths of visible light and NADH model compounds as electron donors in a mixed solution of dimethylformamide-triethanolamine. Using a higher ratio of the photosensitizer unit to the catalyst unit led to a higher yield of formic acid. In particular, among the reported photocatalysts, a trinuclear complex with two photosensitizer units and one catalyst unit photocatalyzed CO2 reduction (Φ_HCOOH = 0.061, TON_HCOOH = 671) with the fastest reaction rate (TOF_HCOOH = 11.6 min^-1). On the other hand, the photocatalysis of a mixed system containing two kinds of model mononuclear Ru(II) complexes, and of supramolecules with a higher ratio of the catalyst unit, was much less efficient, and black oligomers and polymers were produced from the Ru complexes during the photocatalytic reactions, which reduced the yield of formic acid. The photocatalytic formation of formic acid using the supramolecules described herein proceeds via two sequential processes: the photochemical reduction of the photosensitizer unit by NADH model compounds and intramolecular electron transfer to the catalyst unit.
NASA Astrophysics Data System (ADS)
Kim, Jinyong; Luo, Gang; Wang, Chao-Yang
2017-10-01
3D fine-mesh flow-fields recently developed by Toyota Mirai improved water management and mass transport in proton exchange membrane (PEM) fuel cell stacks, suggesting their potential value for robust and high-power PEM fuel cell stack performance. In such complex flow-fields, Forchheimer's inertial effect is dominant at high current density. In this work, a two-phase flow model of 3D complex flow-fields of PEMFCs is developed by accounting for Forchheimer's inertial effect, for the first time, to elucidate the underlying mechanism of liquid water behavior and mass transport inside 3D complex flow-fields and their adjacent gas diffusion layers (GDL). It is found that Forchheimer's inertial effect enhances liquid water removal from flow-fields and adds additional flow resistance around baffles, which improves interfacial liquid water and mass transport. As a result, substantial improvements in high current density cell performance and operational stability are expected in PEMFCs with 3D complex flow-fields, compared to PEMFCs with conventional flow-fields. Higher current density operation required to further reduce PEMFC stack cost per kW in the future will necessitate optimizing complex flow-field designs using the present model, in order to efficiently remove a large amount of product water and hence minimize the mass transport voltage loss.
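The relative weight of the two momentum-sink terms is easy to see numerically. Below is a back-of-the-envelope Python comparison of the Darcy (viscous) and Forchheimer (inertial) pressure-gradient contributions in a porous flow-field; every property value is a placeholder, not a parameter from the paper.

```python
# Darcy vs. Forchheimer pressure-gradient terms: -dp/dx = (mu/K) u + beta rho u^2.
import numpy as np

mu, rho = 1.8e-5, 1.0            # gas viscosity (Pa s) and density (kg/m^3)
K, beta = 1e-10, 1e5             # permeability (m^2), Forchheimer coefficient (1/m)

u = np.linspace(0.1, 20.0, 5)    # superficial velocity, high at high current density
darcy = mu / K * u               # viscous (Darcy) term
forch = beta * rho * u ** 2      # inertial (Forchheimer) term
print(forch / darcy)             # ratio grows linearly in u: inertia takes over
```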
Yang, Ruiyue; Huang, Zhongwei; Yu, Wei; Li, Gensheng; Ren, Wenxi; Zuo, Lihua; Tan, Xiaosi; Sepehrnoori, Kamy; Tian, Shouceng; Sheng, Mao
2016-01-01
A complex fracture network is generally generated during the hydraulic fracturing treatment in shale gas reservoirs. Numerous efforts have been made to model the flow behavior of such fracture networks. However, it is still challenging to predict the impacts of various gas transport mechanisms on well performance with arbitrary fracture geometry in a computationally efficient manner. We develop a robust and comprehensive model for real gas transport in shales with complex non-planar fracture network. Contributions of gas transport mechanisms and fracture complexity to well productivity and rate transient behavior are systematically analyzed. The major findings are: simple planar fracture can overestimate gas production than non-planar fracture due to less fracture interference. A “hump” that occurs in the transition period and formation linear flow with a slope less than 1/2 can infer the appearance of natural fractures. The sharpness of the “hump” can indicate the complexity and irregularity of the fracture networks. Gas flow mechanisms can extend the transition flow period. The gas desorption could make the “hump” more profound. The Knudsen diffusion and slippage effect play a dominant role in the later production time. Maximizing the fracture complexity through generating large connected networks is an effective way to increase shale gas production. PMID:27819349
A Pareto-optimal moving average multigene genetic programming model for daily streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan
2017-06-01
Genetic programming (GP) is able to systematically explore alternative model structures of different accuracy and complexity from observed input and output data. The effectiveness of GP in hydrological system identification has been recognized in recent studies. However, selecting a parsimonious (accurate and simple) model from such alternatives still remains a question. This paper proposes a Pareto-optimal moving average multigene genetic programming (MA-MGGP) approach to develop a parsimonious model for single-station streamflow prediction. The three main components of the approach that take us from observed data to a validated model are: (1) data pre-processing, (2) system identification and (3) system simplification. The data pre-processing ingredient uses a simple moving average filter to diminish the lagged-prediction effect of stand-alone data-driven models. The multigene ingredient tends to identify the underlying nonlinear system with expressions simpler than those of classical monolithic GP, and the simplification component finally exploits a Pareto front plot to select a parsimonious model through an interactive complexity-efficiency trade-off. The approach was tested using the daily streamflow records from a station on Senoz Stream, Turkey. Compared with the efficiency results of stand-alone GP, MGGP, and conventional multiple linear regression prediction models as benchmarks, the proposed Pareto-optimal MA-MGGP model puts forward a parsimonious solution of noteworthy practical value. In addition, the approach allows the user to bring human insight into the problem, examine evolved models and pick the best-performing programs out for further analysis.
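Two of the three ingredients have one-screen equivalents, sketched below with an invented window length and invented candidate models: the moving-average pre-filter and a Pareto-front screen over (complexity, error) pairs.

```python
# Moving-average pre-processing and Pareto-front model selection.
import numpy as np

def moving_average(x, w=3):
    """Simple MA filter used to damp the lagged-prediction effect."""
    return np.convolve(x, np.ones(w) / w, mode='valid')

def pareto_front(models):
    """models: (complexity, error) pairs; keep the non-dominated ones."""
    front = []
    for c, e in sorted(models):            # ascending complexity
        if not front or e < front[-1][1]:  # strictly better than all simpler models
            front.append((c, e))
    return front

print(pareto_front([(3, .20), (5, .12), (7, .13), (9, .11), (12, .11)]))
# -> [(3, 0.2), (5, 0.12), (9, 0.11)]: the trade-off curve a user picks from
```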
NASA Astrophysics Data System (ADS)
Simmons, Daniel; Cools, Kristof; Sewell, Phillip
2016-11-01
Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former surface based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter volume based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.
Tertiary structure-based analysis of microRNA–target interactions
Gan, Hin Hark; Gunsalus, Kristin C.
2013-01-01
Current computational analysis of microRNA interactions is based largely on primary and secondary structure analysis. Computationally efficient tertiary structure-based methods are needed to enable more realistic modeling of the molecular interactions underlying miRNA-mediated translational repression. We incorporate algorithms for predicting duplex RNA structures, ionic strength effects, duplex entropy and free energy, and docking of duplex–Argonaute protein complexes into a pipeline to model and predict miRNA–target duplex binding energies. To ensure modeling accuracy and computational efficiency, we use an all-atom description of RNA and a continuum description of ionic interactions using the Poisson–Boltzmann equation. Our method predicts the conformations of two constructs of Caenorhabditis elegans let-7 miRNA–target duplexes to an accuracy of ∼3.8 Å root mean square distance from their NMR structures. We also show that the computed duplex formation enthalpies, entropies, and free energies for eight miRNA–target duplexes agree with titration calorimetry data. Analysis of duplex–Argonaute docking shows that structural distortions arising from single-base-pair mismatches in the seed region influence the activity of the complex by destabilizing both duplex hybridization and its association with Argonaute. Collectively, these results demonstrate that tertiary structure-based modeling of miRNA interactions can reveal structural mechanisms not accessible with current secondary structure-based methods. PMID:23417009
Darabi, Aubteen; Arrastia-Lloyd, Meagan C; Nelson, David W; Liang, Xinya; Farrell, Jennifer
2015-12-01
In order to develop an expert-like mental model of complex systems, causal reasoning is essential. This study examines the differences between forward and backward instructional strategies in terms of efficiency, students' learning, and the progression of their mental models of the electron transport chain in an undergraduate metabolism course (n = 151). Additionally, the participants' cognitive flexibility, prior knowledge, and mental effort in the learning process are also investigated. The data were analyzed using a series of general linear models to compare the strategies. Although the two strategies did not differ significantly in terms of mental model progression and learning outcomes, both groups' mental models progressed significantly. Mental effort and prior knowledge were identified as significant predictors of mental model progression. An interaction between instructional strategy and cognitive flexibility revealed that the backward instruction was more efficient than the conventional (forward) strategy for students with lower cognitive flexibility, whereas the conventional instruction was more efficient for students with higher cognitive flexibility. The results are discussed and suggestions for future research on the possible moderating role of cognitive flexibility in the area of health education are presented.
Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.
2010-01-01
The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single-capillary BTEX model. A facilitated-transporter model is used to describe mass transfer between the plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
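For reference, the predictor-corrector structure of the MacCormack scheme for the convective part of such a model fits in a few lines. This is a generic illustration on the linear advection equation with a made-up periodic grid, not the paper's flux-corrected variant.

```python
# MacCormack scheme for u_t + v u_x = 0 (periodic domain).
import numpy as np

def maccormack_step(u, c):
    """One step with CFL number c = v*dt/dx: forward-difference predictor,
    backward-difference corrector, averaged."""
    up = u - c * (np.roll(u, -1) - u)                   # predictor
    return 0.5 * (u + up - c * (up - np.roll(up, 1)))   # corrector

x = np.linspace(0, 1, 200, endpoint=False)
u = np.exp(-200 * (x - 0.3) ** 2)    # initial concentration pulse
for _ in range(100):
    u = maccormack_step(u, c=0.5)    # stable for c <= 1
```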
NASA Technical Reports Server (NTRS)
Rothhaar, Paul M.; Murphy, Patrick C.; Bacon, Barton J.; Gregory, Irene M.; Grauer, Jared A.; Busan, Ronald C.; Croom, Mark A.
2014-01-01
Control of complex Vertical Take-Off and Landing (VTOL) aircraft traversing from hovering to wing-borne flight mode and back poses notoriously difficult modeling, simulation, control, and flight-testing challenges. This paper provides an overview of the techniques and advances required to develop the GL-10 tilt-wing, tilt-tail, long-endurance, VTOL aircraft control system. The GL-10 prototype's unusual and complex configuration requires application of state-of-the-art techniques and some significant advances in wind tunnel infrastructure automation, efficient Design Of Experiments (DOE) tunnel test techniques, modeling, multi-body equations of motion, multi-body actuator models, simulation, control algorithm design, and flight test avionics, testing, and analysis. The following compendium surveys the key disciplines required to develop an effective control system for this challenging vehicle in this ongoing effort.
Cognitive engineering models in space systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1992-01-01
NASA space systems, including mission operations on the ground and in space, are complex, dynamic, predominantly automated systems in which the human operator is a supervisory controller. The human operator monitors and fine-tunes computer-based control systems and is responsible for ensuring safe and efficient system operation. In such systems, the potential consequences of human mistakes and errors may be very large even though such events have low probability. Thus, models of cognitive functions in complex systems are needed to describe human performance and form the theoretical basis of operator workstation design, including displays, controls, and decision support aids. The operator function model represents normative operator behavior: the expected operator activities given the current system state. The extension of the theoretical structure of the operator function model and its application to NASA Johnson mission operations and space station applications is discussed.
Lattice Boltzmann simulations of immiscible displacement process with large viscosity ratios
NASA Astrophysics Data System (ADS)
Rao, Parthib; Schaefer, Laura
2017-11-01
Immiscible displacement is a key physical mechanism involved in enhanced oil recovery and carbon sequestration processes. This multiphase flow phenomenon involves a complex interplay of viscous, capillary, inertial and wettability effects. The lattice Boltzmann (LB) method is an accurate and efficient technique for modeling and simulating multiphase/multicomponent flows, especially in complex flow configurations and media. In this presentation we present numerical simulation results for the displacement process in long, thin channels. The results are based on a new pseudo-potential multicomponent LB model with a multiple-relaxation-time (MRT) collision model and an explicit forcing scheme. We demonstrate that the proposed model is capable of accurately simulating displacement processes involving fluids with a wider range of viscosity ratios (>100), which also leads to viscosity-independent interfacial tension and the reduction of some important numerical artifacts.
Application of Δ- and Λ-Isomerism of Octahedral Metal Complexes for Inducing Chiral Nematic Phases
Sato, Hisako; Yamagishi, Akihiko
2009-01-01
The Δ- and Λ-isomerism of octahedral metal complexes is employed as a source of chirality for inducing chiral nematic phases. By applying a wide range of chiral metal complexes as a dopant, it has been found that tris(β-diketonato)metal(III) complexes exhibit an extremely high value of helical twisting power. The mechanism of induction of the chiral nematic phase is postulated on the basis of a surface chirality model. The strategy for designing an efficient dopant is described, together with the results using a number of examples of Co(III), Cr(III) and Ru(III) complexes with C2 symmetry. The development of photo-responsive dopants to achieve the photo-induced structural change of liquid crystal by use of photo-isomerization of chiral metal complexes is also described. PMID:20057959
NASA Astrophysics Data System (ADS)
Hosseinalipour, S. M.; Raja, A.; Hajikhani, S.
2012-06-01
A full three-dimensional Navier-Stokes numerical simulation has been performed for the performance analysis of a Kaplan turbine installed in one of Iran's southern dams. No simplifications were enforced in the simulation. The numerical results were evaluated using integral parameters such as the turbine efficiency, by comparing the results with existing experimental data from the prototype hill chart. In part of this study, the numerical simulations were performed to calculate the prototype turbine efficiencies at specific operating points derived by scaling up the model efficiencies available in the experimental model hill chart. The results are very promising and demonstrate the ability of the numerical techniques to resolve the flow characteristics in this kind of complex geometry. A parametric study evaluating turbine performance at three different runner angles of the prototype was also performed, and the results are reported in this paper.
NASA Astrophysics Data System (ADS)
Gong, Jun; Zhu, Qing
2006-10-01
As a special case of VGE in the fields of AEC (architecture, engineering and construction), the Virtual Building Environment (VBE) has attracted broad attention. Highly complex, large-scale 3D spatial data is the main bottleneck of VBE applications, so 3D spatial data organization and management becomes the core technology for VBE. This paper puts forward a 3D spatial data model for VBE that can be implemented with high performance. The native storage method of CAD data introduces redundancy and does not address efficient visualization, which is a practical bottleneck for integrating CAD models; an efficient method to integrate CAD model data is therefore put forward. Moreover, since 3D spatial indices based on the R-tree are usually limited by low efficiency due to severe overlap of sibling nodes and uneven node sizes, a new node-choosing algorithm for the R-tree is proposed.
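As background for that last point, the classic R-tree node-choosing rule picks the child whose bounding box needs the least area enlargement. The code below is the textbook least-enlargement heuristic, shown for contrast, not the authors' algorithm; rectangles are (x1, y1, x2, y2).

```python
# Textbook R-tree ChooseSubtree: minimal area enlargement, ties by smaller area.
def area(r):
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def enlarge(r, obj):
    """Smallest rectangle covering both r and obj."""
    return (min(r[0], obj[0]), min(r[1], obj[1]),
            max(r[2], obj[2]), max(r[3], obj[3]))

def choose_child(children, obj):
    """children: bounding rectangles of the candidate subtrees."""
    def cost(r):
        return (area(enlarge(r, obj)) - area(r), area(r))
    return min(range(len(children)), key=lambda i: cost(children[i]))

print(choose_child([(0, 0, 2, 2), (1, 1, 4, 4)], (3, 3, 3.5, 3.5)))  # -> 1
```

Severe sibling overlap makes this rule degrade, since many children tie at zero enlargement; that weakness is precisely what a refined node-choosing algorithm targets.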
Asymmetry in Signal Oscillations Contributes to Efficiency of Periodic Systems.
Bae, Seul-A; Acevedo, Alison; Androulakis, Ioannis P
2016-01-01
Oscillations are an important feature of cellular signaling that result from complex combinations of positive- and negative-feedback loops. The encoding and decoding mechanisms of oscillations based on amplitude and frequency have been extensively discussed in the literature in the context of intercellular and intracellular signaling. However, the fundamental questions of whether and how oscillatory signals offer any competitive advantages (and, if so, what those advantages are) have not been fully answered. We investigated established oscillatory mechanisms and designed a study to analyze the oscillatory characteristics of signaling molecules and system output in an effort to answer these questions. Two classic oscillators, Goodwin and PER, were selected as the model systems, and corresponding no-feedback models were created for each oscillator to discover the advantage of oscillating signals. Through simulating the original oscillators and the matching no-feedback models, we show that oscillating systems have the capability to achieve better resource-to-output efficiency, and we identify oscillatory characteristics that lead to improved efficiency.
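The Goodwin oscillator named above is compact enough to reproduce directly. The SciPy sketch below uses common textbook parameter choices (a Hill coefficient of roughly 8 or more is needed for sustained oscillation); these values are illustrative, not necessarily those used in the study.

```python
# Goodwin oscillator: mRNA -> protein -> repressor, with end-product inhibition.
import numpy as np
from scipy.integrate import solve_ivp

def goodwin(t, y, k1=1.0, k2=0.1, k3=1.0, k4=0.1, k5=1.0, k6=0.1, n=10):
    x, z, w = y                        # mRNA, protein, nuclear repressor
    return [k1 / (1 + w ** n) - k2 * x,
            k3 * x - k4 * z,
            k5 * z - k6 * w]

sol = solve_ivp(goodwin, (0, 400), [0.1, 0.2, 2.0], max_step=0.1)
# sol.y holds limit-cycle trajectories; comparing their rise and decay shapes
# (the asymmetry) against a matched no-feedback model is the study's experiment.
```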
Supersonic projectile models for asynchronous shooter localization
NASA Astrophysics Data System (ADS)
Kozick, Richard J.; Whipps, Gene T.; Ash, Joshua N.
2011-06-01
In this work we consider the localization of a gunshot using a distributed sensor network measuring time differences of arrival between a firearm's muzzle blast and the shockwave induced by a supersonic bullet. This so-called MB-SW approach is desirable because time synchronization is not required between the sensors; however, it suffers from increased computational complexity and requires knowledge of the bullet's velocity at all points along its trajectory. While the actual velocity profile of a particular gunshot is unknown, one may use a parameterized model for the velocity profile and simultaneously fit the model and localize the shooter. In this paper we study efficient solutions for the localization problem and identify deceleration models that trade off localization accuracy and computational complexity. We also develop a statistical analysis that includes bias due to mismatch between the true and assumed deceleration models and covariance due to additive noise.
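As a worked example of one such parameterization, a constant-deceleration model gives closed-form bullet speed and time of flight along the trajectory; the numbers below are illustrative, not calibrated ballistic data.

```python
# Constant-deceleration velocity profile: v(s) = sqrt(v0^2 - 2 a s),
# so the travel time to downrange distance s is t = (v0 - v(s)) / a.
import numpy as np

def time_of_flight(s, v0=900.0, a=600.0):
    """Time (s) for the bullet to reach distance s (m); v0 in m/s, a in m/s^2.
    Valid only while the bullet remains supersonic."""
    v = np.sqrt(v0 ** 2 - 2 * a * s)
    return (v0 - v) / a

print(time_of_flight(np.array([50.0, 100.0, 200.0])))
# The MB-SW delay at a sensor adds the acoustic travel times of the muzzle
# blast and of the shockwave emitted at the bullet's nearest detach point.
```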
NASA Astrophysics Data System (ADS)
Feng, Cheng; Zhang, Yijun; Qian, Yunsheng; Wang, Ziheng; Liu, Jian; Chang, Benkang; Shi, Feng; Jiao, Gangcheng
2018-04-01
A theoretical emission model for AlxGa1-xAs/GaAs cathode with complex structure based on photon-enhanced thermionic emission is developed by utilizing one-dimensional steady-state continuity equations. The cathode structure comprises a graded-composition AlxGa1-xAs window layer and an exponential-doping GaAs absorber layer. In the deduced model, the physical properties changing with the Al composition are taken into consideration. Simulated current-voltage characteristics are presented and some important factors affecting the conversion efficiency are also illustrated. Compared with the graded-composition and uniform-doping cathode structure, and the uniform-composition and uniform-doping cathode structure, the graded-composition and exponential-doping cathode structure can effectively improve the conversion efficiency, which is ascribed to the twofold built-in electric fields. More strikingly, this graded bandgap structure is especially suitable for photon-enhanced thermionic emission devices since a higher conversion efficiency can be achieved at a lower temperature.
Cheng, Jianlin; Eickholt, Jesse; Wang, Zheng; Deng, Xin
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated together to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
NASA Astrophysics Data System (ADS)
Peace, Andrew J.; May, Nicholas E.; Pocock, Mark F.; Shaw, Jonathon A.
1994-04-01
This paper is concerned with the flow modelling capabilities of an advanced CFD simulation system known by the acronym SAUNA. This system is aimed primarily at complex aircraft configurations and possesses a unique grid generation strategy in its use of block-structured, unstructured or hybrid grids, depending on the geometric complexity of the addressed configuration. The main focus of the paper is in demonstrating the recently developed multi-grid, block-structured grid, viscous flow capability of SAUNA, through its evaluation on a number of configurations. Inviscid predictions are also presented, both as a means of interpreting the viscous results and with a view to showing more completely the capabilities of SAUNA. It is shown that accuracy and flexibility are combined in an efficient manner, thus demonstrating the value of SAUNA in aerodynamic design.
High Intensity Organic Light-emitting Diodes
NASA Astrophysics Data System (ADS)
Qi, Xiangfei
This thesis is dedicated to the fabrication, modeling, and characterization of high-efficiency organic light-emitting diodes (OLEDs) for illumination applications. Compared to conventional lighting sources, OLEDs enable the direct conversion of electrical energy into light emission and have intrigued the world's lighting designers with long-lasting, highly efficient illumination. We begin with a brief overview of organic technology, from basic organic semiconductor physics to its application in optoelectronics, i.e. light-emitting diodes, photovoltaics, photodetectors and thin-film transistors. Owing to the importance of phosphorescent materials, we focus on the photophysics of metal complexes that is central to high-efficiency OLED technology, followed by a transient study examining the radiative decay dynamics in a series of phosphorescent platinum binuclear complexes. The major theme of this thesis is the design and optimization of a novel architecture in which individual red, green and blue phosphorescent OLEDs are vertically stacked and electrically interconnected by compound charge generation layers. We model carrier generation from the metal-oxide/doped-organic interface based on a thermally assisted tunneling mechanism. The model provides insights into the optimization of a stacked OLED from both electrical and optical points of view. To realize a high-intensity white lighting source, the efficient removal of heat is of particular concern, especially in large-area devices. A fundamental transfer matrix analysis is introduced to predict the thermal properties of the devices. The analysis employs Laplace transforms to determine the response of the system to the combined effects of conduction, convection, and radiation. This perspective of constructing transmission matrices greatly facilitates the calculation of transient coupled heat transfer in a general multi-layer composite. It converts differential equations to algebraic forms, and can be expanded to study other thermal issues in more sophisticated structures.
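As a rough illustration of the transfer-matrix (thermal quadrupole) idea, the sketch below chains Laplace-domain layer matrices for 1-D transient conduction and inverts numerically with the Gaver-Stehfest algorithm. It covers conduction only (the thesis' analysis also folds in convection and radiation), and the layer properties are placeholder values, not the thesis' device data.

```python
import numpy as np
from math import factorial, log

def layer_matrix(s, L, k, a):
    """Laplace-domain quadrupole of one slab, mapping (T, flux) at the back
    face to (T, flux) at the front face. L thickness [m], k conductivity
    [W/m/K], a diffusivity [m^2/s], s Laplace variable [1/s]."""
    q = np.sqrt(s / a)
    c, sh = np.cosh(q * L), np.sinh(q * L)
    return np.array([[c, sh / (k * q)],
                     [k * q * sh, c]])

def stack_matrix(s, layers):
    M = np.eye(2)
    for L, k, a in layers:          # front-to-back order
        M = M @ layer_matrix(s, L, k, a)
    return M

def stehfest_weights(N=12):
    V = []
    for j in range(1, N + 1):
        acc = 0.0
        for m in range((j + 1) // 2, min(j, N // 2) + 1):
            acc += (m ** (N // 2) * factorial(2 * m) /
                    (factorial(N // 2 - m) * factorial(m) *
                     factorial(m - 1) * factorial(j - m) *
                     factorial(2 * m - j)))
        V.append((-1) ** (N // 2 + j) * acc)
    return V

def front_temperature(t, layers, N=12):
    """Front-face temperature rise for a unit step heat flux at the front,
    with the back face held at ambient (T = 0)."""
    total = 0.0
    for j, Vj in enumerate(stehfest_weights(N), start=1):
        s = j * log(2) / t
        (A, B), (C, D) = stack_matrix(s, layers)
        total += Vj * (B / D) / s    # T_front(s) = (B/D) * q(s), q(s) = 1/s
    return total * log(2) / t

layers = [(1e-4, 0.2, 1e-7),   # organic film: 100 um (assumed properties)
          (1e-3, 1.0, 5e-7)]   # glass substrate: 1 mm (assumed properties)
print(front_temperature(1.0, layers))  # K rise per unit flux after 1 s
```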
2010-01-01
Background The assembly and spatial organization of enzymes in naturally occurring multi-protein complexes is of paramount importance for the efficient degradation of complex polymers and biosynthesis of valuable products. The degradation of cellulose into fermentable sugars by Clostridium thermocellum is achieved by means of a multi-protein "cellulosome" complex. Assembled via dockerin-cohesin interactions, the cellulosome is associated with the cell surface during cellulose hydrolysis, forming ternary cellulose-enzyme-microbe complexes for enhanced activity and synergy. The assembly of recombinant cell surface displayed cellulosome-inspired complexes in surrogate microbes is highly desirable. The model organism Lactococcus lactis is of particular interest as it has been metabolically engineered to produce a variety of commodity chemicals including lactic acid and bioactive compounds, and can efficiently secrete an array of recombinant proteins and enzymes of varying sizes. Results Fragments of the scaffoldin protein CipA were functionally displayed on the cell surface of Lactococcus lactis. Scaffolds were engineered to contain a single cohesin module, two cohesin modules, one cohesin and a cellulose-binding module, or only a cellulose-binding module. Cell toxicity from over-expression of the proteins was circumvented by use of the nisA inducible promoter, and incorporation of the C-terminal anchor motif of the streptococcal M6 protein resulted in the successful surface-display of the scaffolds. The facilitated detection of successfully secreted scaffolds was achieved by fusion with the export-specific reporter staphylococcal nuclease (NucA). Scaffolds retained their ability to associate in vivo with an engineered hybrid reporter enzyme, E. coli β-glucuronidase fused to the type 1 dockerin motif of the cellulosomal enzyme CelS. Surface-anchored complexes exhibited dual enzyme activities (nuclease and β-glucuronidase), and were displayed with efficiencies approaching 10^4 complexes/cell. Conclusions We report the successful display of cellulosome-inspired recombinant complexes on the surface of Lactococcus lactis. Significant differences in display efficiency among constructs were observed and attributed to their structural characteristics including protein conformation and solubility, scaffold size, and the inclusion and exclusion of non-cohesin modules. The surface-display of functional scaffold proteins described here represents a key step in the development of recombinant microorganisms capable of carrying out a variety of metabolic processes including the direct conversion of cellulosic substrates into fuels and chemicals. PMID:20840763
Adapting APSIM to model the physiology and genetics of complex adaptive traits in field crops.
Hammer, Graeme L; van Oosterom, Erik; McLean, Greg; Chapman, Scott C; Broad, Ian; Harland, Peter; Muchow, Russell C
2010-05-01
Progress in molecular plant breeding is limited by the ability to predict plant phenotype based on its genotype, especially for complex adaptive traits. Suitably constructed crop growth and development models have the potential to bridge this predictability gap. A generic cereal crop growth and development model is outlined here. It is designed to exhibit reliable predictive skill at the crop level while also introducing sufficient physiological rigour for complex phenotypic responses to become emergent properties of the model dynamics. The approach quantifies capture and use of radiation, water, and nitrogen within a framework that predicts the realized growth of major organs based on their potential and whether the supply of carbohydrate and nitrogen can satisfy that potential. The model builds on existing approaches within the APSIM software platform. Experiments on diverse genotypes of sorghum that underpin the development and testing of the adapted crop model are detailed. Genotypes differing in height were found to differ in biomass partitioning among organs and a tall hybrid had significantly increased radiation use efficiency: a novel finding in sorghum. Introducing these genetic effects associated with plant height into the model generated emergent simulated phenotypic differences in green leaf area retention during grain filling via effects associated with nitrogen dynamics. The relevance to plant breeding of this capability in complex trait dissection and simulation is discussed.
Inverse finite-size scaling for high-dimensional significance analysis
NASA Astrophysics Data System (ADS)
Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki
2018-06-01
We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from surrogate data of much smaller scale than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.
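The finite-size scaling law is the paper's contribution and is not reproduced here; the sketch below shows only the generic surrogate-data thresholding step that IFSS accelerates, on a toy correlation problem with one planted dependence.

```python
# Generic surrogate-data significance test (baseline procedure only;
# data and statistic are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # samples x variables (toy data)
X[:, 1] += 0.8 * X[:, 0]                  # plant one real dependence

def max_abs_corr(M):
    C = np.corrcoef(M, rowvar=False)
    np.fill_diagonal(C, 0.0)
    return np.abs(C).max()

obs = max_abs_corr(X)

# Null distribution: permute each column independently to destroy couplings.
null = []
for _ in range(200):
    Xp = np.column_stack([rng.permutation(col) for col in X.T])
    null.append(max_abs_corr(Xp))

threshold = np.quantile(null, 0.95)
print(f"observed={obs:.3f}, 95% null threshold={threshold:.3f}, "
      f"significant={obs > threshold}")
```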
Increasing market efficiency in the stock markets
NASA Astrophysics Data System (ADS)
Yang, Jae-Suk; Kwak, Wooseop; Kaizoji, Taisei; Kim, In-Mook
2008-01-01
We study the temporal evolution of three stock markets: the Standard and Poor's 500 index, the Nikkei 225 Stock Average, and the Korea Composite Stock Price Index. We observe that the probability density function of the log-return has a fat tail, but the tail index has been increasing continuously in recent years. We also find that the variance of the autocorrelation function, the scaling exponent of the standard deviation, and the statistical complexity decrease over time, while the entropy density increases. We introduce a modified microscopic spin model and simulate it to confirm these increasing and decreasing tendencies in the statistical quantities. These findings indicate that these three stock markets are becoming more efficient.
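For readers unfamiliar with tail-index estimation, a standard Hill estimator is sketched below on synthetic fat- and thin-tailed "returns"; this is the generic method, not necessarily the estimator the authors used.

```python
# Hill estimator for the tail index of absolute log-returns (a sketch).
import numpy as np

def hill_tail_index(returns, k=100):
    """Estimate the tail exponent alpha from the k largest |returns|."""
    x = np.sort(np.abs(returns))[::-1]          # descending order
    logs = np.log(x[:k]) - np.log(x[k])         # excesses over the k-th
    return 1.0 / logs.mean()

rng = np.random.default_rng(1)
fat = rng.standard_t(df=3, size=10_000)         # fat-tailed toy "returns"
thin = rng.normal(size=10_000)
print(hill_tail_index(fat), hill_tail_index(thin))  # thinner tail -> larger alpha
```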
Turner-Stokes, Lynne; Bavikatte, Ganesh; Williams, Heather; Bill, Alan; Sephton, Keith
2016-09-08
To evaluate functional outcomes, care needs and cost-efficiency of hyperacute (HA) rehabilitation for a cohort of in-patients with complex neurological disability and unstable medical/surgical conditions. A multicentre cohort analysis of prospectively collected clinical data from the UK Rehabilitation Outcomes Collaborative (UKROC) national clinical database, 2012-2015. Two HA specialist rehabilitation services in England, providing different service models for HA rehabilitation. All patients admitted to each of the units with an admission rehabilitation complexity M score of ≥3 (N=190; mean age 46 (SD16) years; males:females 63:37%). Diagnoses were acquired brain injury (n=166; 87%), spinal cord injury (n=9; 5%), peripheral neurological conditions (n=9; 5%) and other (n=6; 3%). Specialist in-patient multidisciplinary rehabilitation combined with management and stabilisation of intercurrent medical and surgical problems. Rehabilitation complexity and medical acuity: Rehabilitation Complexity Scale-version 13. Dependency and care costs: Northwick Park Dependency Scale/Care Needs Assessment (NPDS/NPCNA). Functional independence: UK Functional Assessment Measure (UK FIM+FAM). (1) reduction in dependency and (2) cost-efficiency, measured as the time taken to offset rehabilitation costs by savings in NPCNA-estimated costs of on-going care in the community. The mean length of stay was 103 (SD66) days. Some differences were observed between the two units, which were in keeping with the different service models. However, both units showed a significant reduction in dependency and acuity between admission and discharge on all measures (Wilcoxon: p<0.001). For the 180 (95%) patients with complete NPCNA data, the mean episode cost was £77 119 (bootstrapped 95% CI £70 614 to £83 894) and the mean reduction in 'weekly care costs' was £462/week (95% CI 349 to 582). The mean time to offset the cost of rehabilitation was 27.6 months (95% CI 13.2 to 43.8). Despite its relatively high initial cost, specialist HA rehabilitation can be highly cost-efficient, producing substantial savings in on-going care costs, and relieving pressure in the acute care services.
Turner-Stokes, Lynne; Bavikatte, Ganesh; Williams, Heather; Bill, Alan; Sephton, Keith
2016-01-01
Objectives To evaluate functional outcomes, care needs and cost-efficiency of hyperacute (HA) rehabilitation for a cohort of in-patients with complex neurological disability and unstable medical/surgical conditions. Design A multicentre cohort analysis of prospectively collected clinical data from the UK Rehabilitation Outcomes Collaborative (UKROC) national clinical database, 2012–2015. Setting Two HA specialist rehabilitation services in England, providing different service models for HA rehabilitation. Participants All patients admitted to each of the units with an admission rehabilitation complexity M score of ≥3 (N=190; mean age 46 (SD16) years; males:females 63:37%). Diagnoses were acquired brain injury (n=166; 87%), spinal cord injury (n=9; 5%), peripheral neurological conditions (n=9; 5%) and other (n=6; 3%). Intervention Specialist in-patient multidisciplinary rehabilitation combined with management and stabilisation of intercurrent medical and surgical problems. Outcome measures Rehabilitation complexity and medical acuity: Rehabilitation Complexity Scale—version 13. Dependency and care costs: Northwick Park Dependency Scale/Care Needs Assessment (NPDS/NPCNA). Functional independence: UK Functional Assessment Measure (UK FIM+FAM). Primary outcomes: (1) reduction in dependency and (2) cost-efficiency, measured as the time taken to offset rehabilitation costs by savings in NPCNA-estimated costs of on-going care in the community. Results The mean length of stay was 103 (SD66) days. Some differences were observed between the two units, which were in keeping with the different service models. However, both units showed a significant reduction in dependency and acuity between admission and discharge on all measures (Wilcoxon: p<0.001). For the 180 (95%) patients with complete NPCNA data, the mean episode cost was £77 119 (bootstrapped 95% CI £70 614 to £83 894) and the mean reduction in ‘weekly care costs’ was £462/week (95% CI 349 to 582). The mean time to offset the cost of rehabilitation was 27.6 months (95% CI 13.2 to 43.8). Conclusions Despite its relatively high initial cost, specialist HA rehabilitation can be highly cost-efficient, producing substantial savings in on-going care costs, and relieving pressure in the acute care services. PMID:27609852
Structural identifiability of cyclic graphical models of biological networks with latent variables.
Wang, Yulin; Lu, Na; Miao, Hongyu
2016-06-13
Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually partially observed in experiment, which thus introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches necessarily call for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction with system equivalency maintained. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and is thus of higher resolution in comparison with many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has a significant potential to be extended to more complex network structures or high-dimensional systems.
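A toy version of the identifiability-matrix idea is sketched below: each row flags which parameters enter an identifiability equation, and a reduction loop marks a parameter identifiable once some equation is left with a single unknown. The paper's actual matrix operations are richer and preserve system equivalency in ways this toy does not capture.

```python
# Toy "identifiability matrix" reduction (illustrative assumption, not the
# paper's algorithm). Rows: equations; columns: unknown parameters.
import numpy as np

A = np.array([[1, 0, 0, 0],     # A[i, j] = 1 if parameter j appears
              [1, 1, 0, 0],     # in identifiability equation i
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=int)

identified = np.zeros(A.shape[1], dtype=bool)
changed = True
while changed:
    changed = False
    for row in A:
        unknown = row.astype(bool) & ~identified
        if unknown.sum() == 1:          # single unknown -> solvable
            identified |= unknown
            changed = True

for j, ok in enumerate(identified):
    print(f"parameter {j}: {'identifiable' if ok else 'unidentifiable'}")
# Here parameters 0 and 1 resolve; 2 and 3 only appear jointly and do not.
```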
Karamintziou, Sofia D; Custódio, Ana Luísa; Piallat, Brigitte; Polosan, Mircea; Chabardès, Stéphan; Stathis, Pantelis G; Tagaris, George A; Sakas, Damianos E; Polychronaki, Georgia E; Tsirogiannis, George L; David, Olivier; Nikita, Konstantina S
2017-01-01
Advances in the field of closed-loop neuromodulation call for analysis and modeling approaches capable of confronting challenges related to the complex neuronal response to stimulation and the presence of strong internal and measurement noise in neural recordings. Here we elaborate on the algorithmic aspects of a noise-resistant closed-loop subthalamic nucleus deep brain stimulation system for advanced Parkinson's disease and treatment-refractory obsessive-compulsive disorder, ensuring remarkable performance in terms of both efficiency and selectivity of stimulation, as well as computational speed. First, we propose an efficient method drawn from dynamical systems theory for the reliable assessment of significant nonlinear coupling between beta and high-frequency subthalamic neuronal activity, as a biomarker for feedback control. Further, we present a model-based strategy through which optimal parameters of stimulation for minimum energy desynchronizing control of neuronal activity are identified. The strategy integrates stochastic modeling and derivative-free optimization of neural dynamics based on quadratic modeling. On the basis of numerical simulations, we demonstrate the potential of the presented modeling approach to identify, at a relatively low computational cost, stimulation settings potentially associated with a significantly higher degree of efficiency and selectivity compared with stimulation settings determined post-operatively. Our data reinforce the hypothesis that model-based control strategies are crucial for the design of novel stimulation protocols at the backstage of clinical applications.
NASA Astrophysics Data System (ADS)
Velazquez, Antonio; Swartz, R. Andrew
2013-04-01
Renewable energy sources like wind are important technologies for alleviating dependence on fossil fuels. The drive to capture wind energy more efficiently has resulted in the emergence of more sophisticated wind-turbine designs, particularly Horizontal-Axis Wind Turbines (HAWTs). To promote efficiency, traditional finite element methods have been widely used to characterize the aerodynamics of these types of multi-body systems and improve their design. Given their aeroelastic behavior, tapered-swept blades offer the potential to optimize energy capture and decrease fatigue loads. Nevertheless, modeling such complex geometries requires huge computational effort, necessitating tradeoffs between faster, cheaper computation and reliability and numerical accuracy. Indeed, the computational cost and numerical effort invested, using traditional FE methods, to reproduce dependable aerodynamics of these complex-shape beams are sometimes prohibitive. A condensed Spinning Finite Element (SFE) method scheme is presented in this study, aimed at alleviating this issue by properly modeling wind-turbine rotor blades with tapered-swept cross-section variations of arbitrary order via Lagrangian equations. Axial-flexural-torsional coupling is carried out on axial deformation, torsion, in-plane bending and out-of-plane bending using super-convergent elements. Special attention is paid to the case of damped yaw effects, expressed within the described skew-symmetric damped gyroscopic matrix. Dynamics of the model are analyzed via modal analysis with complex eigenfrequencies. By means of mass, damped gyroscopic, and stiffness (axial-flexural-torsional coupling) matrix condensation (order reduction), numerical analysis is carried out for several prototypes with different tapered, swept, and curved variation intensities, and for a practical range of spinning velocities at different rotation angles. A convergence study of the resulting natural frequencies is performed to evaluate the dynamic collateral effects of tapered-swept blade profiles in spinning motion using this new model. A stability analysis of the postulated model's boundary conditions is carried out to test the convergence and integrity of the mathematical model. The proposed framework promises to be particularly suitable for characterizing models with complex-shape cross-sections at low computational cost.
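The matrix condensation step mentioned above is, in its simplest static form, Guyan reduction; a minimal sketch is given below (stiffness only, with an arbitrary spring-chain example; the SFE scheme condenses the mass and damped gyroscopic matrices consistently as well).

```python
# Static (Guyan) condensation: reduce a stiffness matrix to master DOFs.
import numpy as np

def guyan_condense(K, master):
    """K_red = K_mm - K_ms K_ss^{-1} K_sm over the retained (master) DOFs."""
    n = K.shape[0]
    m = np.asarray(master)
    s = np.setdiff1d(np.arange(n), m)
    Kmm, Kms = K[np.ix_(m, m)], K[np.ix_(m, s)]
    Ksm, Kss = K[np.ix_(s, m)], K[np.ix_(s, s)]
    T = -np.linalg.solve(Kss, Ksm)      # slave DOFs follow the masters
    return Kmm + Kms @ T

# 4-DOF spring chain (fixed-free, unit stiffness); keep DOFs 0 and 3.
K = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], float)
print(guyan_condense(K, [0, 3]))
```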
Pflock, Tobias; Dezi, Manuela; Venturoli, Giovanni; Cogdell, Richard J; Köhler, Jürgen; Oellerich, Silke
2008-01-01
Picosecond time-resolved fluorescence spectroscopy has been used in order to compare the fluorescence kinetics of detergent-solubilized and membrane-reconstituted light-harvesting 2 (LH2) complexes from the purple bacteria Rhodopseudomonas (Rps.) acidophila and Rhodobacter (Rb.) sphaeroides. LH2 complexes were reconstituted in phospholipid model membranes at different lipid:protein ratios and all samples were studied exciting with a wide range of excitation densities. While the detergent-solubilized LH2 complexes from Rps. acidophila showed monoexponential decay kinetics (τ_f = 980 ps) for excitation densities of up to 3×10^13 photons/(pulse·cm^2), the membrane-reconstituted LH2 complexes showed multiexponential kinetics even at low excitation densities and high lipid:protein ratios. The latter finding indicates an efficient clustering of LH2 complexes in the phospholipid membranes. Similar results were obtained for the LH2 complexes from Rb. sphaeroides.
Li, Hua; Zheng, Xiangtao; Koren, Viktoria; Vashist, Yogesh Kumar; Tsui, Tung Yu
2014-07-20
Small interfering RNA (siRNA) delivery remains a bottleneck for RNA interference (RNAi)-based therapies in the clinic. In the present study, a fusion protein with two cell-penetrating peptides (CPP), Hph1-Hph1, and a double-stranded RNA binding domain (dsRBD) was constructed for siRNA delivery: dsRBD was designed to bind siRNA, and the CPP would subsequently transport the dsRBD/siRNA complex into cells. We assessed the efficiency of the fusion protein, Hph1-Hph1-dsRBD, as an siRNA carrier. Effects on GAPDH and green fluorescent protein (GFP) genes, including calcium-condensation effects, were assessed by western blot, real-time polymerase chain reaction (RT-PCR), and flow cytometry analysis in vitro. Evaluations were also made in an in vivo heart transplantation model. The results demonstrated that the fusion protein, Hph1-Hph1-dsRBD, is highly efficient at delivering siRNA in vitro, and exhibits efficiency on GAPDH and GFP genes similar to or greater than that of lipofectamine. Interestingly, the calcium-condensation effects dramatically enhanced cellular uptake of the protein-siRNA complex. In vivo, Hph1-Hph1-dsRBD transferred and distributed targeted siRNA throughout the whole mouse heart graft. Together, these results indicate that Hph1-Hph1-dsRBD has potential as an siRNA carrier for applications in the clinic or in biomedical research.
Ramírez, Ana; Ruggiero, Melina; Aranaga, Carlos; Cataldi, Angel; Gutkind, Gabriel; de Waard, Jacobus H; Araque, María; Power, Pablo
2017-04-01
The objectives of this study were to determine the kinetic parameters of purified recombinant Bla Mab and Bla Mmas by spectrophotometry and to analyze the genetic environment of the bla Mab and bla Mmas genes in both species by polymerase chain reaction and sequencing; in addition, in silico models of both enzymes in complex with imipenem were obtained with modeling tools. Our results showed that Bla Mab and Bla Mmas have similar hydrolysis behavior, displaying high catalytic efficiencies toward penams, cephalothin, and nitrocefin; neither enzyme is well inhibited by clavulanate. Bla Mmas hydrolyzes imipenem at higher efficiency than cefotaxime and aztreonam. The closest structural homologs of Bla Mab and Bla Mmas are KPC-2 and SFC-1, which correlates with the mild carbapenemase activity toward imipenem observed at least for Bla Mmas. They also seem to differ from other class A β-lactamases by the presence of a more flexible Ω loop, which could affect hydrolysis efficiency against some antibiotics. A -35 consensus sequence (TCGACA) was identified embedded at the 3' end of MAB_2874, which may constitute the bla Mab and bla Mmas promoter. Our results suggest that the resistance mechanisms in fast-growing mycobacteria are probably evolving toward the production of β-lactamases with improved catalytic efficiencies against some of the drugs commonly used for the treatment of mycobacterial infections, endangering the use of important drugs like the carbapenems.
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
NASA Astrophysics Data System (ADS)
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation, these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, facilitating concurrent system and subsystem design exploration and the generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) Noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
Microsimulation Modeling for Health Decision Sciences Using R: A Tutorial.
Krijkamp, Eline M; Alarid-Escudero, Fernando; Enns, Eva A; Jalal, Hawre J; Hunink, M G Myriam; Pechlivanoglou, Petros
2018-04-01
Microsimulation models are becoming increasingly common in the field of decision modeling for health. Because microsimulation models are computationally more demanding than traditional Markov cohort models, the use of computer programming languages in their development has become more common. R is a programming language that has gained recognition within the field of decision modeling. It has the capacity to perform microsimulation models more efficiently than software commonly used for decision modeling, incorporate statistical analyses within decision models, and produce more transparent models and reproducible results. However, no clear guidance for the implementation of microsimulation models in R exists. In this tutorial, we provide a step-by-step guide to build microsimulation models in R and illustrate the use of this guide on a simple, but transferable, hypothetical decision problem. We guide the reader through the necessary steps and provide generic R code that is flexible and can be adapted for other models. We also show how this code can be extended to address more complex model structures and provide an efficient microsimulation approach that relies on vectorization solutions.
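The tutorial's code is in R; the same vectorization idea is sketched below in Python on a made-up three-state cohort model, with one random draw per individual per cycle replacing an inner loop over individuals.

```python
# Vectorized microsimulation sketch (toy states, probabilities, and costs;
# not the tutorial's example model).
import numpy as np

rng = np.random.default_rng(42)
P = np.array([[0.90, 0.08, 0.02],     # rows: from-state Healthy/Sick/Dead
              [0.10, 0.80, 0.10],     # cols: to-state
              [0.00, 0.00, 1.00]])    # Dead is absorbing
cost = np.array([100.0, 1500.0, 0.0]) # per-cycle cost by state
n_i, n_t = 100_000, 30                # individuals, cycles

state = np.zeros(n_i, dtype=int)      # everyone starts Healthy
total_cost = np.zeros(n_i)
for t in range(n_t):
    total_cost += cost[state]
    # One uniform draw per individual, compared against the cumulative
    # transition probabilities of that individual's current row of P.
    u = rng.random(n_i)
    state = (u[:, None] > P[state].cumsum(axis=1)).sum(axis=1)

print(f"mean 30-cycle cost: {total_cost.mean():.0f}")
print(f"share alive at end: {(state != 2).mean():.3f}")
```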
Models of service delivery for cancer genetic risk assessment and counseling.
Trepanier, Angela M; Allain, Dawn C
2014-04-01
Increasing awareness of and the potentially concomitant increasing demand for cancer genetic services is driving the need to explore more efficient models of service delivery. The aims of this study were to determine which service delivery models are most commonly used by genetic counselors, assess how often they are used, compare the efficiency of each model as well as impact on access to services, and investigate the perceived benefits and barriers of each. Full members of the NSGC Familial Cancer Special Interest Group who subscribe to its listserv were invited to participate in a web-based survey. Eligible respondents were asked which of ten defined service delivery models they use and specific questions related to aspects of model use. One-hundred ninety-two of the approximately 450 members of the listserv responded (42.7%); 177 (92.2%) had provided clinical service in the last year and were eligible to complete all sections of the survey. The four direct care models most commonly used were the (traditional) face-to-face pre- and post-test model (92.2%), the face-to-face pretest without face-to-face post-test model (86.5%), the post-test counseling only for complex results model (36.2%), and the post-test counseling for all results model (18.3%). Those using the face-to-face pretest only, post-test all, and post-test complex models reported seeing more new patients than when they used the traditional model, and these differences were statistically significant. There were no significant differences in appointment wait times or distances traveled by patients when comparing use of the traditional model to the other three models. Respondents recognize that a benefit of using alternative service delivery models is increased access to services; however, some are concerned that this may affect quality of care.
Simulation Study of CO2-EOR in Tight Oil Reservoirs with Complex Fracture Geometries
Zuloaga-Molero, Pavel; Yu, Wei; Xu, Yifei; Sepehrnoori, Kamy; Li, Baozhen
2016-01-01
The recent development of tight oil reservoirs has led to an increase in oil production in the past several years due to progress in horizontal drilling and hydraulic fracturing. However, the expected oil recovery factor from these reservoirs is still very low. CO2-based enhanced oil recovery is a suitable solution to improve the recovery. One challenge in estimating the recovery is to properly model complex hydraulic fracture geometries, which are often assumed to be planar due to the limitations of the local grid refinement approach. More flexible methods like the use of unstructured grids can significantly increase the computational demand. In this study, we introduce an efficient methodology of the embedded discrete fracture model to explicitly model complex fracture geometries. We build a compositional reservoir model to investigate the effects of complex fracture geometries on the performance of CO2 Huff-n-Puff and CO2 continuous injection. The results confirm that appropriate modelling of the fracture geometry plays a critical role in the estimation of the incremental oil recovery. This study also provides new insights into the understanding of the impacts of CO2 molecular diffusion, reservoir permeability, and natural fractures on the performance of CO2-EOR processes in tight oil reservoirs. PMID:27628131
SIM_EXPLORE: Software for Directed Exploration of Complex Systems
NASA Technical Reports Server (NTRS)
Burl, Michael; Wang, Esther; Enke, Brian; Merline, William J.
2013-01-01
Physics-based numerical simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. While such codes may provide the highest-fidelity representation of system behavior, they are often so slow to run that insight into the system is limited. Trying to understand the effects of inputs on outputs by conducting an exhaustive grid-based sweep over the input parameter space is simply too time-consuming. An alternative approach called "directed exploration" has been developed to harvest information from numerical simulators more efficiently. The basic idea is to employ active learning and supervised machine learning to cleverly choose at each step which simulation trials to run next based on the results of previous trials. SIM_EXPLORE is a new computer program that uses directed exploration to efficiently explore complex systems represented by numerical simulations. The software sequentially identifies and runs simulation trials that it believes will be most informative given the results of previous trials. The results of new trials are incorporated into the software's model of the system behavior. The updated model is then used to pick the next round of new trials. This process, implemented as a closed-loop system wrapped around existing simulation code, provides a means to improve the speed and efficiency with which a set of simulations can yield scientifically useful results. The software focuses on the case in which the feedback from the simulation trials is binary-valued, i.e., the learner is only informed of the success or failure of the simulation trial to produce a desired output. The software offers a number of choices for the supervised learning algorithm (the method used to model the system behavior given the results so far) and a number of choices for the active learning strategy (the method used to choose which new simulation trials to run given the current behavior model). The software also makes use of the LEGION distributed computing framework to leverage the power of a set of compute nodes. The approach has been demonstrated on a planetary science application in which numerical simulations are used to study the formation of asteroid families.
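A minimal version of this directed-exploration loop might look as follows, with a cheap stand-in "simulator" returning binary success/failure, an off-the-shelf classifier as the behavior model, and uncertainty sampling as the active-learning strategy; SIM_EXPLORE's own learner choices and LEGION distribution are not shown (scikit-learn is assumed available).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulator(x):                     # stand-in for an expensive code
    return int(np.sin(3 * x[0]) * np.cos(2 * x[1]) > 0.3)

pool = rng.uniform(0, np.pi, size=(2000, 2))          # candidate settings
idx = list(rng.choice(len(pool), 20, replace=False))  # seed trials
y = [simulator(pool[i]) for i in idx]
while len(set(y)) < 2:                # ensure both outcomes in the seed set
    i = int(rng.integers(len(pool)))
    idx.append(i); y.append(simulator(pool[i]))

model = RandomForestClassifier(n_estimators=200, random_state=0)
for _ in range(10):
    model.fit(pool[idx], y)
    p = model.predict_proba(pool)[:, 1]
    # Uncertainty sampling: run next the trial the model is least sure about.
    candidates = np.setdiff1d(np.arange(len(pool)), idx)
    nxt = candidates[np.argmin(np.abs(p[candidates] - 0.5))]
    idx.append(int(nxt)); y.append(simulator(pool[nxt]))

print(f"ran {len(idx)} trials; estimated success fraction "
      f"{model.predict(pool).mean():.2f}")
```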
A Spectral Method for Spatial Downscaling
Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.
2014-01-01
Summary Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037
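A toy 1-D illustration of the underlying scale-separation idea is sketched below: model output and (synthetic) monitoring data are each split into large- and small-scale components with an FFT, and the correlation of each component is checked separately. The paper's actual spectral model (basis, calibration, prediction) is not reproduced.

```python
# Scale-by-scale comparison of "model output" against "monitoring data"
# (all signals here are synthetic illustrations).
import numpy as np

rng = np.random.default_rng(3)
n = 256
x = np.linspace(0, 1, n, endpoint=False)

truth = np.sin(2 * np.pi * 2 * x) + 0.3 * np.sin(2 * np.pi * 40 * x)
model = 1.1 * np.sin(2 * np.pi * 2 * x) + 0.3 * rng.normal(size=n)
obs = truth + 0.1 * rng.normal(size=n)          # dense "monitors" for the toy

def split_scales(f, cutoff=10):
    F = np.fft.rfft(f)
    low, high = F.copy(), F.copy()
    low[cutoff:] = 0.0                          # keep only large scales
    high[:cutoff] = 0.0                         # keep only small scales
    return np.fft.irfft(low, n=len(f)), np.fft.irfft(high, n=len(f))

m_lo, m_hi = split_scales(model)
o_lo, o_hi = split_scales(obs)
print("large-scale corr:", round(np.corrcoef(m_lo, o_lo)[0, 1], 2))
print("small-scale corr:", round(np.corrcoef(m_hi, o_hi)[0, 1], 2))
# -> use model output for prediction only at scales where it tracks the data
```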
Eigenvalue Tests and Distributions for Small Sample Order Determination for Complex Wishart Matrices
1994-08-13
The use of experimental design to find the operating maximum power point of PEM fuel cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crăciunescu, Aurelian; Pătularu, Laurenţiu; Ciumbulea, Gloria
2015-03-10
Proton Exchange Membrane (PEM) fuel cells are difficult to model due to their complex nonlinear nature. In this paper, the development of a PEM fuel cell mathematical model based on the Design of Experiments methodology is described. Design of Experiments provides a very efficient methodology for obtaining a mathematical model of the studied multivariable system with only a few experiments. The obtained results can be used for optimization and control of PEM fuel cell systems.
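As an illustration of the methodology, the sketch below fits a main-effects-plus-interactions polynomial to a two-level, three-factor factorial design; the factors and the toy response are assumptions, not the paper's experimental data.

```python
# Two-level full factorial design + least-squares polynomial response model.
import numpy as np
from itertools import product

# Coded levels (-1/+1) for three illustrative factors:
# temperature T, humidity H, air flow Q.
X = np.array(list(product([-1, 1], repeat=3)), dtype=float)

def measured_power(x):            # stand-in for running the actual cell
    t, h, q = x
    return 10 + 2.0 * t + 1.0 * h + 0.5 * q - 0.8 * t * h

y = np.array([measured_power(x) for x in X])

# Fit intercept, main effects, and two-factor interactions.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 2],
                     X[:, 0] * X[:, 1], X[:, 0] * X[:, 2], X[:, 1] * X[:, 2]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, c in zip(["1", "T", "H", "Q", "TH", "TQ", "HQ"], coef):
    print(f"{name:>2}: {c:+.2f}")  # recovers the simulated effect sizes
```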
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
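A bare-bones version of Monte Carlo reliability estimation with Weibull component lifetimes is sketched below for a 2-out-of-3 toy system; it omits HARP's behavioral decomposition, fault/error-handling models, and the variance reduction techniques that make MC-HARP practical for highly reliable systems.

```python
# Plain Monte Carlo system reliability with Weibull (non-constant rate)
# component failures; all parameter values are assumed for illustration.
import numpy as np

rng = np.random.default_rng(7)
mission_time = 1000.0                 # hours
shape, scale = 1.5, 4000.0            # Weibull beta, eta
n_trials, n_components = 200_000, 3

# Weibull lifetime draw via inverse CDF: t = eta * (-ln U)^(1/beta).
t_fail = scale * (-np.log(rng.random((n_trials, n_components)))) ** (1 / shape)

# System survives the mission if at least 2 of 3 components survive.
alive = (t_fail > mission_time).sum(axis=1) >= 2
R = alive.mean()
se = np.sqrt(R * (1 - R) / n_trials)  # binomial standard error
print(f"R(t={mission_time:.0f} h) = {R:.4f} +/- {se:.4f}")
```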
On unified modeling, theory, and method for solving multi-scale global optimization problems
NASA Astrophysics Data System (ADS)
Gao, David Yang
2016-10-01
A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles in correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
Model of brain activation predicts the neural collective influence map of the brain
Morone, Flaviano; Roth, Kevin; Min, Byungjoon; Makse, Hernán A.
2017-01-01
Efficient complex systems have a modular structure, but modularity does not guarantee robustness, because efficiency also requires an ingenious interplay of the interacting modular components. The human brain is the elemental paradigm of an efficient robust modular system interconnected as a network of networks (NoN). Understanding the emergence of robustness in such modular architectures from the interconnections of its parts is a longstanding challenge that has concerned many scientists. Current models of dependencies in NoN inspired by the power grid express interactions among modules with fragile couplings that amplify even small shocks, thus preventing functionality. Therefore, we introduce a model of NoN to shape the pattern of brain activations to form a modular environment that is robust. The model predicts the map of neural collective influencers (NCIs) in the brain, through the optimization of the influence of the minimal set of essential nodes responsible for broadcasting information to the whole-brain NoN. Our results suggest intervention protocols to control brain activity by targeting influential neural nodes predicted by network theory. PMID:28351973
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mehrez, Loujaine; Ghanem, Roger; Aitharaju, Venkat
Design of non-crimp fabric (NCF) composites entails major challenges pertaining to (1) the complex fine-scale morphology of the constituents, (2) the manufacturing-induced spatial inconsistency of this morphology, and thus (3) the difficulty of building reliable, robust, and efficient computational surrogate models that account for this complex nature. Traditional approaches to constructing computational surrogate models have been to average over the fluctuations of the material properties at different length scales. This fails to account for the fine-scale features and fluctuations in morphology, the material properties of the constituents, and fine-scale phenomena such as damage and cracks. In addition, it fails to accurately predict the scatter in macroscopic properties, which is vital to the design process and behavior prediction. In this work, funded in part by the Department of Energy, we present an approach for addressing these challenges by relying on polynomial chaos representations of both input parameters and material properties at different scales. Moreover, we emphasize the efficiency and robustness of integrating the polynomial chaos expansion with multiscale tools to perform multiscale assimilation, characterization, propagation, and prediction, all of which are necessary to construct the data-driven surrogate models required to design composites under uncertainty. These data-driven constructions provide an accurate map from parameters (and their uncertainties) at all scales to the system-level behavior relevant for design. While this perspective is quite general and applicable to all multiscale systems, NCF composites present a particular hierarchy of scales that permits the efficient implementation of these concepts.
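A minimal, one-dimensional polynomial chaos illustration is sketched below: a Gaussian germ is propagated through a toy model, a probabilists'-Hermite expansion is fitted by regression, and moments are read off the coefficients. The multiscale NCF surrogates described above couple many such representations across scales.

```python
# 1-D polynomial chaos expansion fitted by regression (toy model only).
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from math import factorial

rng = np.random.default_rng(11)
xi = rng.normal(size=2000)                 # standard normal germ
stiffness = np.exp(0.1 * xi)               # toy uncertain input property
response = stiffness / (1 + stiffness)     # toy model output

deg = 4
Psi = hermevander(xi, deg)                  # probabilists' Hermite basis He_k
coef, *_ = np.linalg.lstsq(Psi, response, rcond=None)

# With He_k orthogonal under N(0,1) and E[He_k^2] = k!:
# mean = c_0, variance = sum_{k>0} c_k^2 * k!
mean = coef[0]
var = sum(c * c * factorial(k) for k, c in enumerate(coef) if k > 0)
print(f"PCE mean={mean:.4f} var={var:.6f}")
print(f"MC  mean={response.mean():.4f} var={response.var():.6f}")
```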
NASA Astrophysics Data System (ADS)
Juhász, Imre Benedek; Csurgay, Árpád I.
2018-04-01
In recent years, the role of molecular vibrations in exciton energy transfer taking place during the first stage of photosynthesis has attracted increasing interest. Here, we present a model formulated as a Lindblad-type master equation that enables us to investigate the impact of undamped and especially damped intramolecular vibrational modes on the exciton energy transfer, particularly its efficiency. Our simulations confirm the already reported effects that the presence of an intramolecular vibrational mode can compensate for the energy detuning of electronic states, thus promoting the energy transfer; and, moreover, that the damping of such a vibrational mode (in other words, vibrational relaxation) can further enhance the efficiency of the process by generating directionality in the energy flow. As a novel result, we show that this enhancement surpasses the one caused by pure dephasing, and we present its dependence on various system parameters (time constants of the environment-induced relaxation and excitation processes, detuning of the electronic energy levels, frequency of the intramolecular vibrational modes, Huang-Rhys factors, temperature) in dimer model systems. We demonstrate that vibrational-relaxation-enhanced exciton energy transfer (VREEET) is robust against changes in these characteristics of the system and occurs in wide ranges of the investigated parameters. With simulations performed on a heptamer model inspired by the Fenna-Matthews-Olson (FMO) complex, we show that this mechanism can be even more significant in larger systems at T = 300 K. Our results suggest that VREEET might be prevalent in light-harvesting complexes.
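The flavor of such simulations can be conveyed with a toy two-site Lindblad model, sketched below in arbitrary units: pure dephasing drives the detuned dimer toward equal populations, while a "downhill" collapse operator generates directional transfer. The collapse operator here acts directly between electronic states, a crude stand-in for the paper's damped intramolecular vibrational modes.

```python
# Toy Lindblad dynamics of a detuned dimer (arbitrary units, RK4 integration).
import numpy as np

def lindblad_rhs(rho, H, Ls):
    d = -1j * (H @ rho - rho @ H)
    for L in Ls:
        Ld = L.conj().T
        d += L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    return d

def evolve(rho, H, Ls, dt, steps):
    for _ in range(steps):
        k1 = lindblad_rhs(rho, H, Ls)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, Ls)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, Ls)
        k4 = lindblad_rhs(rho + dt * k3, H, Ls)
        rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

dE, J = 1.0, 0.2                                  # detuning, coupling
H = np.array([[dE, J], [J, 0.0]], dtype=complex)  # basis: [donor, acceptor]
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # excitation on donor

dephase = np.sqrt(0.5) * np.array([[1, 0], [0, -1]], dtype=complex)
relax = np.sqrt(0.5) * np.array([[0, 0], [1, 0]], dtype=complex)  # donor->acceptor

for label, Ls in [("pure dephasing", [dephase]),
                  ("downhill relaxation", [relax])]:
    rho = evolve(rho0.copy(), H, Ls, dt=0.01, steps=20_000)
    print(f"{label}: acceptor population = {rho[1, 1].real:.3f}")
# Dephasing equilibrates toward 0.5; relaxation funnels population downhill.
```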
NASA Astrophysics Data System (ADS)
Siade, Adam J.; Hall, Joel; Karelse, Robert N.
2017-11-01
Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues related to real-world groundwater models such as computational expense, existing observation data, high parameter dimensionality, etc. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. Additionally, a heuristic methodology based on the concept of the greedy algorithm is proposed for developing robust designs from subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
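A toy version of the greedy, worst-case (minimax) selection over posterior parameter samples is sketched below, using a D-optimality-style log-determinant criterion on a two-parameter exponential model; all model and posterior details are illustrative assumptions.

```python
# Greedy minimax design selection over posterior parameter samples.
import numpy as np

rng = np.random.default_rng(5)
x_cand = np.linspace(0.1, 10, 40)                  # candidate sampling times
theta_samples = rng.normal([0.5, 2.0], [0.1, 0.3], size=(25, 2))  # "posterior"

def jacobian_row(x, th):
    k, a = th                                      # toy model: y = a*exp(-k x)
    return np.array([-a * x * np.exp(-k * x), np.exp(-k * x)])

def criterion(sel, th, eps=1e-6):
    """log det of the Fisher-information-like matrix for the chosen design."""
    J = np.array([jacobian_row(x_cand[i], th) for i in sel])
    return np.linalg.slogdet(J.T @ J + eps * np.eye(2))[1]

selected = []
for _ in range(4):                                 # pick 4 new observations
    best, best_val = None, -np.inf
    for c in range(len(x_cand)):
        if c in selected:
            continue
        # Robustness: score a candidate by its worst case over the samples.
        worst = min(criterion(selected + [c], th) for th in theta_samples)
        if worst > best_val:
            best, best_val = c, worst
    selected.append(best)

print("chosen sampling times:", np.round(x_cand[selected], 2))
```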
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-07-01
Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-12-01
Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
Maximizing mutagenesis with solubilized CRISPR-Cas9 ribonucleoprotein complexes.
Burger, Alexa; Lindsay, Helen; Felker, Anastasia; Hess, Christopher; Anders, Carolin; Chiavacci, Elena; Zaugg, Jonas; Weber, Lukas M; Catena, Raul; Jinek, Martin; Robinson, Mark D; Mosimann, Christian
2016-06-01
CRISPR-Cas9 enables efficient sequence-specific mutagenesis for creating somatic or germline mutants of model organisms. Key constraints in vivo remain the expression and delivery of active Cas9-sgRNA ribonucleoprotein complexes (RNPs) with minimal toxicity, variable mutagenesis efficiencies depending on targeting sequence, and high mutation mosaicism. Here, we apply in vitro assembled, fluorescent Cas9-sgRNA RNPs in solubilizing salt solution to achieve maximal mutagenesis efficiency in zebrafish embryos. MiSeq-based sequence analysis of targeted loci in individual embryos using CrispRVariants, a customized software tool for mutagenesis quantification and visualization, reveals efficient bi-allelic mutagenesis that reaches saturation at several tested gene loci. Such virtually complete mutagenesis exposes loss-of-function phenotypes for candidate genes in somatic mutant embryos for subsequent generation of stable germline mutants. We further show that targeting of non-coding elements in gene regulatory regions using saturating mutagenesis uncovers functional control elements in transgenic reporters and endogenous genes in injected embryos. Our results establish that optimally solubilized, in vitro assembled fluorescent Cas9-sgRNA RNPs provide a reproducible reagent for direct and scalable loss-of-function studies and applications beyond zebrafish experiments that require maximal DNA cutting efficiency in vivo.
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.
Generalized Correlation Coefficient for Non-Parametric Analysis of Microarray Time-Course Data.
Tan, Qihua; Thomassen, Mads; Burton, Mark; Mose, Kristian Fredløv; Andersen, Klaus Ejner; Hjelmborg, Jacob; Kruse, Torben
2017-06-06
Modeling complex time-course patterns is a challenging issue in microarray studies due to the complex gene expression patterns arising in response to the time-course experiment. We introduce the generalized correlation coefficient and propose a combinatory approach for detecting, testing and clustering heterogeneous time-course gene expression patterns. Application of the method identified nonlinear time-course patterns in high agreement with parametric analysis. We conclude that the non-parametric nature of the generalized correlation analysis makes it a useful and efficient tool for analyzing microarray time-course data and for exploring complex relationships in omics data, for studying their association with disease and health.
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, the Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and the parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations.
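For reference, the information-criteria ranking step can be sketched in a few lines (generic Gaussian-likelihood formulas; the study's models, weights, and data are not reproduced):

```python
# Rank alternative models by AICc and BIC from least-squares fit summaries.
# n: observations, k: parameters (incl. variance), sse: sum of squared
# (weighted) residuals.
import numpy as np

def aicc_bic(sse, n, k):
    ll = -0.5 * n * (np.log(2 * np.pi * sse / n) + 1)  # Gaussian max log-lik
    aic = -2 * ll + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)         # small-sample correction
    bic = -2 * ll + k * np.log(n)
    return aicc, bic

n = 50
for name, sse, k in [("homogeneous K", 12.4, 3),       # illustrative numbers
                     ("zoned K", 9.8, 5),
                     ("interpolated K", 9.1, 8)]:
    aicc, bic = aicc_bic(sse, n, k)
    print(f"{name:>16}: AICc={aicc:7.2f}  BIC={bic:7.2f}")
```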
Thrust and efficiency model for electron-driven magnetic nozzles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, Justin M.; Choueiri, Edgar Y.
2013-10-15
A performance model is presented for magnetic nozzle plasmas driven by electron thermal expansion to investigate how the thrust coefficient and beam divergence efficiency scale with the incoming plasma flow and magnetic field geometry. Using a transformation from cylindrical to magnetic coordinates, an approximate analytical solution is derived to the axisymmetric two-fluid equations for a collisionless plasma flow along an applied magnetic field. This solution yields an expression for the half-width at half-maximum of the plasma density profile in the far-downstream region, from which simple scaling relations for the thrust coefficient and beam divergence efficiency are derived. It is found that the beam divergence efficiency is most sensitive to the density profile of the flow into the nozzle throat, with the highest efficiencies occurring for plasmas concentrated along the nozzle axis. Increasing the expansion ratio of the magnetic field leads to efficiency improvements that are more pronounced for incoming plasmas that are not concentrated along the axis. This implies that the additional magnet required to increase the expansion ratio may be worth the added complexity for plasma sources that exhibit poor confinement.
Viral RNAi suppressor reversibly binds siRNA to outcompete Dicer and RISC via multiple-turnover
Rawlings, Renata A.; Krishnan, Vishalakshi; Walter, Nils G.
2011-01-01
RNA interference (RNAi) is a conserved gene regulatory mechanism employed by most eukaryotes as a key component of their innate immune response against viruses and retrotransposons. During viral infection, the RNase III-type endonuclease Dicer cleaves viral double-stranded RNA into small interfering RNAs (siRNAs), 21–24 nucleotides in length, and helps load them into the RNA-induced silencing complex (RISC) to guide cleavage of complementary viral RNA. As a countermeasure, many viruses have evolved viral RNA silencing suppressor (RSS) proteins that tightly, and presumably quantitatively, bind siRNAs to thwart RNAi-mediated degradation. Viral RSS proteins also act across kingdoms as potential immunosuppressors in gene therapeutic applications. Here we report fluorescence quenching and electrophoretic mobility shift assays that probe siRNA binding by the dimeric RSS p19 from Carnation Italian Ringspot Virus (CIRV), as well as by human Dicer and RISC assembly complexes. We find that the siRNA:p19 interaction is readily reversible, characterized by rapid binding ((1.69 ± 0.07) × 10^8 M^-1 s^-1) and marked dissociation (k_off = 0.062 ± 0.002 s^-1). We also observe that p19 efficiently competes with recombinant Dicer and inhibits formation of RISC-related assembly complexes found in human cell extract. Computational modeling based on these results provides evidence for the transient formation of a ternary complex between siRNA, human Dicer, and p19. An expanded model of RNA silencing indicates that multiple-turnover by reversible binding of siRNAs potentiates the efficiency of the suppressor protein. Our predictive model is expected to be applicable to the dosing of p19 as a silencing suppressor in viral gene therapy. PMID:21354178
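The quoted rate constants already pin down the equilibrium: the sketch below computes K_d = k_off/k_on (~0.37 nM) and integrates a simple 1:1 reversible-binding ODE; the initial concentrations are arbitrary illustrations, not the paper's assay conditions.

```python
# Reversible 1:1 siRNA:p19 binding from the measured rate constants.
import numpy as np

kon, koff = 1.69e8, 0.062            # M^-1 s^-1, s^-1 (quoted above)
Kd = koff / kon
print(f"Kd = {Kd * 1e9:.2f} nM")     # tight but reversible binding

# Forward-Euler integration of d[C]/dt = kon*[S][P] - koff*[C].
S0, P0 = 10e-9, 20e-9                # 10 nM siRNA, 20 nM p19 dimer (assumed)
C, dt = 0.0, 1e-3
for _ in range(int(200 / dt)):       # 200 s of binding
    S, P = S0 - C, P0 - C
    C += dt * (kon * S * P - koff * C)

# Compare with the analytic 1:1 equilibrium solution.
b = S0 + P0 + Kd
C_eq = (b - np.sqrt(b * b - 4 * S0 * P0)) / 2
print(f"bound siRNA after 200 s: {C / S0:.1%} (equilibrium: {C_eq / S0:.1%})")
```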
Viral RNAi suppressor reversibly binds siRNA to outcompete Dicer and RISC via multiple turnover.
Rawlings, Renata A; Krishnan, Vishalakshi; Walter, Nils G
2011-04-29
RNA interference is a conserved gene regulatory mechanism employed by most eukaryotes as a key component of their innate immune response to viruses and retrotransposons. During viral infection, the RNase-III-type endonuclease Dicer cleaves viral double-stranded RNA into small interfering RNAs (siRNAs) 21-24 nucleotides in length and helps load them into the RNA-induced silencing complex (RISC) to guide the cleavage of complementary viral RNA. As a countermeasure, many viruses have evolved viral RNA silencing suppressors (RSS) that tightly, and presumably quantitatively, bind siRNAs to thwart RNA-interference-mediated degradation. Viral RSS proteins also act across kingdoms as potential immunosuppressors in gene therapeutic applications. Here we report fluorescence quenching and electrophoretic mobility shift assays that probe siRNA binding by the dimeric RSS p19 from Carnation Italian Ringspot Virus, as well as by human Dicer and RISC assembly complexes. We find that the siRNA:p19 interaction is readily reversible, characterized by rapid binding [(1.69 ± 0.07) × 10⁸ M⁻¹ s⁻¹] and marked dissociation (k_off = 0.062 ± 0.002 s⁻¹). We also observe that p19 efficiently competes with recombinant Dicer and inhibits the formation of RISC-related assembly complexes found in human cell extract. Computational modeling based on these results provides evidence for the transient formation of a ternary complex between siRNA, human Dicer, and p19. An expanded model of RNA silencing indicates that multiple turnover by reversible binding of siRNAs potentiates the efficiency of the suppressor protein. Our predictive model is expected to be applicable to the dosing of p19 as a silencing suppressor in viral gene therapy. Copyright © 2011 Elsevier Ltd. All rights reserved.
White, Corey J; Speelman, Amy L; Kupper, Claudia; Demeshko, Serhiy; Meyer, Franc; Shanahan, James P; Alp, E Ercan; Hu, Michael; Zhao, Jiyong; Lehnert, Nicolai
2018-02-21
Flavodiiron nitric oxide reductases (FNORs) are a subclass of flavodiiron proteins (FDPs) capable of preferential binding and subsequent reduction of NO to N₂O. FNORs are found in certain pathogenic bacteria, equipping them with resistance to nitrosative stress, generated as a part of the immune defense in humans, and allowing them to proliferate. Here, we report the spectroscopic characterization and detailed reactivity studies of the diiron dinitrosyl model complex [Fe₂(BPMP)(OPr)(NO)₂](OTf)₂ for the FNOR active site that is capable of reducing NO to N₂O [Zheng et al., J. Am. Chem. Soc. 2013, 135, 4902-4905]. Using UV-vis spectroscopy, cyclic voltammetry, and spectro-electrochemistry, we show that one reductive equivalent is in fact sufficient for the quantitative generation of N₂O, following a semireduced reaction mechanism. This reaction is very efficient and produces N₂O with a first-order rate constant k > 10² s⁻¹. Further isotope labeling studies confirm an intramolecular N-N coupling mechanism, consistent with the rapid time scale of the reduction and a very low barrier for N-N bond formation. Accordingly, the reaction proceeds at -80 °C, allowing for the direct observation of the mixed-valent product of the reaction. At higher temperatures, the initial reaction product is unstable and decays, ultimately generating the diferrous complex [Fe₂(BPMP)(OPr)₂](OTf) and an unidentified ferric product. These results combined offer deep insight into the mechanism of NO reduction by the relevant model complex [Fe₂(BPMP)(OPr)(NO)₂]²⁺ and provide direct evidence that the semireduced mechanism would constitute a highly efficient pathway to accomplish NO reduction to N₂O in FNORs and in synthetic catalysts.
NASA Astrophysics Data System (ADS)
Fassi, F.; Achille, C.; Mandelli, A.; Rechichi, F.; Parri, S.
2015-02-01
The work is the final part of a multi-year research project on the Milan Cathedral, which focused on the complete survey and three-dimensional modeling of the Great Spire (Fassi et al., 2011) and the two altars in the transept. The main purpose of the work was to prepare support data for the maintenance operations that have involved the cathedral since 2009 and are still in progress. The research began by identifying which methods would allow an expeditious but comprehensive survey of a complex architectural structure as a whole (Achille et al., 2012). The subsequent research focused mainly on finding an efficient method to visualize, use, and share the resulting 3D model.
Margaria, Tiziana; Kubczak, Christian; Steffen, Bernhard
2008-04-25
With Bio-jETI, we introduce a service platform for interdisciplinary work on biological application domains and illustrate its use in a concrete application concerning statistical data processing in R and xcms for an LC/MS analysis of FAAH gene knockout. Bio-jETI uses the jABC environment for service-oriented modeling and design as a graphical process modeling tool and the jETI service integration technology for remote tool execution. As a service definition and provisioning platform, Bio-jETI has the potential to become a core technology in interdisciplinary service orchestration and technology transfer. Domain experts, like biologists not trained in computer science, directly define complex service orchestrations as process models and use efficient and complex bioinformatics tools in a simple and intuitive way.
Stabilization of Large Generalized Lotka-Volterra Foodwebs By Evolutionary Feedback
NASA Astrophysics Data System (ADS)
Ackland, G. J.; Gallagher, I. D.
2004-10-01
Conventional ecological models show that complexity destabilizes foodwebs, suggesting that foodwebs should have neither large numbers of species nor a large number of interactions. However, in nature the opposite appears to be the case. Here we show that if the interactions between species are allowed to evolve within a generalized Lotka-Volterra model, such stabilizing feedbacks and weak interactions emerge automatically. Moreover, we show that trophic levels also emerge spontaneously from the evolutionary approach, and the efficiency of the unperturbed ecosystem increases with time. The key to stability in large foodwebs appears to arise not from complexity per se but from evolution at the level of the ecosystem which favors stabilizing (negative) feedbacks.
Stabilization of large generalized Lotka-Volterra foodwebs by evolutionary feedback.
Ackland, G J; Gallagher, I D
2004-10-08
Conventional ecological models show that complexity destabilizes foodwebs, suggesting that foodwebs should have neither large numbers of species nor a large number of interactions. However, in nature the opposite appears to be the case. Here we show that if the interactions between species are allowed to evolve within a generalized Lotka-Volterra model, such stabilizing feedbacks and weak interactions emerge automatically. Moreover, we show that trophic levels also emerge spontaneously from the evolutionary approach, and the efficiency of the unperturbed ecosystem increases with time. The key to stability in large foodwebs appears to arise not from complexity per se but from evolution at the level of the ecosystem which favors stabilizing (negative) feedbacks.
The quest for solvable multistate Landau-Zener models
Sinitsyn, Nikolai A.; Chernyak, Vladimir Y.
2017-05-24
Recently, integrability conditions (ICs) in multistate Landau-Zener (MLZ) theory were proposed. They describe common properties of all known solved systems with linearly time-dependent Hamiltonians. Here we show that ICs enable an efficient computer-assisted search for new solvable MLZ models that span a complexity range from several interacting states to mesoscopic systems with many-body dynamics and a combinatorially large phase space. This diversity suggests that nontrivial solvable MLZ models are numerous. Additionally, we refine the formulation of the ICs and extend the class of solvable systems to models with points of multiple diabatic level crossing.
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties, and understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models, and low-complexity surrogate systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas including physical and biological processes, from climate modeling to systems biology.
Complex food webs prevent competitive exclusion among producer species.
Brose, Ulrich
2008-11-07
Herbivorous top-down forces and bottom-up competition for nutrients determine the coexistence and relative biomass patterns of producer species. Combining models of predator-prey and producer-nutrient interactions with a structural model of complex food webs, I investigated these two aspects in a dynamic food-web model. While competitive exclusion leads to persistence of only one producer species in 99.7% of the simulated simple producer communities without consumers, embedding the same producer communities in complex food webs generally yields producer coexistence. In simple producer communities, the producers with the most efficient nutrient-intake rates increase in biomass until they competitively exclude inferior producers. In food webs, herbivory predominantly reduces the biomass density of those producers that dominated in producer communities, which yields a more even biomass distribution. In contrast to prior analyses of simple modules, this facilitation of producer coexistence by herbivory does not require a trade-off between the nutrient-intake efficiency and the resistance to herbivory. The local network structure of food webs (top-down effects of the number of herbivores and the herbivores' maximum consumption rates) and the nutrient supply (bottom-up effect) interactively determine the relative biomass densities of the producer species. A strong negative feedback loop emerges in food webs: factors that increase producer biomasses also increase herbivory, which reduces producer biomasses. This negative feedback loop regulates the coexistence and biomass patterns of the producers by balancing biomass increases of producers and biomass fluxes to herbivores, which prevents competitive exclusion.
Multiscale Mathematics for Biomass Conversion to Renewable Hydrogen
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plechac, Petr; Vlachos, Dionisios; Katsoulakis, Markos
2013-09-05
The overall objective of this project is to develop multiscale models for understanding and eventually designing complex processes for renewables. To the best of our knowledge, our work is the first attempt at modeling complex reacting systems, whose performance relies on underlying multiscale mathematics. Our specific application lies at the heart of biofuels initiatives of DOE and entails modeling of catalytic systems, to enable economic, environmentally benign, and efficient conversion of biomass into either hydrogen or valuable chemicals. Specific goals include: (i) Development of rigorous spatio-temporal coarse-grained kinetic Monte Carlo (KMC) mathematics and simulation for microscopic processes encountered in biomass transformation. (ii) Development of hybrid multiscale simulation that links stochastic simulation to a deterministic partial differential equation (PDE) model for an entire reactor. (iii) Development of hybrid multiscale simulation that links KMC simulation with quantum density functional theory (DFT) calculations. (iv) Development of parallelization of models of (i)-(iii) to take advantage of Petaflop computing and enable real world applications of complex, multiscale models. In this NCE period, we continued addressing these objectives and completed the proposed work. Main initiatives, key results, and activities are outlined.
The use of discrete-event simulation modelling to improve radiation therapy planning processes.
Werker, Greg; Sauré, Antoine; French, John; Shechter, Steven
2009-07-01
The planning portion of the radiation therapy treatment process at the British Columbia Cancer Agency is efficient but nevertheless contains room for improvement. The purpose of this study is to show how a discrete-event simulation (DES) model can be used to represent this complex process and to suggest improvements that may reduce the planning time and ultimately reduce overall waiting times. A simulation model of the radiation therapy (RT) planning process was constructed using the Arena simulation software, representing the complexities of the system. Several types of inputs feed into the model; these inputs come from historical data, a staff survey, and interviews with planners. The simulation model was validated against historical data and then used to test various scenarios to identify and quantify potential improvements to the RT planning process. Simulation modelling is an attractive tool for describing complex systems, and can be used to identify improvements to the processes involved. It is possible to use this technique in the area of radiation therapy planning with the intent of reducing process times and subsequent delays for patient treatment. In this particular system, reducing the variability and length of oncologist-related delays contributes most to improving the planning time.
Hormann, Jan; Malina, Jaroslav; Lemke, Oliver; Hülsey, Max J; Wedepohl, Stefanie; Potthoff, Jan; Schmidt, Claudia; Ott, Ingo; Keller, Bettina G; Brabec, Viktor; Kulak, Nora
2018-05-07
Many drugs that are applied in anticancer therapy such as the anthracycline doxorubicin contain DNA-intercalating 9,10-anthraquinone (AQ) moieties. When Cu(II) cyclen complexes were functionalized with up to three (2-anthraquinonyl)methyl substituents, they efficiently inhibited DNA and RNA synthesis resulting in high cytotoxicity (selective for cancer cells) accompanied by DNA condensation/aggregation phenomena. Molecular modeling suggests an unusual bisintercalation mode with only one base pair between the two AQ moieties and the metal complex as a linker. A regioisomer, in which the AQ moieties point in directions unfavorable for such an interaction, had a much weaker biological activity. The ligands alone and corresponding Zn(II) complexes (used as redox inert control compounds) also exhibited lower activity.
Mapping the developmental constraints on working memory span performance.
Bayliss, Donna M; Jarrold, Christopher; Baddeley, Alan D; Gunn, Deborah M; Leigh, Eleanor
2005-07-01
This study investigated the constraints underlying developmental improvements in complex working memory span performance among 120 children of between 6 and 10 years of age. Independent measures of processing efficiency, storage capacity, rehearsal speed, and basic speed of processing were assessed to determine their contribution to age-related variance in complex span. Results showed that developmental improvements in complex span were driven by 2 age-related but separable factors: 1 associated with general speed of processing and 1 associated with storage ability. In addition, there was an age-related contribution shared between working memory, processing speed, and storage ability that was important for higher level cognition. These results pose a challenge for models of complex span performance that emphasize the importance of processing speed alone.
Shen, Ying; Li, Huan; Zhu, Wenzhe; Ho, Shih-Hsin; Yuan, Wenqiao; Chen, Jianfeng; Xie, Youping
2017-11-01
The feasibility of the bioremediation of cadmium (Cd) using a microalgal-biochar immobilized complex (MBIC) was investigated. Major operating parameters (e.g., pH, biosorbent dosage, initial Cd(II) concentration and microalgal-biochar ratio) were varied to compare the treatability of viable algae (Chlorella sp.), biochar and MBIC. The biosorption isotherms obtained by using algae or biochar were found to have satisfactory Langmuir predictions, while the best fitting adsorption isotherm model for MBIC was the Sips model. The maximum Cd(II) adsorption capacity of MBIC with a Chlorella sp.:biochar ratio of 2:3 (217.41 mg g⁻¹) was higher than that of Chlorella sp. (169.92 mg g⁻¹) or biochar (95.82 mg g⁻¹) alone. The pseudo-second-order model fitted the biosorption process of MBIC well (R² > 0.999). Moreover, zeta potential, SEM and FTIR studies revealed that electrostatic attraction, ion exchange and surface complexation were the main mechanisms responsible for Cd removal when using MBIC. Copyright © 2017 Elsevier Ltd. All rights reserved.
Feng, S; Ng, C W W; Leung, A K; Liu, H W
2017-10-01
Microbial aerobic methane oxidation in unsaturated landfill cover involves coupled water, gas and heat reactive transfer. The coupled process is complex and its influence on methane oxidation efficiency is not clear, especially in steep covers where spatial variations of water, gas and heat are significant. In this study, two-dimensional finite element numerical simulations were carried out to evaluate the performance of unsaturated sloping cover. The numerical model was calibrated using a set of flume model test data, and was then subsequently used for parametric study. A new method that considers transient changes of methane concentration during the estimation of the methane oxidation efficiency was proposed and compared against existing methods. It was found that a steeper cover had a lower oxidation efficiency due to enhanced downslope water flow, during which desaturation of soil promoted gas transport and hence landfill gas emission. This effect was magnified as the cover angle and landfill gas generation rate at the bottom of the cover increased. Assuming the steady-state methane concentration in a cover would result in a non-conservative overestimation of oxidation efficiency, especially when a steep cover was subjected to rainfall infiltration. By considering the transient methane concentration, the newly-modified method can give a more accurate oxidation efficiency. Copyright © 2017. Published by Elsevier Ltd.
Wu, Zheng-Guang; Jing, Yi-Ming; Lu, Guang-Zhao; Zhou, Jie; Zheng, You-Xuan; Zhou, Liang; Wang, Yi; Pan, Yi
2016-01-01
Due to their high quantum efficiency and wide range of emission colors, iridium(III) (Ir) complexes have been widely applied as guest materials for OLEDs (organic light-emitting diodes). In contrast to the well-developed Ir(III)-based red and green phosphorescent complexes, efficient blue emitters are rarely reported. As in the development of the LED, the absence of efficient and stable blue materials hinders the wide practical application of OLEDs. Inspired by this, we designed two novel ancillary ligands, phenyl(pyridin-2-yl)phosphinate (ppp) and dipyridinylphosphinate (dpp), for the efficient blue phosphorescent iridium complexes (dfppy)₂Ir(ppp) and (dfppy)₂Ir(dpp) (dfppy = 2-(2,4-difluorophenyl)pyridine) with good electron transport properties. The devices using the new iridium phosphors display excellent electroluminescence (EL) performance with a peak current efficiency of 58.78 cd/A, a maximum external quantum efficiency of 28.3%, a peak power efficiency of 52.74 lm/W and negligible efficiency roll-off ratios. The results demonstrate that iridium complexes with pyridinylphosphinate ligands are promising blue phosphorescent materials for OLEDs. PMID:27929124
Learning to Predict Combinatorial Structures
NASA Astrophysics Data System (ADS)
Vembu, Shankar
2009-12-01
The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.
Hybrid deterministic/stochastic simulation of complex biochemical systems.
Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina
2017-11-21
In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet lab experiments. To serve their purpose, computational models, however, should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively lengthening the time between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which is often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast reaction waiting time but smaller than the slow reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crossed the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
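The slow-reaction kernel named above, Gillespie's First Reaction Method, is compact enough to sketch. The toy below is our illustration only; it omits MoBioS's fast/moderate/slow partitioning and hysteresis switching, and simulates a single slow reaction A → B.

```python
import math
import random

def first_reaction_step(state, reactions, t):
    """One step of Gillespie's First Reaction Method.

    reactions: list of (propensity, update) pairs, where propensity(state)
    returns a rate a_j >= 0 and update(state) returns the new state.
    Draws a candidate waiting time for every reaction and fires the earliest.
    """
    waits = []
    for j, (propensity, _update) in enumerate(reactions):
        a = propensity(state)
        waits.append((random.expovariate(a) if a > 0 else math.inf, j))
    tau, j = min(waits)
    if math.isinf(tau):
        return state, math.inf          # no reaction can fire any more
    return reactions[j][1](state), t + tau

# Toy slow reaction A -> B with rate constant 0.5; state = (nA, nB).
reactions = [(lambda s: 0.5 * s[0], lambda s: (s[0] - 1, s[1] + 1))]
state, t = (100, 0), 0.0
while t < 5.0:
    state, t = first_reaction_step(state, reactions, t)
print(state)
```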
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency, our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal-oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
Efficient Geometric Sound Propagation Using Visibility Culling
NASA Astrophysics Data System (ADS)
Chandak, Anish
2011-07-01
Simulating propagation of sound can improve the sense of realism in interactive applications such as video games and can lead to better designs in engineering applications such as architectural acoustics. In this thesis, we present geometric sound propagation techniques which are faster than prior methods and map well to upcoming parallel multi-core CPUs. We model specular reflections by using the image-source method and model finite-edge diffraction by using the well-known Biot-Tolstoy-Medwin (BTM) model. We accelerate the computation of specular reflections by applying novel visibility algorithms, FastV and AD-Frustum, which compute visibility from a point. We accelerate finite-edge diffraction modeling by applying a novel visibility algorithm which computes visibility from a region. Our visibility algorithms are based on frustum tracing and exploit recent advances in fast ray-hierarchy intersections, data-parallel computations, and scalable, multi-core algorithms. The AD-Frustum algorithm adapts its computation to the scene complexity and allows small errors in computing specular reflection paths for higher computational efficiency. FastV and our visibility algorithm from a region are general, object-space, conservative visibility algorithms that together significantly reduce the number of image sources compared to other techniques while preserving the same accuracy. Our geometric propagation algorithms are an order of magnitude faster than prior approaches for modeling specular reflections and two to ten times faster for modeling finite-edge diffraction. Our algorithms are interactive, scale almost linearly on multi-core CPUs, and can handle large, complex, and dynamic scenes. We also compare the accuracy of our sound propagation algorithms with other methods. Once sound propagation is performed, it is desirable to listen to the propagated sound in interactive and engineering applications. We can generate smooth, artifact-free output audio signals by applying efficient audio-processing algorithms. We also present the first efficient audio-processing algorithm for scenarios with simultaneously moving source and moving receiver (MS-MR) which incurs less than 25% overhead compared to static source and moving receiver (SS-MR) or moving source and static receiver (MS-SR) scenario.
NASA Astrophysics Data System (ADS)
Allen, J. Icarus; Holt, Jason T.; Blackford, Jerry; Proctor, Roger
2007-12-01
Marine systems models are becoming increasingly complex and sophisticated, but far too little attention has been paid to model errors and the extent to which model outputs actually relate to ecosystem processes. Here we describe the application of summary error statistics to a complex 3D model (POLCOMS-ERSEM) run for the period 1988-1989 in the southern North Sea utilising information from the North Sea Project, which collected a wealth of observational data. We demonstrate that to understand model data misfit and the mechanisms creating errors, we need to use a hierarchy of techniques, including simple correlations, model bias, model efficiency, binary discriminator analysis and the distribution of model errors to assess model errors spatially and temporally. We also demonstrate that a linear cost function is an inappropriate measure of misfit. This analysis indicates that the model has some skill for all variables analysed. A summary plot of model performance indicates that model performance deteriorates as we move through the ecosystem from the physics, to the nutrients and plankton.
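The summary statistics referred to above are simple to compute once model output and observations are co-located; the sketch below uses generic textbook definitions (e.g. a Nash-Sutcliffe-type model efficiency and a cost function normalised by the spread of the observations), which may differ in detail from the paper's exact formulations.

```python
import numpy as np

def summary_error_stats(obs, mod):
    """Basic model-data misfit metrics for co-located observations and model values."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    bias = np.mean(mod - obs)
    corr = np.corrcoef(obs, mod)[0, 1]
    # Model efficiency: 1 is perfect, below 0 is worse than predicting the mean.
    eff = 1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2)
    # Cost function: mean absolute error scaled by the spread of the observations.
    cost = np.mean(np.abs(mod - obs)) / np.std(obs)
    return {"bias": bias, "correlation": corr, "efficiency": eff, "cost": cost}

print(summary_error_stats([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.4, 3.8]))
```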
A new mathematical modeling for pure parsimony haplotyping problem.
Feizabadi, R; Bagherian, M; Vaziri, H R; Salahi, M
2016-11-01
The pure parsimony haplotyping (PPH) problem is important in bioinformatics because rational haplotype inference plays an important role in the analysis of genetic data and the mapping of complex genetic diseases such as Alzheimer's disease and heart disorders. Haplotypes and genotypes are m-length sequences. Although several integer programming models have already been presented for the PPH problem, its NP-hardness has made those models ineffective on real instances, especially instances with many heterozygous sites. In this paper, we assign a corresponding number to each haplotype and genotype and, based on those numbers, we formulate a mixed integer programming model. Using numbers instead of sequences leads to a less complex model than previous ones, in that it contains neither constraints nor variables corresponding to heterozygous nucleotide sites. Experimental results confirm the efficiency of the new model in producing better solutions than two state-of-the-art haplotyping approaches. Copyright © 2016 Elsevier Inc. All rights reserved.
An efficient platform for genetic selection and screening of gene switches in Escherichia coli
Muranaka, Norihito; Sharma, Vandana; Nomura, Yoko; Yokobayashi, Yohei
2009-01-01
Engineered gene switches and circuits that can sense various biochemical and physical signals, perform computation, and produce predictable outputs are expected to greatly advance our ability to program complex cellular behaviors. However, rational design of gene switches and circuits that function in living cells is challenging due to the complex intracellular milieu. Consequently, most successful designs of gene switches and circuits have relied, to some extent, on high-throughput screening and/or selection from combinatorial libraries of gene switch and circuit variants. In this study, we describe a generic and efficient platform for selection and screening of gene switches and circuits in Escherichia coli from large libraries. The single-gene dual selection marker tetA was translationally fused to green fluorescent protein (gfpuv) via a flexible peptide linker and used as a dual selection and screening marker for laboratory evolution of gene switches. Single-cycle (sequential positive and negative selections) enrichment efficiencies of >7000 were observed in mock selections of model libraries containing functional riboswitches in liquid culture. The technique was applied to optimize various parameters affecting the selection outcome, and to isolate novel thiamine pyrophosphate riboswitches from a complex library. Artificial riboswitches with excellent characteristics were isolated that exhibit up to 58-fold activation as measured by fluorescent reporter gene assay. PMID:19190095
Weinberg, Marc S.; Michod, Richard E.
2017-01-01
In the RNA world hypothesis, complex, self-replicating ribozymes were essential. For the emergence of an RNA world, less is known about the early processes that accounted for the formation of complex, long catalysts from small, passively formed molecules. The functional role of small sequences has not been fully explored and, here, a possible role for smaller ligases is demonstrated. An established RNA polymerase model, the R18, was truncated from the 3′ end to generate smaller molecules. All the molecules were investigated for self-ligation functions with a set of oligonucleotide substrates without predesigned base pairing. The smallest molecule that exhibited self-ligation activity was a 40-nucleotide RNA. It also demonstrated the greatest functional flexibility, as it was more general in the kinds of substrates it ligated to itself, although its catalytic efficiency was the lowest. The largest ribozyme (R18) ligated substrates more selectively and with the greatest efficiency. With increase in size and predicted structural stability, self-ligation efficiency improved, while functional flexibility decreased. These findings reveal that molecular size could have increased from the activity of small ligases joining oligonucleotides to their own end. In addition, there is a size-associated molecular-level trade-off that could have impacted the evolution of RNA-based life. PMID:28989747
NASA Astrophysics Data System (ADS)
Dhar, Nisha; Weinberg, Marc S.; Michod, Richard E.; Durand, Pierre M.
2017-09-01
In the RNA world hypothesis, complex, self-replicating ribozymes were essential. For the emergence of an RNA world, less is known about the early processes that accounted for the formation of complex, long catalysts from small, passively formed molecules. The functional role of small sequences has not been fully explored and, here, a possible role for smaller ligases is demonstrated. An established RNA polymerase model, the R18, was truncated from the 3' end to generate smaller molecules. All the molecules were investigated for self-ligation functions with a set of oligonucleotide substrates without predesigned base pairing. The smallest molecule that exhibited self-ligation activity was a 40-nucleotide RNA. It also demonstrated the greatest functional flexibility, as it was more general in the kinds of substrates it ligated to itself, although its catalytic efficiency was the lowest. The largest ribozyme (R18) ligated substrates more selectively and with the greatest efficiency. With increase in size and predicted structural stability, self-ligation efficiency improved, while functional flexibility decreased. These findings reveal that molecular size could have increased from the activity of small ligases joining oligonucleotides to their own end. In addition, there is a size-associated molecular-level trade-off that could have impacted the evolution of RNA-based life.
Regional Patterns of Stress Transfer in the Ablation Zone of the Western Greenland Ice Sheet
NASA Astrophysics Data System (ADS)
Andrews, L. C.; Hoffman, M. J.; Neumann, T.; Catania, G. A.; Luethi, M. P.; Hawley, R. L.
2016-12-01
Current understanding of the subglacial system indicates that the seasonal evolution of ice flow is strongly controlled by the gradual upstream progression of an inefficient-to-efficient transition within the subglacial hydrologic system, followed by the reduction of melt and a downstream collapse of the efficient system. Using a spatiotemporally dense network of GPS-derived surface velocities from the Pâkitsoq Region of the western Greenland Ice Sheet, we find that this pattern of subglacial development is complicated by heterogeneous bed topography, resulting in complex patterns of ice flow. Following low-elevation melt onset, early melt season strain rate anomalies are dominated by regional extension, which then gives way to spatially expansive compression. However, once daily minimum ice velocities fall below the observed winter background velocities, an alternating spatial pattern of extension and compression prevails. This pattern of strain rate anomalies is correlated with changing basal topography and differences in the magnitude of diurnal surface ice speeds. Along subglacial ridges, diurnal variability in ice speed is large, suggestive of a mature, efficient subglacial system. In regions of subglacial lows, diurnal variability in ice velocity is relatively low, likely associated with a less developed efficient subglacial system. The observed pattern suggests that borehole observations and modeling results demonstrating the importance of longitudinal stress transfer at a single field location are likely widely applicable in our study area and other regions of the Greenland Ice Sheet with highly variable bed topography. Further, the complex pattern of ice flow and evidence of spatially extensive longitudinal stress transfer add to the body of work indicating that the bed character plays an important role in the development of the subglacial system; closely matching diurnal ice velocity patterns with subglacial models may be difficult without coupling these models to higher-order ice flow models.
Kalman and particle filtering methods for full vehicle and tyre identification
NASA Astrophysics Data System (ADS)
Bogdanski, Karol; Best, Matthew C.
2018-05-01
This paper considers identification of all significant vehicle handling dynamics of a test vehicle, including identification of a combined-slip tyre model, using only those sensors currently available on most vehicle controller area network buses. Using an appropriately simple but efficient model structure, all of the independent parameters are found from test vehicle data, with the resulting model accuracy demonstrated on independent validation data. The paper extends previous work on augmented Kalman Filter state estimators to concentrate wholly on parameter identification. It also serves as a review of three alternative filtering methods; identifying forms of the unscented Kalman filter, extended Kalman filter and particle filter are proposed and compared for effectiveness, complexity and computational efficiency. All three filters are suited to applications of system identification and the Kalman Filters can also operate in real-time in on-line model predictive controllers or estimators.
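To illustrate the general idea of filter-based parameter identification, the sketch below runs an extended Kalman filter on a deliberately simple scalar system, estimating the parameter a in x[k+1] = a·x[k] + w[k] by augmenting the state with a. This is our toy stand-in; the paper's vehicle and combined-slip tyre models are far richer.

```python
import numpy as np

rng = np.random.default_rng(1)

def ekf_param_id(ys, q_x=2.5e-3, q_a=1e-5, r=0.01):
    """Augmented-state EKF: z = [x, a], measurements y = x + v."""
    z = np.array([0.0, 0.5])                      # initial guesses for [x, a]
    P = np.eye(2)
    Q = np.diag([q_x, q_a])                       # process noise (a: slow random walk)
    H = np.array([[1.0, 0.0]])                    # only x is measured
    for y in ys:
        F = np.array([[z[1], z[0]], [0.0, 1.0]])  # Jacobian of [a*x, a] at z
        z = np.array([z[1] * z[0], z[1]])         # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                       # innovation variance
        K = P @ H.T / S                           # Kalman gain
        z = z + (K * (y - z[0])).ravel()          # update
        P = (np.eye(2) - K @ H) @ P
    return z[1]

# Synthetic data: true a = 0.9, process noise std 0.05, sensor noise std 0.1.
xs = [1.0]
for _ in range(500):
    xs.append(0.9 * xs[-1] + rng.normal(0, 0.05))
ys = [x + rng.normal(0, 0.1) for x in xs[1:]]
print(ekf_param_id(ys))                           # should settle near 0.9
```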
Input-output identification of controlled discrete manufacturing systems
NASA Astrophysics Data System (ADS)
Estrada-Vargas, Ana Paula; López-Mellado, Ernesto; Lesage, Jean-Jacques
2014-03-01
The automated construction of discrete event models from observations of external system's behaviour is addressed. This problem, often referred to as system identification, allows obtaining models of ill-known (or even unknown) systems. In this article, an identification method for discrete event systems (DESs) controlled by a programmable logic controller is presented. The method allows processing a large quantity of observed long sequences of input/output signals generated by the controller and yields an interpreted Petri net model describing the closed-loop behaviour of the automated DESs. The proposed technique allows the identification of actual complex systems because it is sufficiently efficient and well adapted to cope with both the technological characteristics of industrial controllers and data collection requirements. Based on polynomial-time algorithms, the method is implemented as an efficient software tool which constructs and draws the model automatically; an overview of this tool is given through a case study dealing with an automated manufacturing system.
Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck that limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km²) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Lewis A.; Habershon, Scott, E-mail: S.Habershon@warwick.ac.uk
Pigment-protein complexes (PPCs) play a central role in facilitating excitation energy transfer (EET) from light-harvesting antenna complexes to reaction centres in photosynthetic systems; understanding molecular organisation in these biological networks is key to developing better artificial light-harvesting systems. In this article, we combine quantum-mechanical simulations and a network-based picture of transport to investigate how chromophore organization and protein environment in PPCs impacts on EET efficiency and robustness. In a prototypical PPC model, the Fenna-Matthews-Olson (FMO) complex, we consider the impact on EET efficiency of both disrupting the chromophore network and changing the influence of (local and global) environmental dephasing. Surprisingly, we find a large degree of resilience to changes in both chromophore network and protein environmental dephasing, the extent of which is greater than previously observed; for example, FMO maintains EET when 50% of the constituent chromophores are removed, or when environmental dephasing fluctuations vary over two orders-of-magnitude relative to the in vivo system. We also highlight the fact that the influence of local dephasing can be strongly dependent on the characteristics of the EET network and the initial excitation; for example, initial excitations resulting in rapid coherent decay are generally insensitive to the environment, whereas the incoherent population decay observed following excitation at weakly coupled chromophores demonstrates a more pronounced dependence on dephasing rate as a result of the greater possibility of local exciton trapping. Finally, we show that the FMO electronic Hamiltonian is not particularly optimised for EET; instead, it is just one of many possible chromophore organisations which demonstrate a good level of EET transport efficiency following excitation at different chromophores. Overall, these robustness and efficiency characteristics are attributed to the highly connected nature of the chromophore network and the presence of multiple EET pathways, features which might easily be built into artificial photosynthetic systems.
An Improved Nested Sampling Algorithm for Model Selection and Assessment
NASA Astrophysics Data System (ADS)
Zeng, X.; Ye, M.; Wu, J.; WANG, D.
2017-12-01
The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE searches the parameter space from the low-likelihood region to the high-likelihood region gradually, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the many repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
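The core NSE loop is short enough to sketch. The toy below is our illustration, not the authors' code: it estimates the log-evidence of a sharply peaked Gaussian likelihood over a uniform prior, using plain rejection sampling for the local sampling step where an efficient implementation would use M-H or DREAMzs.

```python
import numpy as np

rng = np.random.default_rng(42)
SIGMA = 0.05

def log_like(theta):
    # Toy stand-in for a groundwater model's likelihood on a U(0,1)^2 prior.
    return -0.5 * np.sum(((theta - 0.5) / SIGMA) ** 2)

def nested_sampling(n_live=100, n_iter=600, dim=2):
    live = rng.random((n_live, dim))
    live_logL = np.array([log_like(t) for t in live])
    logZ, logX_prev = -np.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_logL)
        logX = -i / n_live                        # mean log prior-volume shrinkage
        logw = logX_prev + np.log1p(-np.exp(logX - logX_prev))
        logZ = np.logaddexp(logZ, live_logL[worst] + logw)
        logX_prev = logX
        # Local sampling: replace the worst point by a prior draw with higher
        # likelihood (plain rejection here; real codes use MCMC at this step).
        while True:
            cand = rng.random(dim)
            if log_like(cand) > live_logL[worst]:
                live[worst], live_logL[worst] = cand, log_like(cand)
                break
    # Credit the remaining live points with the final prior volume.
    lmax = live_logL.max()
    log_mean_L = lmax + np.log(np.mean(np.exp(live_logL - lmax)))
    return np.logaddexp(logZ, log_mean_L + logX_prev)

print(nested_sampling())   # analytic log-evidence here: log(2*pi*SIGMA**2) ~ -4.15
```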
NASA Astrophysics Data System (ADS)
Katchasuwanmanee, Kanet; Cheng, Kai; Bateman, Richard
2016-09-01
As energy efficiency is one of the keys to sustainability, the development of energy- and resource-efficient manufacturing systems is among the great challenges facing current industry. Meanwhile, the availability of advanced technological innovation has created more complex manufacturing systems that involve a large variety of processes and machines serving different functions. To extend the limited knowledge on energy-efficient scheduling, the research presented in this paper attempts to model the production schedule at the operation-process level by considering the balance among reduction of energy consumption in production, production work flow (productivity), and quality. An innovative systematic approach to manufacturing energy-resource efficiency is proposed with virtual simulation as a predictive modelling enabler, which provides real-time manufacturing monitoring, virtual displays and decision-making, and consequently an analytical, multidimensional correlation analysis of the interdependent relationships among energy consumption, work flow and quality errors. The regression analysis results demonstrate positive relationships between work flow and quality errors and between work flow and energy consumption. When production scheduling is controlled through optimization of work flow, quality errors and overall energy consumption, energy-resource efficiency can be achieved in production. Together, this proposed multidimensional modelling and analysis approach provides optimal conditions for production scheduling in the manufacturing system by taking account of production quality, energy consumption and resource efficiency, which can lead to key competitive advantages and sustainability of system operations in the industry.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.
Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
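The score-function identity underlying such estimators, d/dθ E[f(X)] = E[f(X) ∂θ log p(X; θ)], and the effect of centering can be checked on a one-parameter toy with a known answer; the example below is ours, while the paper treats full stochastic dynamics and Fisher information matrices.

```python
import numpy as np

rng = np.random.default_rng(7)

def lr_sensitivity(theta=2.0, n=200_000):
    """Centered likelihood-ratio sensitivity estimator on a solvable toy.

    X ~ Exponential(rate=theta), f(x) = x, so d/dtheta E[f(X)] = -1/theta**2.
    """
    x = rng.exponential(1.0 / theta, n)
    score = 1.0 / theta - x                        # d/dtheta log p(x; theta)
    plain = np.mean(x * score)
    centered = np.mean((x - x.mean()) * score)     # same mean, lower variance
    return plain, centered, -1.0 / theta**2        # two estimates vs exact value

print(lr_sensitivity())
```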
Cocco, Simona; Leibler, Stanislas; Monasson, Rémi
2009-01-01
Complexity of neural systems often makes impracticable explicit measurements of all interactions between their constituents. Inverse statistical physics approaches, which infer effective couplings between neurons from their spiking activity, have been so far hindered by their computational complexity. Here, we present 2 complementary, computationally efficient inverse algorithms based on the Ising and “leaky integrate-and-fire” models. We apply those algorithms to reanalyze multielectrode recordings in the salamander retina in darkness and under random visual stimulus. We find strong positive couplings between nearby ganglion cells common to both stimuli, whereas long-range couplings appear under random stimulus only. The uncertainty on the inferred couplings due to limitations in the recordings (duration, small area covered on the retina) is discussed. Our methods will allow real-time evaluation of couplings for large assemblies of neurons. PMID:19666487
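For orientation, the simplest member of this family of inverse approaches, naive mean-field inversion of the equal-time correlation matrix, fits in a few lines; under that approximation, (C⁻¹)ᵢⱼ = −Jᵢⱼ for i ≠ j. This is our illustration on surrogate data, not the paper's Ising or integrate-and-fire algorithms.

```python
import numpy as np

def nmf_couplings(spikes):
    """Naive mean-field inverse Ising couplings from binarized activity.

    spikes: (T, N) array of +/-1 states (time bins x neurons).
    """
    C = np.cov(spikes, rowvar=False)      # equal-time covariance matrix
    J = -np.linalg.inv(C)                 # nMF: couplings from -C^-1 off-diagonal
    np.fill_diagonal(J, 0.0)              # self-couplings are not defined
    return J

# Surrogate data (independent neurons, so inferred couplings should be ~0):
rng = np.random.default_rng(3)
spikes = np.where(rng.random((5000, 20)) < 0.3, 1, -1)
J = nmf_couplings(spikes)
print(J.shape, float(np.abs(J).max()))
```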
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
NASA Astrophysics Data System (ADS)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-01
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc
2016-03-14
We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
Remontet, Laurent; Uhry, Zoé; Bossard, Nadine; Iwaz, Jean; Belot, Aurélien; Danieli, Coraline; Charvat, Hadrien; Roche, Laurent
2018-01-01
Cancer survival trend analyses are essential to describe accurately the way medical practices impact patients' survival according to the year of diagnosis. To this end, survival models should be able to account simultaneously for non-linear and non-proportional effects and for complex interactions between continuous variables. However, in the statistical literature, there is no consensus yet on how to build such models that should be flexible but still provide smooth estimates of survival. In this article, we tackle this challenge by smoothing the complex hypersurface (time since diagnosis, age at diagnosis, year of diagnosis, and mortality hazard) using a multidimensional penalized spline built from the tensor product of the marginal bases of time, age, and year. Considering this penalized survival model as a Poisson model, we assess the performance of this approach in estimating the net survival with a comprehensive simulation study that reflects simple and complex realistic survival trends. The bias was generally small and the root mean squared error was good and often similar to that of the true model that generated the data. This parametric approach offers many advantages and interesting prospects (such as forecasting) that make it an attractive and efficient tool for survival trend analyses.
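To make the structure of such a model concrete, a generic form (our notation, not taken from the paper) of a tensor-product penalized spline for the log mortality hazard, with marginal B-spline bases in time since diagnosis t, age a, and year y, is

$$\log \lambda(t,a,y) = \sum_{i,j,k} \theta_{ijk}\, B_i(t)\, B_j(a)\, B_k(y), \qquad \ell_p(\theta) = \ell(\theta) - \tfrac{1}{2} \sum_d \lambda_d\, \theta^{\top} S_d\, \theta,$$

where ℓ is the Poisson log-likelihood obtained by splitting follow-up into small intervals, the S_d are marginal difference-penalty matrices, and the smoothing parameters λ_d set the trade-off between fit and smoothness.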
A model of axonal transport drug delivery
NASA Astrophysics Data System (ADS)
Kuznetsov, Andrey V.
2012-04-01
In this paper a model of targeted drug delivery by means of active (motor-driven) axonal transport is developed. The model is motivated by recent experimental research by Filler et al. (A.G. Filler, G.T. Whiteside, M. Bacon, M. Frederickson, F.A. Howe, M.D. Rabinowitz, A.J. Sokoloff, T.W. Deacon, C. Abell, R. Munglani, J.R. Griffiths, B.A. Bell, A.M.L. Lever, Tri-partite complex for axonal transport drug delivery achieves pharmacological effect, BMC Neuroscience 11 (2010) 8) that reported synthesis and pharmacological efficiency tests of a tri-partite complex designed for axonal transport drug delivery. The developed model accounts for two populations of pharmaceutical agent complexes (PACs): PACs that are transported retrogradely by dynein motors and PACs that are accumulated in the axon at the Nodes of Ranvier. The transitions between these two populations of PACs are described by first-order reactions. An analytical solution of the coupled system of transient equations describing conservation of these two populations of PACs is obtained by using the Laplace transform. Numerical results for various combinations of parameter values are presented and their physical significance is discussed.
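A plausible form of the coupled conservation equations described above (symbols assumed for illustration, not taken from the paper), with n_r(x,t) the linear density of PACs transported retrogradely at dynein velocity v, n_0(x,t) the PACs accumulated at the Nodes of Ranvier, and first-order capture and release rates k_1 and k_2, is

$$\frac{\partial n_r}{\partial t} + v\,\frac{\partial n_r}{\partial x} = -k_1 n_r + k_2 n_0, \qquad \frac{\partial n_0}{\partial t} = k_1 n_r - k_2 n_0.$$

A linear pair of this form is exactly the kind of system that yields to the Laplace-transform treatment mentioned in the abstract.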
BoolNet--an R package for generation, reconstruction and analysis of Boolean networks.
Müssel, Christoph; Hopfensitz, Martin; Kestler, Hans A
2010-05-15
As the study of information processing in living cells moves from individual pathways to complex regulatory networks, mathematical models and simulation become indispensable tools for analyzing the complex behavior of such networks and can provide deep insights into the functioning of cells. The dynamics of gene expression, for example, can be modeled with Boolean networks (BNs). These are mathematical models of low complexity, but have the advantage of being able to capture essential properties of gene-regulatory networks. However, current implementations of BNs only focus on different sub-aspects of this model and do not allow for a seamless integration into existing preprocessing pipelines. BoolNet efficiently integrates methods for synchronous, asynchronous and probabilistic BNs. This includes reconstructing networks from time series, generating random networks, robustness analysis via perturbation, Markov chain simulations, and identification and visualization of attractors. The package BoolNet is freely available from the R project at http://cran.r-project.org/ or http://www.informatik.uni-ulm.de/ni/mitarbeiter/HKestler/boolnet/ under Artistic License 2.0. hans.kestler@uni-ulm.de Supplementary data are available at Bioinformatics online.
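The model class BoolNet operates on can be illustrated compactly; the Python sketch below mirrors the idea of synchronous Boolean dynamics and attractor search on a toy two-gene toggle switch, and is not a rendering of BoolNet's R API.

```python
from itertools import product

# Toy two-gene toggle switch: each gene represses the other.
rules = [
    lambda s: int(not s[1]),   # next value of gene 0
    lambda s: int(not s[0]),   # next value of gene 1
]

def step(state):
    return tuple(f(state) for f in rules)

def attractor(state):
    """Iterate the synchronous update until a state repeats; return the cycle."""
    seen, trajectory = {}, []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state)
    cycle = tuple(trajectory[seen[state]:])
    k = cycle.index(min(cycle))        # canonical rotation so equal cycles match
    return cycle[k:] + cycle[:k]

# Exhaustive search over all initial states, grouped into basins of attraction:
basins = {}
for s in product((0, 1), repeat=len(rules)):
    basins.setdefault(attractor(s), []).append(s)
for att, basin in basins.items():
    print("attractor", att, "basin size", len(basin))
```

On this toggle switch the search finds the two antisymmetric fixed points plus one period-2 cycle, the expected picture for mutual repression.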
Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan
2014-10-01
It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there is not a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
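Two ingredients of such a compressive approach can be sketched compactly: a very sparse random measurement matrix that compresses high-dimensional features, and an online update of the Gaussian naive Bayes parameters. The sketch below is a schematic reading of these steps; the sparsity s, learning rate lam, and sizes are illustrative, and it omits the multiscale feature construction and the coarse-to-fine search.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_measurement_matrix(n_feat, n_dim, s=None):
    # Very sparse random projection: entries are +/-sqrt(s) with probability
    # 1/(2s) each and 0 otherwise, so most of the matrix costs nothing.
    s = s or max(n_dim // 4, 1)
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(n_feat, n_dim),
                      p=[0.5 / s, 1.0 - 1.0 / s, 0.5 / s])

def update_params(mu, sigma, z, lam=0.85):
    # Running update of per-feature Gaussian parameters from new samples z.
    mu_new, sig_new = z.mean(axis=0), z.std(axis=0) + 1e-6
    sigma = np.sqrt(lam * sigma**2 + (1 - lam) * sig_new**2
                    + lam * (1 - lam) * (mu - mu_new)**2)
    return lam * mu + (1 - lam) * mu_new, sigma

R = sparse_measurement_matrix(n_feat=50, n_dim=1024)
z_pos = rng.random((30, 1024)) @ R.T          # compressed foreground samples
mu, sigma = update_params(np.zeros(50), np.ones(50), z_pos)
```

Classification then compares per-feature Gaussian likelihoods of the foreground and background models, summed over features under the naive independence assumption.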
A mixed integer bi-level DEA model for bank branch performance evaluation by Stackelberg approach
NASA Astrophysics Data System (ADS)
Shafiee, Morteza; Lotfi, Farhad Hosseinzadeh; Saleh, Hilda; Ghaderi, Mehdi
2016-03-01
One of the most complicated decision-making problems for managers is the evaluation of bank performance, which involves various criteria. There are many studies on bank efficiency evaluation by network DEA in the literature. These studies do not focus on multi-level networks. Wu (Eur J Oper Res 207:856-864, 2010) first proposed a bi-level structure for cost efficiency. In this model, multi-level programming and cost efficiency were used, and a nonlinear program was solved. In this paper, we focus on the multi-level structure and propose a bi-level DEA model that can be solved by linear programming. Moreover, we significantly improve the way the optimum solution is obtained in comparison with the work by Wu (2010) by converting the NP-hard nonlinear program into a mixed integer linear program. This study uses a bi-level programming data envelopment analysis model that embodies internal structure with Stackelberg-game relationships to evaluate the performance of a banking chain. The perspective of decentralized decisions is taken in this paper to cope with complex interactions in the banking chain. The results derived from bi-level programming DEA can provide valuable insights and detailed information for managers to help them evaluate the performance of the banking chain as a whole using Stackelberg-game relationships. Finally, the model is applied to an Iranian bank to evaluate cost efficiency.
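For readers unfamiliar with DEA, the standard input-oriented CCR envelopment model, the single-level building block that the paper extends to a bi-level Stackelberg setting, can be solved with an off-the-shelf LP solver. The sketch below (illustrative data, not the paper's bank branches) minimizes the radial input contraction theta for one unit at a time.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    # X: inputs (m x n units), Y: outputs (s x n units); returns theta* of unit o.
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # minimize theta
    A_in = np.hstack([-X[:, [o]], X])             # sum_j lambda_j x_j <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # sum_j lambda_j y_j >= y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

X = np.array([[2., 4., 3., 5.],                   # two inputs, four units
              [3., 1., 2., 4.]])
Y = np.array([[1., 1., 1., 1.]])                  # one output
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```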
Signal transmission competing with noise in model excitable brains
NASA Astrophysics Data System (ADS)
Marro, J.; Mejias, J. F.; Pinamonti, G.; Torres, J. J.
2013-01-01
This is a short review of recent studies in our group on how weak signals may efficiently propagate in a system with noise-induced excitation-inhibition competition which adapts to the activity at short-time scales and thus induces excitable conditions. Our numerical results on simple mathematical models should hold for many complex networks in nature, including some brain cortical areas. In particular, they serve us here to interpret available psycho-technical data.
From isolated light-harvesting complexes to the thylakoid membrane: a single-molecule perspective
NASA Astrophysics Data System (ADS)
Gruber, J. Michael; Malý, Pavel; Krüger, Tjaart P. J.; Grondelle, Rienk van
2018-01-01
The conversion of solar radiation to chemical energy in plants and green algae takes place in the thylakoid membrane. This amphiphilic environment hosts a complex arrangement of light-harvesting pigment-protein complexes that absorb light and transfer the excitation energy to photochemically active reaction centers. This efficient light-harvesting capacity is moreover tightly regulated by a photoprotective mechanism called non-photochemical quenching to avoid the stress-induced destruction of the catalytic reaction center. In this review we provide an overview of single-molecule fluorescence measurements on plant light-harvesting complexes (LHCs) of varying sizes with the aim of bridging the gap between the smallest isolated complexes, which have been well-characterized, and the native photosystem. The smallest complexes contain only a small number (10-20) of interacting chlorophylls, while the native photosystem contains dozens of protein subunits and many hundreds of connected pigments. We discuss the functional significance of conformational dynamics, the lipid environment, and the structural arrangement of this fascinating nano-machinery. The described experimental results can be utilized to build mathematical-physical models in a bottom-up approach, which can then be tested on larger in vivo systems. The results also clearly showcase the general property of biological systems to utilize the same system properties for different purposes. In this case it is the regulated conformational flexibility that allows LHCs to switch between efficient light-harvesting and a photoprotective function.
Creating a Complex Measurement Model Using Evidence Centered Design.
ERIC Educational Resources Information Center
Williamson, David M.; Bauer, Malcom; Steinberg, Linda S.; Mislevy, Robert J.; Behrens, John T.
In computer-based simulations meant to support learning, students must bring a wide range of relevant knowledge, skills, and abilities to bear jointly as they solve meaningful problems in a learning domain. To function efficiently as an assessment, a simulation system must also be able to evoke and interpret observable evidence about targeted…
USDA-ARS?s Scientific Manuscript database
Increasing water use efficiency (WUE) is one of the oldest goals in agricultural sciences, yet it is still not fully understood and achieved due to the complexity of soil-weather-management interactions. System models that quantify these interactions are increasingly used for optimizing crop WUE, es...
Human Systems Integration at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
McCandless, Jeffrey
2017-01-01
The Human Systems Integration Division focuses on the design and operations of complex aerospace systems through analysis, experimentation and modeling. With over a dozen labs and over 120 people, the division conducts research to improve safety, efficiency and mission success. Areas of investigation include applied vision research which will be discussed during this seminar.
A Costing Model for Project-Based Information and Communication Technology Systems
ERIC Educational Resources Information Center
Stewart, Brian; Hrenewich, Dave
2009-01-01
A major difficulty facing IT departments is ensuring that the projects and activities to which information and communications technologies (ICT) resources are committed represent an effective, economic, and efficient use of those resources. This complex problem has no single answer. To determine effective use requires, at the least, a…
Chen, Nan; Majda, Andrew J
2017-12-05
Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text] The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions even with orders in the millions and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples. Copyright © 2017 the Author(s). Published by PNAS.
NASA Astrophysics Data System (ADS)
Tortora, Maxime M. C.; Doye, Jonathan P. K.
2017-12-01
We detail the application of bounding volume hierarchies to accelerate second-virial evaluations for arbitrary complex particles interacting through hard and soft finite-range potentials. This procedure, based on the construction of neighbour lists through the combined use of recursive atom-decomposition techniques and binary overlap search schemes, is shown to scale sub-logarithmically with particle resolution in the case of molecular systems with high aspect ratios. Its implementation within an efficient numerical and theoretical framework based on classical density functional theory enables us to investigate the cholesteric self-assembly of a wide range of experimentally relevant particle models. We illustrate the method through the determination of the cholesteric behavior of hard, structurally resolved twisted cuboids, and report quantitative evidence of the long-predicted phase handedness inversion with increasing particle thread angles near the phenomenological threshold value of 45°. Our results further highlight the complex relationship between microscopic structure and helical twisting power in such model systems, which may be attributed to subtle geometric variations of their chiral excluded-volume manifold.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil
2016-04-29
We develop a new approach for solving the nonlinear Richards’ equation arising in variably saturated flow modeling. The growing complexity of geometric models for simulation of subsurface flows leads to the necessity of using unstructured meshes and advanced discretization methods. Typically, a numerical solution is obtained by first discretizing PDEs and then solving the resulting system of nonlinear discrete equations with a Newton-Raphson-type method. Efficiency and robustness of the existing solvers rely on many factors, including an empirical quality control of intermediate iterates, the complexity of the employed discretization method and a customized preconditioner. We propose and analyze a new preconditioning strategy that is based on a stable discretization of the continuum Jacobian. We show with numerical experiments for challenging problems in subsurface hydrology that this new preconditioner improves convergence of the existing Jacobian-free solvers 3-20 times. Furthermore, we show that the Picard method with this preconditioner becomes a more efficient nonlinear solver than a few widely used Jacobian-free solvers.
Dissecting a complex chemical stress: chemogenomic profiling of plant hydrolysates
Skerker, Jeffrey M; Leon, Dacia; Price, Morgan N; Mar, Jordan S; Tarjan, Daniel R; Wetmore, Kelly M; Deutschbauer, Adam M; Baumohl, Jason K; Bauer, Stefan; Ibáñez, Ana B; Mitchell, Valerie D; Wu, Cindy H; Hu, Ping; Hazen, Terry; Arkin, Adam P
2013-01-01
The efficient production of biofuels from cellulosic feedstocks will require the efficient fermentation of the sugars in hydrolyzed plant material. Unfortunately, plant hydrolysates also contain many compounds that inhibit microbial growth and fermentation. We used DNA-barcoded mutant libraries to identify genes that are important for hydrolysate tolerance in both Zymomonas mobilis (44 genes) and Saccharomyces cerevisiae (99 genes). Overexpression of a Z. mobilis tolerance gene of unknown function (ZMO1875) improved its specific ethanol productivity 2.4-fold in the presence of miscanthus hydrolysate. However, a mixture of 37 hydrolysate-derived inhibitors was not sufficient to explain the fitness profile of plant hydrolysate. To deconstruct the fitness profile of hydrolysate, we profiled the 37 inhibitors against a library of Z. mobilis mutants and we modeled fitness in hydrolysate as a mixture of fitness in its components. By examining outliers in this model, we identified methylglyoxal as a previously unknown component of hydrolysate. Our work provides a general strategy to dissect how microbes respond to a complex chemical stress and should enable further engineering of hydrolysate tolerance. PMID:23774757
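The mixture-modelling step lends itself to a compact sketch: express each gene's fitness in hydrolysate as a non-negative combination of its fitness values for the individual inhibitors, then flag poorly explained genes as pointers to unmodelled components (as methylglyoxal was identified). Shapes, noise level, and the outlier threshold below are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
F = rng.normal(size=(2000, 37))                  # gene x inhibitor fitness profiles
w_true = np.abs(rng.normal(size=37)) / 37
f_hyd = F @ w_true + rng.normal(scale=0.05, size=2000)   # synthetic hydrolysate fitness

w, _ = nnls(F, f_hyd)                            # non-negative mixture weights
residual = f_hyd - F @ w
outliers = np.flatnonzero(np.abs(residual) > 3 * residual.std())
```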
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
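The core object of VARS, the directional variogram of the model response along each factor, is simple to estimate by Monte Carlo. The sketch below conveys only that core idea on the unit hypercube; the full framework adds structured star-based sampling and integrated (IVARS) indices.

```python
import numpy as np

def directional_variogram(f, ndim, h_values, n_base=2000, seed=0):
    # gamma_k(h) = 0.5 * E[(f(x + h e_k) - f(x))^2], estimated factor by factor.
    rng = np.random.default_rng(seed)
    X = rng.random((n_base, ndim))
    gamma = np.zeros((ndim, len(h_values)))
    for k in range(ndim):
        for j, h in enumerate(h_values):
            Xh = X.copy()
            Xh[:, k] = (Xh[:, k] + h) % 1.0      # wrap to stay inside the cube
            gamma[k, j] = 0.5 * np.mean((f(Xh) - f(X)) ** 2)
    return gamma

# Toy response: factor 0 matters far more than factor 1.
g = directional_variogram(lambda X: np.sin(6 * X[:, 0]) + 0.1 * X[:, 1],
                          ndim=2, h_values=[0.02, 0.1, 0.3])
print(g)
```

Steep growth of gamma_k(h) at small h signals strong small-scale sensitivity to factor k; the variogram's behaviour across the full range of h is what subsumes derivative- and variance-based indices as special cases.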
NASA Astrophysics Data System (ADS)
Hartin, C.; Lynch, C.; Kravitz, B.; Link, R. P.; Bond-Lamberty, B. P.
2017-12-01
Typically, uncertainty quantification of internal variability relies on large ensembles of climate model runs under multiple forcing scenarios or perturbations in a parameter space. Computationally efficient, standard pattern scaling techniques only generate one realization and do not capture the complicated dynamics of the climate system (i.e., stochastic variations with a frequency-domain structure). In this study, we generate large ensembles of climate data with spatially and temporally coherent variability across a subselection of Coupled Model Intercomparison Project Phase 5 (CMIP5) models. First, for each CMIP5 model we apply a pattern emulation approach to derive the model response to external forcing. We take all the spatial and temporal variability that is not explained by the emulator and decompose it into non-physically based structures through the use of empirical orthogonal functions (EOFs). Then, we perform a Fourier decomposition of the EOF projection coefficients to capture the input fields' temporal autocorrelation so that our new emulated patterns reproduce the proper timescales of climate response and "memory" in the climate system. Through this 3-step process, we derive computationally efficient climate projections consistent with CMIP5 model trends and modes of variability, which address a number of deficiencies inherent in the ability of pattern scaling to reproduce complex climate model behavior.
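A minimal sketch of the emulation idea, assuming a residual field from which the forced response has already been removed: decompose it into EOFs, phase-randomize the Fourier spectrum of each projection coefficient so surrogates preserve the modes' spatial patterns and temporal power spectra, and recombine. This illustrates the three steps only; it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_variability(resid, n_modes=5):
    # resid: (time x space) residual field after removing the emulated response.
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]            # temporal projection coefficients
    eofs = Vt[:n_modes]                           # spatial patterns (EOFs)
    new_pcs = np.empty_like(pcs)
    for k in range(n_modes):
        spec = np.fft.rfft(pcs[:, k])
        phases = rng.uniform(0, 2 * np.pi, spec.size)
        phases[0] = 0.0                           # keep the mean untouched
        new_pcs[:, k] = np.fft.irfft(np.abs(spec) * np.exp(1j * phases),
                                     n=pcs.shape[0])
    return new_pcs @ eofs                         # one new realization of variability

resid = rng.normal(size=(120, 500))               # placeholder residual field
realization = surrogate_variability(resid)
```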
Optimization of controlled processes in combined-cycle plant (new developments and researches)
NASA Astrophysics Data System (ADS)
Tverskoy, Yu S.; Muravev, I. K.
2017-11-01
All modern complex technical systems, including power units of TPP and nuclear power plants, work in the system-forming structure of multifunctional APCS. The development of the modern APCS mathematical support allows bringing the automation degree to the solution of complex optimization problems of equipment heat-mass-exchange processes in real time. The difficulty of efficient management of a binary power unit is related to the need to solve jointly at least three problems. The first problem is related to the physical issues of combined-cycle technologies. The second problem is determined by the criticality of the CCGT operation to changes in the regime and climatic factors. The third problem is related to a precise description of a vector of controlled coordinates of a complex technological object. To obtain a joint solution of this complex of interconnected problems, the methodology of generalized thermodynamic analysis, methods of the theory of automatic control and mathematical modeling are used. In the present report, results of new developments and studies are shown. These results allow improving the principles of process control and the automatic control systems structural synthesis of power units with combined-cycle plants that provide attainable technical and economic efficiency and operational reliability of equipment.
Learning Efficient Sparse and Low Rank Models.
Sprechmann, P; Bronstein, A M; Sapiro, G
2015-09-01
Parsimony, including sparsity and low rank, has been shown to successfully model data in numerous machine learning and signal processing tasks. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with parsimony-promoting terms. The inherently sequential structure and data-dependent complexity and latency of iterative optimization constitute a major limitation in many applications requiring real-time performance or involving large-scale data. Another limitation encountered by these modeling techniques is the difficulty of their inclusion in discriminative learning scenarios. In this work, we propose to move the emphasis from the model to the pursuit algorithm, and develop a process-centric view of parsimonious modeling, in which a learned deterministic fixed-complexity pursuit process is used in lieu of iterative optimization. We show a principled way to construct learnable pursuit process architectures for structured sparse and robust low rank models, derived from the iteration of proximal descent algorithms. These architectures learn to approximate the exact parsimonious representation at a fraction of the complexity of the standard optimization methods. We also show that appropriate training regimes allow parsimonious models to be naturally extended to discriminative settings. State-of-the-art results are demonstrated on several challenging problems in image and audio processing with several orders of magnitude speed-up compared to the exact optimization algorithms.
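The process-centric view can be made concrete by truncating a proximal descent algorithm to a fixed number of layers. The sketch below unrolls ISTA for sparse coding; in learned variants (LISTA-style networks) the matrices W and S and the thresholds are trained, whereas here they are simply derived from the dictionary.

```python
import numpy as np

def soft(x, t):
    # Proximal operator of the l1 norm (soft thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_ista(D, x, lam, n_layers=10):
    # Fixed-complexity pursuit: exactly n_layers proximal-gradient steps.
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    W = D.T / L
    S = np.eye(D.shape[1]) - (D.T @ D) / L
    z = soft(W @ x, lam / L)
    for _ in range(n_layers - 1):
        z = soft(S @ z + W @ x, lam / L)
    return z

D = np.random.default_rng(0).normal(size=(64, 256))   # random dictionary
x = D[:, :5] @ np.ones(5)                              # signal with a 5-atom support
z_hat = unrolled_ista(D, x, lam=0.1)
```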
Multi-scale modeling of tsunami flows and tsunami-induced forces
NASA Astrophysics Data System (ADS)
Qin, X.; Motley, M. R.; LeVeque, R. J.; Gonzalez, F. I.
2016-12-01
The modeling of tsunami flows and tsunami-induced forces in coastal communities with the incorporation of the constructed environment is challenging for many numerical modelers because of the scale and complexity of the physical problem. A two-dimensional (2D) depth-averaged model can be efficient for modeling waves offshore but may not be accurate enough to predict the complex flow, with its transient variation in the vertical direction, around the constructed environment on land. On the other hand, using a more complex three-dimensional model is much more computationally expensive and can become impractical due to the size of the problem and the meshing requirements near the built environment. In this study, a 2D depth-integrated model and a 3D Reynolds-Averaged Navier-Stokes (RANS) model are built to model a 1:50 model-scale, idealized community, representative of Seaside, OR, USA, for which existing experimental data are available for comparison. Numerical results from the two models are compared with each other as well as with experimental measurements. Both models predict the flow parameters (water level, velocity, and momentum flux in the vicinity of the buildings) accurately, in general, except for the time period near the initial impact, where the depth-averaged model can fail to capture the complexities in the flow. Forces predicted using direct integration of the predicted pressure on structural surfaces from the 3D model and using the momentum flux from the 2D model with the constructed environment are compared, which indicates that force prediction from the 2D model is not always reliable in such a complicated case. Force predictions from integration of the pressure are also compared with forces predicted from bare-earth momentum flux calculations to reveal the importance of incorporating the constructed environment in force prediction models.
Numerical investigation of cryogen re-gasification in a plate heat exchanger
NASA Astrophysics Data System (ADS)
Malecha, Ziemowit; Płuszka, Paweł; Brenk, Arkadiusz
2017-12-01
The efficient re-gasification of cryogen is a crucial process in many cryogenic installations. It is especially important in the case of LNG evaporators used in stationary and mobile applications (e.g. marine and land transport). Other gases, like nitrogen or argon, can be obtained at the highest purity after re-gasification from their liquid states. Plate heat exchangers (PHE) are characterized by high efficiency. Application of PHE to liquid gas vaporization processes can be beneficial. PHE design and optimization can be significantly supported by numerical modelling. Such calculations are very challenging due to very high computational demands and the complexity of phase change modelling. In the present work, a simplified mathematical model of a two-phase flow with phase change was introduced. To ensure fast calculations, a simplified two-dimensional (2D) numerical model of a real PHE was developed. It was validated with experimental measurements and finally used for LNG re-gasification modelling. The proposed numerical model proved to be orders of magnitude faster than its full 3D original.
Vegter, Riemer J K; Hartog, Johanneke; de Groot, Sonja; Lamoth, Claudine J; Bekker, Michel J; van der Scheer, Jan W; van der Woude, Lucas H V; Veeger, Dirkjan H E J
2015-03-10
To propel in an energy-efficient manner, handrim wheelchair users must learn to control the bimanually applied forces onto the rims, preserving both speed and direction of locomotion. Previous studies have found an increase in mechanical efficiency due to motor learning associated with changes in propulsion technique, but it is unclear in what way the propulsion technique impacts the load on the shoulder complex. The purpose of this study was to evaluate mechanical efficiency, propulsion technique and load on the shoulder complex during the initial stage of motor learning. Fifteen naive able-bodied participants received 12 minutes of uninstructed wheelchair practice on a motor-driven treadmill, consisting of three 4-minute blocks separated by two minutes of rest. Practice was performed at a fixed belt speed (v = 1.1 m/s) and constant low-intensity power output (0.2 W/kg). Energy consumption, kinematics and kinetics of propulsion technique were continuously measured. The Delft Shoulder Model was used to calculate net joint moments, muscle activity and glenohumeral reaction force. With practice, mechanical efficiency increased and propulsion technique changed, reflected by a reduced push frequency and increased work per push, performed over a larger contact angle, with more tangentially applied force and reduced power losses before and after each push. Contrary to our expectations, the above-mentioned propulsion technique changes were found together with an increased load on the shoulder complex, reflected by higher net moments, a higher total muscle power and higher peak and mean glenohumeral reaction forces. It appears that the early stages of motor learning in handrim wheelchair propulsion are indeed associated with improved technique and efficiency due to optimization of the kinematics and dynamics of the upper extremity. This process comes at the cost of an increased muscular effort and mechanical loading of the shoulder complex. This seems to be associated with an unchanged, stable function of the trunk and could be due to the early learning phase, where participants still have to learn to effectively use the full movement amplitude available within the wheelchair-user combination. Apparently, whole-body energy efficiency has priority over mechanical loading in the early stages of learning to propel a handrim wheelchair.
Pinto, Paula S; Lanza, Giovani D; Souza, Mayra N; Ardisson, José D; Lago, Rochel M
2018-03-01
In this work, iron oxide in the red mud (RM) waste was restructured to produce mesopores with surface [FeOx(OH)y] sites for the efficient complexation/adsorption of β-lactam antibiotics. Red mud, composed mainly of hematite, was restructured by an acid/base process followed by a thermal treatment at 150-450 °C (MRM150, MRM200, MRM300, and MRM450) and fully characterized by Mössbauer, XRD, FTIR, BET, SEM, CHN, and thermogravimetric analyses. The characterization data showed a highly dispersed Fe3+ oxyhydroxy phase, which was thermally dehydrated to a mesoporous α-Fe2O3 with surface areas in the range of 141-206 m2 g-1. These materials showed high efficiencies (21-29 mg g-1) for the adsorption of the β-lactam antibiotics amoxicillin, cephalexin, and ceftriaxone, and the data were better fitted by the Langmuir isotherm model (R2 = 0.9993) with a monolayer adsorption capacity of ca. 39 mg g-1 for amoxicillin. Experiments such as competitive adsorption in the presence of phosphate and H2O2 decomposition suggested that the β-lactam antibiotics might be interacting with surface [FeOx(OH)y] species by a complexation process. Moreover, the OH/Fe ratio, BET surface area, and porosity indicated that this complexation occurs especially on [FeOx(OH)y]surf sites contained in the mesopore space.
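Fitting the Langmuir isotherm reported above is a short nonlinear least-squares exercise. The data points below are invented for illustration; only the functional form matches the analysis in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C_e, q_max, K_L):
    # Equilibrium uptake q_e (mg/g) vs. equilibrium concentration C_e (mg/L).
    return q_max * K_L * C_e / (1.0 + K_L * C_e)

C_e = np.array([2., 5., 10., 20., 40., 80.])      # illustrative data
q_e = np.array([8., 15., 22., 29., 34., 37.])
(q_max, K_L), _ = curve_fit(langmuir, C_e, q_e, p0=[40.0, 0.05])
print(q_max, K_L)                                  # monolayer capacity and affinity
```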
Highly Physical Solar Radiation Pressure Modeling During Penumbra Transitions
NASA Astrophysics Data System (ADS)
Robertson, Robert V.
Solar radiation pressure (SRP) is one of the major non-gravitational forces acting on spacecraft. Acceleration by radiation pressure depends on the radiation flux; on spacecraft shape, attitude, and mass; and on the optical properties of the spacecraft surfaces. Precise modeling of SRP is needed for dynamic satellite orbit determination, space mission design and control, and processing of data from space-based science instruments. During Earth penumbra transitions, sunlight passes through Earth's lower atmosphere and, in the process, its path, intensity, spectral composition, and shape are significantly affected. This dissertation presents a new method for highly physical SRP modeling in Earth's penumbra called Solar radiation pressure with Oblateness and Lower Atmospheric Absorption, Refraction, and Scattering (SOLAARS). The fundamental geometry and approach mirror past work, where the solar radiation field is modeled using a number of light rays, rather than treating the Sun as a single point source. This dissertation aims to clarify this approach, simplify its implementation, and model previously overlooked factors. The complex geometries involved in modeling penumbra solar radiation fields are described in a more intuitive and complete way to simplify implementation. Atmospheric effects due to solar radiation passing through the troposphere and stratosphere are modeled, and the results are tabulated to significantly reduce computational cost. SOLAARS includes new, more efficient and accurate approaches to modeling atmospheric effects, which allow us to consider the spatial and temporal variability in lower atmospheric conditions. A new approach to modeling the influence of Earth's polar flattening draws on past work to provide a relatively simple but accurate method for this important effect. Previous penumbra SRP models tend to lie at two extremes of complexity and computational cost, and so the significant improvement in accuracy provided by the complex models has often been lost in the interest of convenience and efficiency. This dissertation presents a simple model which provides an accurate alternative to the full, high-precision SOLAARS model with reduced complexity and computational cost. This simpler method is based on curve fitting to results of the full SOLAARS model and is called SOLAARS Curve Fit (SOLAARS-CF). Both the high-precision SOLAARS model and the simpler SOLAARS-CF model are applied to the Gravity Recovery and Climate Experiment (GRACE) satellites. Modeling results are compared to the sub-nm/s2 precision GRACE accelerometer data and the results of a traditional penumbra SRP model. These comparisons illustrate the improved accuracy of the SOLAARS and SOLAARS-CF models. A sensitivity analysis for the GRACE orbit illustrates the significance of various input parameters and features of the SOLAARS model for the results. The SOLAARS-CF model is applied to a study of penumbra SRP and the Earth flyby anomaly. Beyond the value of its results to the scientific community, this study provides an application example where the computational efficiency of the simplified SOLAARS-CF model is necessary. The Earth flyby anomaly is an open question in orbit determination which has gone unsolved for over 20 years. This study quantifies the influence of penumbra SRP modeling errors on the observed anomalies from the Galileo, Cassini, and Rosetta Earth flybys. The results of this study show that penumbra SRP is neither an explanation for nor a significant contributor to the Earth flyby anomaly.
NASA Astrophysics Data System (ADS)
Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon
2018-05-01
The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the growing availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous in order to achieve the complex task of comparing spatial patterns. SPAEF, its three components individually and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
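A compact implementation of the promoted metric, following the published formulation (alpha = correlation, beta = ratio of coefficients of variation, gamma = overlap of z-scored histograms), might look as follows; flatten the two maps and drop no-data cells first, and note that the bin count is a free choice.

```python
import numpy as np

def spaef(obs, sim, bins=100):
    # SPAEF = 1 - sqrt((a-1)^2 + (b-1)^2 + (g-1)^2)
    a = np.corrcoef(obs, sim)[0, 1]
    b = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    zo = (obs - obs.mean()) / obs.std()           # z-scoring makes g unit-free
    zs = (sim - sim.mean()) / sim.std()
    lo, hi = min(zo.min(), zs.min()), max(zo.max(), zs.max())
    ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
    hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
    g = np.minimum(ho, hs).sum() / ho.sum()       # histogram intersection
    return 1.0 - np.sqrt((a - 1)**2 + (b - 1)**2 + (g - 1)**2)
```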
Pyrazole bridged dinuclear Cu(II) and Zn(II) complexes as phosphatase models: Synthesis and activity
NASA Astrophysics Data System (ADS)
Naik, Krishna; Nevrekar, Anupama; Kokare, Dhoolesh Gangaram; Kotian, Avinash; Kamat, Vinayak; Revankar, Vidyanand K.
2016-12-01
The present work describes the synthesis of the dibridged dinuclear [Cu2L2(μ2-NNpyr)(NO3)2(H2O)2] and [Zn2L(μ-OH)(μ-NNpyr)(H2O)2] complexes derived from a pyrazole-based ligand, bis(2-hydroxy-3-methoxybenzylidene)-1H-pyrazole-3,5-dicarbohydrazide. The ligand shows dimeric chelate behaviour towards copper but monomeric behaviour towards its zinc counterpart. Spectroscopic evidence affirms an octahedral environment around the metal ions in the solution state and the non-electrolytic nature of the complexes. Both complexes are active catalysts for phosphomonoester hydrolysis, with first-order kcat values in the range of 2 × 10-3 s-1. The zinc complex exhibited promising catalytic efficiency for the hydrolysis. The dinuclear complexes hydrolyse via Lewis acid activation, whereby the phosphate esters are preferentially bound in a bidentate bridging fashion and subsequently undergo nucleophilic attack to release the phosphate group.
NASA Astrophysics Data System (ADS)
Suppachoknirun, Theerapat; Tutuncu, Azra N.
2017-12-01
With increasing production from shale gas and tight oil reservoirs, horizontal drilling and multistage hydraulic fracturing have become routine procedures in unconventional field development efforts. Natural fractures play a critical role in hydraulic fracture growth, subsequently affecting the stimulated reservoir volume and the production efficiency. Moreover, the existing fractures can also contribute to the pressure-dependent fluid leak-off during the operations. Hence, a reliable identification of the discrete fracture network covering the zone of interest prior to the hydraulic fracturing design needs to be incorporated into the hydraulic fracturing and reservoir simulations for a realistic representation of the in situ reservoir conditions. In this research study, an integrated 3-D fracture and fluid flow model has been developed using a new approach to simulate the fluid flow and deliver reliable production forecasting in naturally fractured and hydraulically stimulated tight reservoirs. The model was created with three key modules. A complex 3-D discrete fracture network model introduces realistic natural fracture geometry with the associated fractured reservoir characteristics. A hydraulic fracturing model is created utilizing the discrete fracture network for simulation of the hydraulic fracture and flow in the complex discrete fracture network. Finally, a reservoir model with the production grid system is used, allowing the user to efficiently perform the fluid flow simulation in tight formations with complex fracture networks. The complex discrete natural fracture model, the integrated discrete fracture model for the hydraulic fracturing, the fluid flow model, and the input dataset have been validated against microseismic fracture mapping and commingled production data obtained from a well pad with three horizontal production wells located in the Eagle Ford oil window in south Texas. Two other fracturing geometries were also evaluated to optimize the cumulative production of the three wells individually. A significant reduction in the production rate at early production times is anticipated in tight reservoirs regardless of the fracturing technique implemented. The simulations conducted using the alternating fracturing technique led to more oil production than when zipper fracturing was used for a 20-year production period. Yet, due to the decline experienced, the differences in cumulative production become smaller; moreover, alternating fracturing is not practical to implement, whereas field application of the zipper fracturing technique is more practical and widely used.
NASA Astrophysics Data System (ADS)
McLarty, Dustin Fogle
Distributed energy systems are a promising means by which to reduce both emissions and costs. Continuous generators must be responsive and highly efficient to support building dynamics and intermittent on-site renewable power. Fuel cell-gas turbine hybrids (FC/GT) are fuel-flexible generators capable of ultra-high efficiency, ultra-low emissions, and rapid power response. This work undertakes a detailed study of the electrochemistry, chemistry and mechanical dynamics governing the complex interaction between the individual systems in such a highly coupled hybrid arrangement. The mechanisms leading to the compressor stall/surge phenomena are studied for the increased risk posed to particular hybrid configurations. A novel fuel cell modeling method is introduced that captures various spatial resolutions, flow geometries, stack configurations and novel heat transfer pathways. Several promising hybrid configurations are analyzed throughout the work and a sensitivity analysis of seven design parameters is conducted. A simple estimating method is introduced for the combined system efficiency of a fuel cell and a turbine using component performance specifications. Existing solid oxide fuel cell technology is capable of hybrid efficiencies greater than 75% (LHV) operating on natural gas, and existing molten carbonate systems greater than 70% (LHV). A dynamic model is calibrated to accurately capture the physical coupling of a FC/GT demonstrator tested at UC Irvine. The 2900-hour experiment highlighted the sensitivity to small perturbations and a need for additional control development. Further sensitivity studies outlined the responsiveness and limits of different control approaches. The capability for substantial turn-down and load following through speed control and flow bypass, with minimal impact on the internal fuel cell thermal distribution, is particularly promising to meet local demands or provide dispatchable support for renewable power. Advanced control and dispatch heuristics are discussed using a case study of the UCI central plant. Thermal energy storage introduces a time horizon into the dispatch optimization which requires novel solution strategies. Highly efficient and responsive generators are required to meet the increasingly dynamic loads of today's efficient buildings and intermittent local renewable wind and solar power. Fuel cell gas turbine hybrids will play an integral role in the complex and ever-changing solution to local electricity production.
2.5D complex resistivity modeling and inversion using unstructured grids
NASA Astrophysics Data System (ADS)
Xu, Kaijun; Sun, Jie
2016-04-01
The complex resistivity of rocks and ores has long been recognized. Generally, the Cole-Cole model (CCM) is used to describe complex resistivity. It has been proved that the electrical anomaly of a geologic body can be quantitatively estimated from the CCM parameters, such as the direct resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). Thus it is very important to obtain the complex parameters of a geologic body. It is difficult to approximate complex structures and terrain using traditional rectangular grids. In order to enhance the numerical accuracy and rationality of modeling and inversion, we use an adaptive finite-element algorithm for forward modeling of the frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm in the inversion of 2.5D complex resistivity. An adaptive finite-element method is applied for solving the 2.5D complex resistivity forward problem of a horizontal electric dipole source. First of all, the CCM is introduced into Maxwell's equations to calculate the complex resistivity electromagnetic fields. Next, the pseudo-delta function is used to distribute the electric dipole source. Then the electromagnetic fields can be expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by inhomogeneities of anomalous conductivity. At last, we calculate the electromagnetic field response of complex geoelectric structures such as anticlines, synclines and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented based on the conjugate gradient algorithm. The conjugate gradient algorithm does not need to form the sensitivity matrix explicitly but directly computes the product of the sensitivity matrix, or its transpose, with a vector. In addition, the inversion target zones are segmented with fine grids and the background zones with coarse grids; this reduces the number of inversion cells, which is very helpful for improving the computational efficiency. The inversion results verify the validity and stability of the conjugate gradient inversion algorithm. The results of theoretical calculation indicate that the modeling and inversion of 2.5D complex resistivity using unstructured grids are feasible. Using unstructured grids can improve the accuracy of modeling, but inversion with a large number of grid cells is extremely time-consuming, so parallel computation of the inversion is necessary. Acknowledgments: We acknowledge the support of the National Natural Science Foundation of China (41304094).
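The CCM referred to above describes complex resistivity as a function of angular frequency through four parameters, ρ(ω) = ρ0[1 - m(1 - 1/(1 + (iωτ)^c))]. A short sketch for computing amplitude and phase spectra, with illustrative parameter values:

```python
import numpy as np

def cole_cole(omega, rho0, m, tau, c):
    # Pelton-form Cole-Cole complex resistivity.
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

omega = 2 * np.pi * np.logspace(-2, 4, 61)        # 0.01 Hz to 10 kHz
rho = cole_cole(omega, rho0=100.0, m=0.3, tau=0.1, c=0.5)
amplitude = np.abs(rho)                            # ohm-m
phase_mrad = 1e3 * np.angle(rho)                   # phase in milliradians
```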
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisitsa, Vadim, E-mail: lisitsavv@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk; Tcheverda, Vladimir
We present an algorithm for the numerical simulation of seismic wave propagation in models with a complex near-surface part and free surface topography. The approach is based on the combination of finite differences with the discontinuous Galerkin method. The discontinuous Galerkin method can be used on polyhedral meshes; thus, it is easy to handle the complex surfaces in the models. However, this approach is computationally intense in comparison with finite differences. Finite differences are computationally efficient, but in general, they require rectangular grids, leading to the stair-step approximation of the interfaces, which causes strong diffraction of the wavefield. In this research we present a hybrid algorithm where the discontinuous Galerkin method is used in a relatively small upper part of the model and finite differences are applied to the main part of the model.
Modeling Human Cancers in Drosophila.
Sonoshita, M; Cagan, R L
2017-01-01
Cancer is a complex disease that affects multiple organs. Whole-body animal models provide important insights into oncology that can lead to clinical impact. Here, we review novel concepts that Drosophila studies have established for cancer biology, drug discovery, and patient therapy. Genetic studies using Drosophila have explored the roles of oncogenes and tumor-suppressor genes that when dysregulated promote cancer formation, making Drosophila a useful model to study multiple aspects of transformation. Beyond mechanistic analyses, Drosophila has recently shown its value in facilitating drug development. Flies offer rapid, efficient platforms by which novel classes of drugs can be identified as candidate anticancer leads. Further, we discuss the use of Drosophila as a platform to develop therapies for individual patients by modeling the tumor's genetic complexity. Drosophila provides both a classical and a novel tool to identify new therapeutics, complementing other more traditional cancer tools. © 2017 Elsevier Inc. All rights reserved.
Star formation in a hierarchical model for Cloud Complexes
NASA Astrophysics Data System (ADS)
Sanchez, N.; Parravano, A.
The effects of the external and initial conditions on the star formation processes in Molecular Cloud Complexes are examined in the context of a schematic model. The model considers a hierarchical system with five predefined phases: warm gas, neutral gas, low density molecular gas, high density molecular gas and protostars. The model follows the mass evolution of each substructure by computing its mass exchange with its parent and children. The parent-child mass exchange depends on the radiation density at the interphase, which is produced by the radiation coming from the stars that form at the end of the hierarchical structure, and by the external radiation field. The system is chaotic in the sense that its temporal evolution is very sensitive to small changes in the initial or external conditions. However, global features such as the star formation efficiency and the Initial Mass Function are less affected by those variations.
Numerical Approximation of Elasticity Tensor Associated With Green-Naghdi Rate.
Liu, Haofei; Sun, Wei
2017-08-01
Objective stress rates are often used in commercial finite element (FE) programs. However, deriving a consistent tangent modulus tensor (also known as the elasticity tensor or material Jacobian) associated with the objective stress rates is challenging when complex material models are utilized. In this paper, an approximation method for the tangent modulus tensor associated with the Green-Naghdi rate of the Kirchhoff stress is employed to simplify the evaluation process. The effectiveness of the approach is demonstrated through the implementation of two user-defined fiber-reinforced hyperelastic material models. Comparisons between the approximation method and the closed-form analytical method demonstrate that the former can simplify the material Jacobian evaluation with satisfactory accuracy while retaining its computational efficiency. Moreover, since the approximation method is independent of material models, it can facilitate the implementation of complex material models in FE analysis using shell/membrane elements in Abaqus.
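The flavour of such an approximation can be conveyed with a finite-difference sketch: perturb the deformation gradient along each symmetric direction, re-evaluate the Kirchhoff stress, and difference. The fragment below uses a simple compressible neo-Hookean model and the Jaumann-style perturbation commonly quoted in the literature; the paper's Green-Naghdi variant differs in how the rotation terms enter, so treat this only as the general idea.

```python
import numpy as np

def kirchhoff_stress(F, mu=1.0, lam=1.0):
    # Compressible neo-Hookean Kirchhoff stress (illustrative material model).
    J = np.linalg.det(F)
    return mu * (F @ F.T - np.eye(3)) + lam * np.log(J) * np.eye(3)

def numerical_tangent(F, eps=1e-6):
    # C[:, :, i, j] ~ [tau(F + dF_ij) - tau(F)] / (J * eps) with the symmetric
    # perturbation dF_ij = (eps/2) * (e_i e_j^T + e_j e_i^T) F.
    tau0 = kirchhoff_stress(F)
    J = np.linalg.det(F)
    I = np.eye(3)
    C = np.zeros((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            dF = 0.5 * eps * (np.outer(I[i], I[j]) + np.outer(I[j], I[i])) @ F
            C[:, :, i, j] = (kirchhoff_stress(F + dF) - tau0) / (J * eps)
    return C

F = np.eye(3) + 0.1 * np.random.default_rng(0).random((3, 3))  # trial state
C = numerical_tangent(F)
```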
Metallurgical Plant Optimization Through the use of Flowsheet Simulation Modelling
NASA Astrophysics Data System (ADS)
Kennedy, Mark William
Modern metallurgical plants typically have complex flowsheets and operate on a continuous basis. Real time interactions within such processes can be complex and the impacts of streams such as recycles on process efficiency and stability can be highly unexpected prior to actual operation. Current desktop computing power, combined with state-of-the-art flowsheet simulation software like Metsim, allow for thorough analysis of designs to explore the interaction between operating rate, heat and mass balances and in particular the potential negative impact of recycles. Using plant information systems, it is possible to combine real plant data with simple steady state models, using dynamic data exchange links to allow for near real time de-bottlenecking of operations. Accurate analytical results can also be combined with detailed unit operations models to allow for feed-forward model-based-control. This paper will explore some examples of the application of Metsim to real world engineering and plant operational issues.
Three-dimensional curved grid finite-difference modelling for non-planar rupture dynamics
NASA Astrophysics Data System (ADS)
Zhang, Zhenguo; Zhang, Wei; Chen, Xiaofei
2014-11-01
In this study, we present a new method for simulating the 3-D dynamic rupture process occurring on a non-planar fault. The method is based on the curved-grid finite-difference method (CG-FDM) proposed by Zhang & Chen and Zhang et al. to simulate the propagation of seismic waves in media with arbitrary irregular surface topography. While keeping the advantages of conventional FDM, that is, computational efficiency and easy implementation, the CG-FDM is also flexible in modelling complex fault geometries by using general curvilinear grids, and is thus able to model the rupture dynamics of faults with complex geometry, such as obliquely dipping faults, non-planar faults, faults with step-overs and branching faults, even when irregular topography exists. The accuracy and robustness of this new method have been validated by comparison with the previous results of Day et al. and with benchmarks for rupture dynamics simulations. Finally, two simulations of rupture dynamics with complex fault geometry, that is, a non-planar fault and a fault rupturing a free surface with topography, are presented. A very interesting phenomenon was observed: topography can weaken the tendency for supershear transition to occur when the rupture breaks out at a free surface. Undoubtedly, this new method provides an effective, or at least an alternative, tool to simulate the rupture dynamics of a complex non-planar fault, and can be applied to model the rupture dynamics of a real earthquake with complex geometry.
Freire, Ricardo O; Rocha, Gerd B; Simas, Alfredo M
2006-03-01
Modeling lanthanide coordination compounds efficiently and accurately is central to the design of new ligands capable of forming stable and highly luminescent complexes. Accordingly, we present in this paper a report on the capability of various ab initio effective core potential calculations to reproduce the coordination polyhedron geometries of lanthanide complexes. Starting with all combinations of HF, B3LYP and MP2(Full) with STO-3G, 3-21G, 6-31G, 6-31G* and 6-31+G basis sets for [Eu(H2O)9]3+ and closing with more manageable calculations for the larger complexes, we computed the fully predicted ab initio geometries for a total of 80 calculations on 52 complexes of Sm(III), Eu(III), Gd(III), Tb(III), Dy(III), Ho(III), Er(III) and Tm(III), the largest containing 164 atoms. Our results indicate that RHF/STO-3G/ECP appears to be the most efficient model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. Moreover, augmenting the basis set and/or including electron correlation generally enlarged the deviations and degraded the quality of the predicted coordination polyhedron crystallographic geometry. Our results further indicate that Cosentino et al.'s suggestion of using RHF/3-21G/ECP geometries appears indeed to be a more robust, but not necessarily more accurate, recommendation for the general lanthanide complex case.
NASA Astrophysics Data System (ADS)
Bosikov, I. I.; Klyuev, R. V.; Revazov, V. Ch; Pilieva, D. E.
2018-03-01
The article describes the research and analysis of hazardous processes occurring in a natural-industrial system and the assessment of the effectiveness of its functioning using mathematical models. Studies of the regularities governing the functioning of natural-industrial systems are becoming increasingly relevant in connection with the task of modernizing production and the economy of Russia as a whole. Because a significant amount of the available data is poorly structured, it is difficult to establish regulations for the effective functioning of production processes and of social and natural complexes under which sustainable development of the natural-industrial system of the mining and processing complex would be ensured. Therefore, scientific and applied problems whose solution makes it possible to formalize the hidden structural patterns in the functioning of the natural-industrial system and to make organizational and technological management decisions that improve the system's efficiency are highly relevant.
Generic strategies for chemical space exploration.
Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F
2014-01-01
The chemical universe of molecules reachable from a set of start compounds by iterative application of a finite number of reactions is usually so vast, that sophisticated and efficient exploration strategies are required to cope with the combinatorial complexity. A stringent analysis of (bio)chemical reaction networks, as approximations of these complex chemical spaces, forms the foundation for the understanding of functional relations in Chemistry and Biology. Graphs and graph rewriting are natural models for molecules and reactions. Borrowing the idea of partial evaluation from functional programming, we introduce partial applications of rewrite rules. A framework for the specification of exploration strategies in graph-rewriting systems is presented. Using key examples of complex reaction networks from carbohydrate chemistry we demonstrate the feasibility of this high-level strategy framework. While being designed for chemical applications, the framework can also be used to emulate higher-level transformation models such as illustrated in a small puzzle game.
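The exploration strategies discussed can be caricatured in a few lines: molecules as strings, reactions as rewrite functions, and a bounded breadth-first closure as one simple strategy. Real graph rewriting operates on molecular graphs with subgraph matching; everything below is an illustrative stand-in.

```python
rules = [
    lambda mol: [mol + "O"] if mol.count("O") < 3 else [],   # toy "oxidation"
    lambda mol: [mol[:-1]] if mol.endswith("O") else [],     # toy "reduction"
]

def explore(start_compounds, max_rounds=4):
    # Bounded breadth-first closure of the start set under the rewrite rules.
    known = set(start_compounds)
    frontier = set(start_compounds)
    for _ in range(max_rounds):
        new = {p for mol in frontier for rule in rules for p in rule(mol)}
        frontier = new - known
        if not frontier:
            break
        known |= frontier
    return known

print(sorted(explore({"C", "CC"})))
```

Strategy variants (depth bounds, filters on molecule properties, rule priorities) then amount to different policies for growing and pruning the frontier.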
Debating complexity in modeling
Hunt, Randall J.; Zheng, Chunmiao
1999-01-01
As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.
Structure of the Rigor Actin-Tropomyosin-Myosin Complex
Behrmann, Elmar; Müller, Mirco; Penczek, Pawel A.; Mannherz, Hans Georg; Manstein, Dietmar J.; Raunser, Stefan
2014-01-01
The interaction of myosin with actin filaments is the central feature of muscle contraction and cargo movement along actin filaments of the cytoskeleton. Myosin converts the chemical energy stored in ATP into force and movement along actin filaments. Myosin binding to actin induces conformational changes that are coupled to the nucleotide-binding pocket and amplified by a specialized region of the motor domain for efficient force generation. Tropomyosin plays a key role in regulating the productive interaction between myosins and actin. Here, we report the 8 Å resolution structure of the actin-tropomyosin-myosin complex determined by cryo-electron microscopy. The pseudo-atomic model of the complex obtained from fitting crystal structures into the map defines the large actin-myosin-tropomyosin interface and the molecular interactions between the proteins in detail and allows us to propose a structural model for tropomyosin-dependent myosin binding to actin and actin-induced nucleotide release from myosin. PMID:22817895
Analytical Micromechanics Modeling Technique Developed for Ceramic Matrix Composites Analysis
NASA Technical Reports Server (NTRS)
Min, James B.
2005-01-01
Ceramic matrix composites (CMCs) promise many advantages for next-generation aerospace propulsion systems. Specifically, carbon-reinforced silicon carbide (C/SiC) CMCs enable higher operational temperatures and provide potential component weight savings by virtue of their high specific strength. These attributes may provide systemwide benefits. Higher operating temperatures lessen or eliminate the need for cooling, thereby reducing both fuel consumption and the complex hardware and plumbing required for heat management. This, in turn, lowers system weight, size, and complexity, while improving efficiency, reliability, and service life, resulting in overall lower operating costs.
Computer-aided programming for message-passing systems: Problems and a solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, M.Y.; Gajski, D.D.
1989-12-01
As the number of processors and the complexity of problems to be solved increase, programming multiprocessing systems becomes more difficult and error-prone. Program development tools are necessary, since programmers cannot develop complex parallel programs efficiently without them. Parallel models of computation, parallelization problems, and tools for computer-aided programming (CAP) are discussed. As an example, a CAP tool that performs scheduling and inserts communication primitives automatically is described. It also generates performance estimates and other program-quality measures to help programmers improve their algorithms and programs.
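A hypothetical sketch of what such a CAP tool automates: given a task DAG and a task-to-processor assignment, walk the tasks in topological order and insert send/recv primitives for every edge that crosses processors. The tasks, edges, and assignment below are invented for illustration, not the tool's actual algorithm:

```python
tasks = ["A", "B", "C", "D"]                      # topologically ordered
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
assignment = {"A": 0, "B": 0, "C": 1, "D": 0}     # task -> processor

program = {0: [], 1: []}                          # per-processor code lists
for t in tasks:
    p = assignment[t]
    program[p].append(f"compute {t}")
    for src, dst in edges:
        if src == t and assignment[dst] != p:     # cross-processor edge
            program[p].append(f"send result({t}) to P{assignment[dst]}")
            program[assignment[dst]].append(f"recv result({t}) from P{p}")

for p, code in sorted(program.items()):
    print(f"P{p}:", "; ".join(code))
```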
Computing quantum hashing in the model of quantum branching programs
NASA Astrophysics Data System (ADS)
Ablayev, Farid; Ablayev, Marat; Vasiliev, Alexander
2018-02-01
We investigate the branching program complexity of quantum hashing. We consider a quantum hash function that maps elements of a finite field into quantum states, and we require that this function be preimage-resistant and collision-resistant. We consider two complexity measures for Quantum Branching Programs (QBPs): the number of qubits and the number of computational steps. We show that the quantum hash function can be computed efficiently. Moreover, we prove that such a QBP construction is optimal; that is, we prove lower bounds that match the complexity of the constructed quantum hash function.
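For concreteness, one amplitude form of quantum hashing used in this line of work (assumed here; the paper's exact construction may differ) maps an element x of the field F_q to a small quantum state via a parameter set B = {b_1, ..., b_d}:

```latex
% Assumed amplitude form of a quantum hash over F_q. The set
% B = {b_1, ..., b_d} is chosen so that distinct inputs yield
% near-orthogonal states while using only O(log q) qubits.
\[
  \lvert \psi_B(x) \rangle \;=\; \frac{1}{\sqrt{d}} \sum_{i=1}^{d}
  \lvert i \rangle \otimes \left( \cos\frac{2\pi b_i x}{q}\,\lvert 0 \rangle
  + \sin\frac{2\pi b_i x}{q}\,\lvert 1 \rangle \right)
\]
```

Under this form, collision resistance amounts to bounding the inner product of hash states for distinct inputs, while preimage resistance follows from the limited classical information extractable from O(log q) qubits.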
Yuki, Masahiro; Sakata, Ken; Hirao, Yoshifumi; Nonoyama, Nobuaki; Nakajima, Kazunari; Nishibayashi, Yoshiaki
2015-04-01
Thiolate-bridged dinuclear ruthenium and iron complexes are found to work as efficient catalysts for the oxidation of molecular dihydrogen in protic solvents such as water and methanol under ambient reaction conditions. Heterolytic cleavage of the coordinated molecular dihydrogen at the dinuclear complexes and the subsequent oxidation of the resulting hydride complexes are the key steps that drive the catalytic reaction. The catalytic activity of the dinuclear complexes toward the chemical oxidation of molecular dihydrogen reaches a turnover number (TON) of up to 10,000, and electrooxidation of molecular dihydrogen proceeds quite rapidly. Density functional theory (DFT) calculations on the reaction pathway indicate that a synergistic effect between the two ruthenium atoms plays an important role in realizing efficient catalytic oxidation of molecular dihydrogen. The dinuclear ruthenium complex thus serves not only as an effective catalyst for the chemical and electrochemical oxidation of molecular dihydrogen but also as an efficient organometallic anode catalyst for the fuel cell. We consider that the results described in this paper provide valuable information for developing highly efficient, low-cost transition-metal complexes as anode catalysts in fuel cells.
NASA Astrophysics Data System (ADS)
Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.
2015-12-01
Distributed models offer the potential to resolve catchment systems in more detail and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of parameters that are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. To help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models by constraining spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state space, and then to efficiently search a large, complex parameter space for behavioural parameter sets whose predictions fall within this region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
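A minimal sketch of the interval-to-objective recasting, with hypothetical metric names and bounds (Borg itself is not implemented here; any MOEA minimising these objectives would drive solutions toward the behavioural hyper-volume):

```python
# Each objective is the distance by which a simulated metric falls outside
# its behavioural interval (zero inside). Parameter sets with all objectives
# at zero lie inside the behavioural hyper-volume.

def interval_objectives(simulated, intervals):
    objectives = []
    for name, value in simulated.items():
        lo, hi = intervals[name]
        objectives.append(max(lo - value, 0.0, value - hi))
    return objectives

intervals = {"peak_flow_m3s": (10.0, 14.0), "gw_level_diff_m": (0.0, 0.5)}
simulated = {"peak_flow_m3s": 15.2, "gw_level_diff_m": 0.3}
print(interval_objectives(simulated, intervals))  # first metric lies outside
```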
Tamaki, Yusuke; Morimoto, Tatsuki; Koike, Kazuhide; Ishitani, Osamu
2012-01-01
Previously undescribed supramolecules constructed with various ratios of two kinds of Ru(II) complexes, a photosensitizer and a catalyst, were synthesized. These complexes can photocatalyze the reduction of CO2 to formic acid with high selectivity and durability using a wide range of wavelengths of visible light and NADH model compounds as electron donors in a mixed solution of dimethylformamide-triethanolamine. Using a higher ratio of the photosensitizer unit to the catalyst unit led to a higher yield of formic acid. In particular, of the reported photocatalysts, a trinuclear complex with two photosensitizer units and one catalyst unit photocatalyzed CO2 reduction (Φ_HCOOH = 0.061, TON_HCOOH = 671) with the fastest reaction rate (TOF_HCOOH = 11.6 min⁻¹). On the other hand, photocatalyses of a mixed system containing two kinds of model mononuclear Ru(II) complexes, and of supramolecules with a higher ratio of the catalyst unit, were much less efficient, and black oligomers and polymers were produced from the Ru complexes during the photocatalytic reactions, which reduced the yield of formic acid. The photocatalytic formation of formic acid using the supramolecules described herein proceeds via two sequential processes: the photochemical reduction of the photosensitizer unit by NADH model compounds and intramolecular electron transfer to the catalyst unit. PMID:22908243
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator was adopted. For further efficiency gains, the EM fields for each frequency were calculated on independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, a low-rank approximation of the linearized model resolution matrix is used. In order to bridge the gap between the initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighbourhoods of the points with the largest variations. A series of numerical tests was performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes that account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess. Additionally, it is demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
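A hedged sketch of the inverse-mesh refinement criterion on a regular-grid stand-in for the FEM mesh (the gradient-norm variation measure and the fraction parameter are assumptions for illustration, not the paper's exact estimator):

```python
import numpy as np

def cells_to_refine(model, fraction=0.05):
    # Spatial variation of the imaged parameter per cell, measured here as
    # the norm of a finite-difference gradient.
    gy, gx = np.gradient(model)
    variation = np.hypot(gx, gy)
    # Flag the cells with the largest variation for refinement.
    threshold = np.quantile(variation, 1.0 - fraction)
    return variation >= threshold        # boolean refinement mask

model = np.random.rand(32, 32)           # stand-in resistivity image
mask = cells_to_refine(model)
print(int(mask.sum()), "cells flagged for refinement")
```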
Martin, Timothy M; Wysocki, Beata J; Beyersdorf, Jared P; Wysocki, Tadeusz A; Pannier, Angela K
2014-08-01
Gene delivery systems transport exogenous genetic information to cells or biological systems, with the potential to directly alter endogenous gene expression and behavior, and with applications in functional genomics, tissue engineering, medical devices, and gene therapy. Nonviral systems offer advantages over viral systems because of their low immunogenicity, inexpensive synthesis, and easy modification, but suffer from lower transfection levels. Modeling gene transfer offers perspective on and interpretation of complex cellular mechanisms, including those of nonviral gene delivery, where the exact mechanisms are unknown. Here, we introduce a novel telecommunications model of the nonviral gene delivery process in which the delivery of a gene to a cell is synonymous with the delivery of a packet of information to a destination computer within a packet-switched computer network. Such a model uses nodes and layers to simplify the complexity of modeling the transfection process and to overcome several challenges of existing models. These challenges include a limited scope and a limited time frame, which often do not incorporate biological effects known to affect transfection. The telecommunications model was constructed in MATLAB to model lipoplex delivery of the gene encoding green fluorescent protein (GFP) to HeLa cells. Mitosis and toxicity events were included in the model, resulting in simulation outputs of nuclear internalization and transfection efficiency that correlated with experimental data. A priori predictions based on model sensitivity analysis suggest that increasing endosomal escape and decreasing lysosomal degradation, protein degradation, and GFP-induced toxicity can improve transfection efficiency threefold. Application of the telecommunications model to nonviral gene delivery offers insight into the development of new gene delivery systems with therapeutically relevant transfection levels.
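An illustrative sketch of the packet/node analogy (not the authors' MATLAB model): each gene-carrying complex is a packet traversing layers with assumed per-layer success probabilities, and a packet surviving all layers counts as a transfection event. All probabilities below are invented placeholders:

```python
import random

LAYERS = [("cellular_uptake", 0.6), ("endosomal_escape", 0.2),
          ("avoid_lysosomal_degradation", 0.7), ("nuclear_import", 0.3)]

def simulate(n_packets, layers=LAYERS, seed=1):
    rng = random.Random(seed)
    # A packet is "delivered" only if it clears every layer in sequence.
    delivered = sum(
        all(rng.random() < p for _, p in layers) for _ in range(n_packets)
    )
    return delivered / n_packets

print(f"simulated transfection efficiency: {simulate(10_000):.3f}")
```

In this framing, a sensitivity analysis amounts to perturbing individual layer probabilities (e.g. raising endosomal escape) and observing the change in end-to-end delivery rate.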
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
Following the success in incorporating perceptual models into audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly or indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome the high-complexity issues that arise in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, and 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model to evaluate different candidate solutions. In this dissertation, frequency-pruning and detector-pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to an 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals; it employs the proposed auditory-pattern-combining technique together with a look-up table storing representative auditory patterns. The second problem is to obtain an estimate of the auditory representation that minimizes a perceptual objective function and to transform the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of the auditory model stages to test different candidate time/frequency vectors when minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages, ensuring that a time/frequency mapping corresponding to the estimated auditory representation is obtained. This paradigm was successfully incorporated into a perceptual speech enhancement algorithm and a sinusoidal component selection task.
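A hedged illustration of the frequency-pruning idea: drop auditory channels whose frame energy falls far below the frame maximum, so that the later, more expensive model stages run on fewer channels. The 40 dB margin and the power-law loudness stand-in are assumptions, not the dissertation's actual model stages:

```python
import numpy as np

def prune_channels(band_energy_db, margin_db=40.0):
    # Keep only channels within margin_db of the loudest channel this frame.
    return band_energy_db >= band_energy_db.max() - margin_db

rng = np.random.default_rng(0)
band_energy_db = rng.uniform(-80.0, 0.0, size=64)    # stand-in frame energies
keep = prune_channels(band_energy_db)

intensity = 10.0 ** (band_energy_db[keep] / 10.0)    # dB -> linear intensity
partial_loudness = np.sum(intensity ** 0.3)          # crude Stevens-law stand-in
print(f"{int(keep.sum())}/64 channels kept, loudness ~ {partial_loudness:.2f}")
```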
Reduced Complexity Modelling of Urban Floodplain Inundation
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Brasington, J.; Mihir, M.
2004-12-01
Significant recent advances in floodplain inundation modelling have been achieved by directly coupling 1-D channel hydraulic models with a raster storage-cell approximation for floodplain flows. The strengths of this reduced-complexity model structure derive from its explicit dependence on a digital elevation model (DEM) to parameterize flows through riparian areas, providing a computationally efficient algorithm to model heterogeneous floodplains. Previous applications of this framework have generally used mid-range grid scales (10¹-10² m), showing the capacity of the models to simulate long reaches (10³-10⁴ m). However, the increasing availability of precision DEMs derived from airborne laser altimetry (LIDAR) enables their use at very high spatial resolutions (10⁰-10¹ m). This spatial scale offers the opportunity to incorporate the complexity of the built environment directly within the floodplain DEM and to simulate urban flooding. This poster describes a series of experiments designed to explore model functionality at these reduced scales. Important questions raised by this new approach are considered, concerning the reliability and representation of the floodplain topography and built environment, and the resultant sensitivity of inundation forecasts. The experiments apply a raster floodplain model to reconstruct a 1:100 year flood event on the River Granta in eastern England, which flooded 72 properties in the town of Linton in October 2001. The simulations use a nested-scale model to maintain efficiency. A 2 km by 4 km urban zone is represented by a high-resolution DEM derived from single-pulse LIDAR data supplied by the UK Environment Agency, together with surveyed data and aerial photography. Novel methods of processing the raw data to provide the required detail of individual structures are investigated and compared. This is then embedded within a lower-resolution model application at the reach scale, which provides boundary conditions based on recorded flood stage. The high-resolution predictions, on a scale commensurate with urban structures, make possible a multi-criteria validation that combines verification of reach-scale characteristics, such as downstream flow and inundation extent, with internal validation of flood depth at individual sites.
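A toy one-dimensional sketch of a raster storage-cell update of the kind described above (not the authors' code): the flux per unit width between adjacent cells follows a Manning-type relation driven by the water-surface slope. All parameter values (n, dx, dt) are arbitrary:

```python
import numpy as np

def step(depth, dem, n=0.03, dx=10.0, dt=0.1):
    stage = dem + depth                              # water-surface elevation
    q = np.zeros(len(depth) - 1)                     # inter-cell flux (m^2/s)
    for i in range(len(q)):
        slope = (stage[i] - stage[i + 1]) / dx
        # Effective flow depth between the two cells:
        h = max(stage[i], stage[i + 1]) - max(dem[i], dem[i + 1])
        if h > 0.0 and slope != 0.0:
            # Manning-type relation: q = h^(5/3) * |S|^(1/2) / n, signed by S.
            q[i] = np.sign(slope) * h ** (5.0 / 3.0) * abs(slope) ** 0.5 / n
    new = depth.copy()
    new[:-1] -= q * dt / dx                          # outflow from upstream cell
    new[1:] += q * dt / dx                           # inflow to downstream cell
    return np.maximum(new, 0.0)

dem = np.array([2.0, 1.5, 1.2, 1.0])                 # ground elevations (m)
depth = np.array([1.0, 0.2, 0.0, 0.0])               # initial water depths (m)
print(step(depth, dem))
```

Because each cell only exchanges water with its neighbours through DEM-controlled fluxes, the scheme scales naturally to high-resolution urban DEMs, which is what makes the LIDAR-based experiments above computationally tractable.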