Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas
2005-11-01
Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.
Simple models for biomembrane structure and dynamics
NASA Astrophysics Data System (ADS)
Brown, Frank L. H.
2007-07-01
Simulation of biomembranes over length and time scales relevant to cellular biology is not currently feasible with molecular dynamics including full atomic detail. Barring an unforeseen revolution in the computer industry, this situation will not change for many decades. We present two coarse-grained simulation models for biomembranes that treat water implicitly (i.e., no water molecules appear in our simulations; the hydrophobic effect, hydrodynamics, and related properties are approximately included without simulating the solvent). These models enable the study of systems and phenomena previously intractable to simulation. The influence of membrane-bound proteins on lipid ordering and the diffusion of membrane-bound proteins are discussed.
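The implicit-solvent idea described above can be caricatured with overdamped (Brownian) Langevin dynamics, where the solvent enters only through a friction coefficient and random thermal kicks. The sketch below is a generic illustration with invented parameters, not the authors' model:

```python
import numpy as np

def langevin_step(pos, forces, gamma, kT, dt, rng):
    """One overdamped Langevin (Brownian dynamics) step: the solvent
    appears only via the friction gamma and the random kicks, so no
    water particles are simulated (implicit solvent)."""
    noise = rng.normal(0.0, np.sqrt(2.0 * kT * dt / gamma), pos.shape)
    return pos + forces * dt / gamma + noise

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(50, 2))   # 50 coarse-grained beads in 2D
for _ in range(100):
    forces = -0.1 * pos                      # toy harmonic confinement
    pos = langevin_step(pos, forces, gamma=1.0, kT=1.0, dt=0.01, rng=rng)
print(pos.shape)  # (50, 2)
```

A real coarse-grained lipid model would replace the toy confinement force with pair potentials between head and tail beads.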
Introductory lecture: basic quantities in model biomembranes.
Nagle, John F
2013-01-01
One of the many aspects of membrane biophysics dealt with in this Faraday Discussion regards the material moduli that describe energies at a supramolecular level. This introductory lecture first critically reviews differences in reported numerical values of the bending modulus K(C), a central property for the biologically important flexibility of membranes. It is speculated that there may be a reason why the shape analysis method tends to give larger values of K(C) than either the micromechanical manipulation method or the more recent X-ray method, which agree very well with each other. Another theme of membrane biophysics is the use of simulations to provide exquisite detail of structures and processes. This lecture critically reviews the application of atomic-level simulations to the quantitative structure of simple single-component lipid bilayers, and diagnostics are introduced to evaluate simulations. A further theme of this Faraday Discussion was lateral heterogeneity in biomembranes with many different lipids. Coarse-grained simulations and analytical theories promise to synergistically enhance experimental studies when their interaction parameters are tuned to agree with experimental data, such as the slopes of experimental tie lines in ternary phase diagrams. Finally, attention is called to contributions that add relevant biological molecules to bilayers and to contributions that study the exciting shape changes and different non-bilayer structures formed with different lipids.
Ferroelectric active models of ion channels in biomembranes.
Bystrov, V S; Lakhno, V D; Molchanov, M
1994-06-21
Ferroactive models of ion channels in the theory of biological membranes are presented. The main equations are derived and their possible solutions are shown. The estimates of some experimentally measured parameters are given. Possible physical consequences of the suggested models are listed and the possibility of their experimental finding is discussed. The functioning of the biomembrane's ion channel is qualitatively described on the basis of the suggested ferroactive models. The main directions and prospects for development of the ferroactive approach to the theory of biological membranes and their structures are indicated.
Peridynamic Modeling of Ruptures in Biomembranes
Jesorka, Aldo; Bertoldi, Katia
2016-01-01
We simulate the formation of spontaneous ruptures in supported phospholipid double bilayer membranes, using peridynamic modeling. Experiments performed on spreading double bilayers typically show two distinct kinds of ruptures, floral and fractal, which form spontaneously in the distal (upper) bilayer at late stages of double bilayer formation on high energy substrates. It is, however, currently unresolved which factors govern the occurrence of either rupture type. Variations in the distance between the two bilayers, and the occurrence of interconnections (“pinning sites”) are suspected of contributing to the process. Our new simulations indicate that the pinned regions which form, presumably due to Ca2+ ions serving as bridging agent between the distal and the proximal bilayer, act as nucleation sites for the ruptures. Moreover, assuming that the pinning sites cause a non-zero shear modulus, our simulations also show that they change the rupture mode from floral to fractal. At zero shear modulus the pores appear to be circular, subsequently evolving into floral pores. With increasing shear modulus the pore edges start to branch, favoring fractal morphologies. We conclude that the pinning sites may indirectly determine the rupture morphology by contributing to shear stress in the distal membrane. PMID:27829001
Conjugation of squalene to acyclovir improves the affinity for biomembrane models.
Sarpietro, Maria Grazia; Micieli, Dorotea; Rocco, Flavio; Ceruti, Maurizio; Castelli, Francesco
2009-12-01
Differential scanning calorimetry was used to study the interaction of acyclovir and its prodrug squalenoyl-acyclovir (obtained by conjugating 1,1',2-tris-nor-squalene acid (squaleneCOOH) with acyclovir) with biomembrane models made of DMPC multilamellar vesicles, with the aim of verifying whether the prodrug interacts more strongly than the free drug. Multilamellar vesicles were prepared in the presence of increasing molar fractions of acyclovir, squaleneCOOH, or prodrug, and the effect of the compounds on the thermotropic behavior of the vesicles was examined: acyclovir had no effect, whereas squaleneCOOH and the prodrug had a strong one. To evaluate whether acyclovir, squaleneCOOH, and the prodrug can be absorbed by the biomembrane model, the compounds were left in contact with the biomembrane model and their possible uptake was assessed from the effect on its thermotropic behavior; only a very small uptake was detected for all the compounds. To assess the potential use of liposomes as a delivery system for the prodrug, the biomembrane models were incubated with liposomes loaded with the compounds, and the transfer of the compounds from the loaded liposomes to the unloaded biomembrane model was followed. The results suggest that liposomes could be used to deliver squalenoyl-acyclovir to the biomembrane model.
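The DSC readout used in studies like this one is conventionally modeled with the two-state van't Hoff expression for the excess heat capacity of a lipid transition. The enthalpy and midpoint below are assumed round numbers for illustration, not values from the study:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def excess_heat_capacity(T, dH_vH, Tm):
    """Two-state van't Hoff excess heat capacity (J/(mol K)) at
    temperatures T (K); dH_vH is the van't Hoff enthalpy (J/mol)
    and Tm the transition midpoint (K)."""
    K = np.exp(-(dH_vH / R) * (1.0 / T - 1.0 / Tm))
    return dH_vH**2 * K / (R * T**2 * (1.0 + K) ** 2)

# Assumed numbers loosely in the range of a sharp DMPC main transition
T = np.linspace(290.0, 305.0, 1501)
cp = excess_heat_capacity(T, dH_vH=500e3, Tm=297.0)
print(round(float(T[np.argmax(cp)]), 1))  # 297.0 -- the peak sits at Tm
```

Membrane-perturbing additives show up in such curves as shifts of Tm and broadening (a smaller effective van't Hoff enthalpy, i.e., reduced cooperativity).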
Rukhadze, Marina; Dzidziguri, Diana; Giorgobiani, Nana; Kerkenjia, Salome
2011-12-01
The structure of biomembranes was imitated by introducing the nonionic surfactant polyoxyethylene (23) dodecyl ether (Brij-35), cholic acid, and an endogenic thermostable protein complex (14-65 kDa) into the mobile phase. The influence of the concentration of these additives on the retention of the model compounds was studied. A competing interaction of cholic acid and the endogenic thermostable protein complex in the lipid bilayer model was revealed on the basis of the chromatographic data. The efficiency of the chromatographic column with respect to the solutes was increased by the addition of the endogenic thermostable protein complex to the mobile phase containing Brij-35 and cholic acid.
Element-specific density profiles in interacting biomembrane models
NASA Astrophysics Data System (ADS)
Schneck, Emanuel; Rodriguez-Loureiro, Ignacio; Bertinetti, Luca; Marin, Egor; Novikov, Dmitri; Konovalov, Oleg; Gochev, Georgi
2017-03-01
Surface interactions involving biomembranes, such as cell–cell interactions or membrane contacts inside cells play important roles in numerous biological processes. Structural insight into the interacting surfaces is a prerequisite to understand the interaction characteristics as well as the underlying physical mechanisms. Here, we work with simplified planar experimental models of membrane surfaces, composed of lipids and lipopolymers. Their interaction is quantified in terms of pressure–distance curves using ellipsometry at controlled dehydrating (interaction) pressures. For selected pressures, their internal structure is investigated by standing-wave x-ray fluorescence (SWXF). This technique yields specific density profiles of the chemical elements P and S belonging to lipid headgroups and polymer chains, as well as counter-ion profiles for charged surfaces.
Interactions of PAMAM dendrimers with negatively charged model biomembranes.
Yanez Arteta, Marianna; Ainalem, Marie-Louise; Porcar, Lionel; Martel, Anne; Coker, Helena; Lundberg, Dan; Chang, Debby P; Soltwedel, Olaf; Barker, Robert; Nylander, Tommy
2014-11-13
We have investigated the interactions between cationic poly(amidoamine) (PAMAM) dendrimers of generation 4 (G4), a potential gene transfection vector, with net-anionic model biomembranes composed of different ratios of zwitterionic phosphocholine (PC) and anionic phospho-L-serine (PS) phospholipids. Two types of model membranes were used: solid-supported bilayers, prepared with lipids carrying palmitoyl-oleoyl (PO) and diphytanoyl (DPh) acyl chains, and free-standing bilayers, formed at the interface between two aqueous droplets in oil (droplet interface bilayers, DIBs) using the DPh-based lipids. G4 dendrimers were found to translocate through POPC:POPS bilayers deposited on silica surfaces. The charge density of the bilayer affects translocation, which is reduced when the ionic strength increases. This shows that the dendrimer-bilayer interactions are largely controlled by their electrostatic attraction. The structure of the solid-supported bilayers remains intact upon translocation of the dendrimer. However, the amount of lipids in the bilayer decreases and dendrimer/lipid aggregates are formed in bulk solution, which can be deposited on the interfacial layers upon dilution of the system with dendrimer-free solvent. Electrophysiology measurements on DIBs confirm that G4 dendrimers cross the lipid membranes containing PS, which then become more permeable to ions. The obtained results have implications for PAMAM dendrimers as delivery vehicles to cells.
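The reported ionic-strength dependence is consistent with simple Debye screening of the dendrimer-bilayer electrostatic attraction. A quick calculation of the Debye length in water (standard physical constants, room temperature) shows how added salt shortens the range of the interaction:

```python
import math

def debye_length_nm(ionic_strength_M, T=298.15):
    """Debye screening length in water (nm) for a 1:1 electrolyte:
    lambda_D = sqrt(eps0*epsr*kB*T / (2*NA*e^2*I)), with I converted
    from mol/L to mol/m^3."""
    eps0, epsr = 8.8541878128e-12, 78.4          # vacuum and water permittivity
    kB, NA, e = 1.380649e-23, 6.02214076e23, 1.602176634e-19
    I = ionic_strength_M * 1000.0                # mol/L -> mol/m^3
    lam = math.sqrt(eps0 * epsr * kB * T / (2.0 * NA * e**2 * I))
    return lam * 1e9

print(round(debye_length_nm(0.1), 2))  # 0.96 nm at 100 mM salt
```

At physiological ionic strength (~150 mM) the screening length is below 1 nm, which rationalizes the reduced translocation at higher ionic strength.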
Pignatello, R.; Musumeci, T.; Basile, L.; Carbone, C.; Puglisi, G.
2011-01-01
After its systemic administration, a drug comes into contact with many different biological membranes along its path through the body. From circulating macrophages to the vessel endothelium to more complex absorption barriers, the interaction of a biomolecule with these membranes largely determines its rate and time of biodistribution in the body and at the target sites. Therefore, investigating the phenomena occurring at cell membranes, as well as how their interactions with drugs differ between physiological and pathological conditions, is important for exploring the molecular basis of many diseases and for identifying new potential therapeutic strategies. Of course, the complexity of the structure and functions of biological and cell membranes has pushed researchers toward the proposition and validation of simpler two- and three-dimensional membrane models, whose utility and drawbacks will be discussed. This review also describes the analytical methods used to examine the interactions of bioactive compounds with biological membrane models, with particular emphasis on calorimetric techniques. These studies can be considered a powerful tool for medicinal chemistry and pharmaceutical technology in designing new drugs and in optimizing the activity and safety profile of compounds already used in therapy. PMID:21430952
Interaction of α-Hexylcinnamaldehyde with a Biomembrane Model: A Possible MDR Reversal Mechanism.
Sarpietro, Maria Grazia; Di Sotto, Antonella; Accolla, Maria Lorena; Castelli, Francesco
2015-05-22
The ability of the naturally derived compound α-hexylcinnamaldehyde (1) to interact with biomembranes and to modulate their permeability has been investigated as a strategy to reverse multidrug resistance (MDR) in cancer cells. Dimyristoylphosphatidylcholine (DMPC) multilamellar vesicles (MLVs) were used as biomembrane models, and differential scanning calorimetry was applied to measure the effect of 1 on the thermotropic behavior of DMPC MLVs. The effect of an aqueous medium or a lipid carrier on the uptake of 1 by the biomembrane was also characterized. Furthermore, taking into account that MDR is strictly regulated by redox signaling, the pro-oxidant and/or antioxidant effects of 1 were evaluated by the crocin-bleaching assay in both hydrophilic and lipophilic environments. Compound 1 was uniformly distributed in the phospholipid bilayers and interacted deeply with DMPC MLVs, intercalating among the phospholipid acyl chains and thus decreasing their cooperativity. The lipophilic medium allowed the absorption of 1 into the phospholipid membrane. In the crocin-bleaching assay, the substance produced no pro-oxidant effects in either hydrophilic or lipophilic environments; conversely, it significantly inhibited AAPH-induced oxidation in the hydrophilic medium. These results suggest a possible role of 1 as a chemopreventive and chemosensitizing agent for fighting cancer.
Effect of tetracaine on DMPC and DMPC+cholesterol biomembrane models: liposomes and monolayers.
Serro, A P; Galante, R; Kozica, A; Paradiso, P; da Silva, A M P S Gonçalves; Luzyanin, K V; Fernandes, A C; Saramago, B
2014-04-01
Different types of lipid bilayers/monolayers have been used to simulate the cellular membranes in the investigation of the interactions between drugs and cells. However, to our knowledge, very few studies have focused on the influence of the chosen membrane model upon the obtained results. The main objective of this work is to understand how the nature and immobilization state of the biomembrane models influence the action of the local anaesthetic tetracaine (TTC) upon the lipid membranes. The interaction of TTC with different biomembrane models of dimyristoylphosphatidylcholine (DMPC) with and without cholesterol (CHOL) was investigated through several techniques. A quartz crystal microbalance with dissipation (QCM-D) was used to study the effect on immobilized liposomes, while phosphorus nuclear magnetic resonance ((31)P-NMR) and differential scanning calorimetry (DSC) were applied to liposomes in suspension. The effect of TTC on Langmuir monolayers of lipids was also investigated through surface pressure-area measurements at the air-water interface. The general conclusion was that TTC has a fluidizing effect on the lipid membranes and, above certain concentrations, induces membrane swelling or even solubilization. However, different models led to variable responses to the TTC action. The intensity of the disordering effect caused by TTC increased in the following order: supported liposomes
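For the QCM-D measurements mentioned above, areal mass is conventionally estimated from the frequency shift with the Sauerbrey relation, valid only for thin, rigid, low-dissipation films. The numbers below are illustrative, not data from the study:

```python
def sauerbrey_mass(delta_f_hz, overtone=3, C=17.7):
    """Areal mass (ng/cm^2) from a QCM-D frequency shift via the
    Sauerbrey relation, delta_m = -C * delta_f / n, where
    C = 17.7 ng/(cm^2 Hz) for a 5 MHz crystal and n is the overtone."""
    return -C * delta_f_hz / overtone

# A hypothetical -25 Hz shift at the third overtone
print(sauerbrey_mass(-25.0, overtone=3))  # 147.5 ng/cm^2
```

When TTC induces swelling or solubilization, dissipation rises and the rigid-film assumption breaks down, which is one reason immobilized and suspended liposomes can give different readouts.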
Metal transport across biomembranes: emerging models for a distinct chemistry.
Argüello, José M; Raimunda, Daniel; González-Guerrero, Manuel
2012-04-20
Transition metals are essential components of important biomolecules, and their homeostasis is central to many life processes. Transmembrane transporters are key elements controlling the distribution of metals in various compartments. However, due to their chemical properties, transition elements require transporters with different structural-functional characteristics from those of alkali and alkali earth ions. Emerging structural information and functional studies have revealed distinctive features of metal transport. Among these are the relevance of multifaceted events involving metal transfer among participating proteins, the importance of coordination geometry at transmembrane transport sites, and the presence of the largely irreversible steps associated with vectorial transport. Here, we discuss how these characteristics shape novel transition metal ion transport models.
Sarpietro, Maria G; Accolla, Maria L; Santoro, Nancy; Mansfeld, Friederike M; Pignatello, Rosario; Toth, Istvan; Castelli, Francesco
2014-05-01
The interaction between an amphiphilic luteinizing hormone-releasing hormone (LHRH) prodrug incorporating a lipoamino acid moiety (C12-LAA) and biological membrane models consisting of multilamellar liposomes (MLVs) and phospholipid monolayers was studied using differential scanning calorimetry (DSC) and Langmuir-Blodgett film techniques. The effect of the prodrug on the lipid layers was compared with the results obtained with the pure precursors, LHRH and C12-LAA. Conjugation of LHRH with the LAA promoiety was shown to improve the peptide's interaction with biomembrane models. Based on the calorimetric findings, the LAA moiety aided the transfer of the prodrug from an aqueous solution to the biomembrane model.
NASA Astrophysics Data System (ADS)
Alonso, Sergio; Bär, Markus
2010-12-01
Proteins in living cells interact with membranes. They may bind to the membrane or unbind from it into the cytosol, depending on the lipid composition of the membrane and their interaction with cytosolic enzymes. Moreover, proteins can accumulate at the membrane and assemble into spatial domains. Here, a simple model of protein cycling at biomembranes is studied in which the total number of proteins is conserved. Specifically, we consider the spatio-temporal dynamics of MARCKS proteins and their interactions with enzymes facilitating translocation from and rebinding to the membrane. The model exhibits two qualitatively different mechanisms of protein domain formation: phase separation related to a long-wave instability of a membrane state with homogeneous protein coverage, and stable coexistence of two states with different homogeneous protein coverage in bistable media. We evaluate the impact of the cytosolic volume on the occurrence of protein pattern formation by simulations in a three-dimensional model. We show that the explicit treatment of the volume in the model leads to an effective rescaling of the reaction rates. For a simplified model of protein cycling, we can derive analytical expressions for the rescaling coefficients and verify them by direct simulations with the complete three-dimensional model.
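A minimal mass-conserving sketch of such membrane/cytosol protein cycling is a two-field reaction-diffusion model in which the attachment and detachment terms are equal and opposite, so the total protein number is conserved. The parameters below are invented and the kinetics are linear, so this reproduces only the conservation structure of such models, not the pattern-forming instabilities of the paper:

```python
import numpy as np

def step(m, c, dt, dx, Dm, Dc, kon, koff):
    """One explicit Euler step for membrane-bound density m and
    cytosolic density c on a periodic 1D domain. The exchange term
    appears with opposite signs, so sum(m + c) is conserved."""
    lap = lambda u: (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    exchange = kon * c - koff * m
    return (m + dt * (Dm * lap(m) + exchange),
            c + dt * (Dc * lap(c) - exchange))

rng = np.random.default_rng(1)
m = 0.5 + 0.01 * rng.standard_normal(128)   # perturbed membrane coverage
c = 0.5 * np.ones(128)                      # uniform cytosolic pool
total0 = (m + c).sum()
for _ in range(2000):
    m, c = step(m, c, dt=0.01, dx=1.0, Dm=0.1, Dc=1.0, kon=0.2, koff=0.2)
print(abs((m + c).sum() - total0) < 1e-9)  # True: total protein conserved
```

Domain formation in the full model would come from nonlinear (e.g., bistable) exchange kinetics and the coupling to cytosolic volume discussed in the abstract.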
Idowu, Sunday Olakunle; Adeyemo, Morenikeji Ambali; Ogbonna, Udochi Ihechiluru
2009-01-01
Background: Determination of lipophilicity as a tool for predicting pharmacokinetic molecular behavior is limited by the predictive power of available experimental models of the biomembrane. There is current interest, therefore, in models that accurately simulate the biomembrane structure and function. A novel bio-device, a lipid thin film, was engineered as an alternative to the previous use of hydrocarbon thin films in biomembrane modeling. Results: Retention behavior of four structurally diverse model compounds, 4-amino-3,5-dinitrobenzoic acid (ADBA), naproxen (NPX), nabumetone (NBT) and halofantrine (HF), representing four broad classes of varying molecular polarity and aqueous solubility behavior, was investigated on the lipid film, liquid paraffin, and octadecylsilane layers. Computational, thermodynamic and image analyses confirm the peculiar amphiphilic configuration of the lipid film. The effects of solute type, layer type and variable interactions on retention behavior were delineated by two-way analysis of variance (ANOVA) and quantitative structure-property relationships (QSPR). Validation of the lipid film was implemented by statistical correlation of a unique chromatographic metric with Log P (octanol/water) and several calculated molecular descriptors of bulk and solubility properties. Conclusion: The lipid film constitutes a biomimetic artificial biological interface capable of both hydrophobic and specific electrostatic interactions. It captures the hydrophilic-lipophilic balance (HLB) in the determination of the lipophilicity of molecules, unlike the pure hydrocarbon film of the prior art. The potential and performance of the bio-device hold promise for its utility as a predictive analytical tool in early-stage drug discovery. PMID:19735551
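Validation "by statistical correlation" of a chromatographic metric with Log P reduces to computing a correlation coefficient. The sketch below uses an invented retention metric and approximate literature Log P values for the four probes, purely to illustrate the computation:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Approximate Log P for ADBA, NPX, NBT, HF and a hypothetical retention metric
log_p  = [0.5, 3.3, 3.4, 8.5]      # illustrative values only
metric = [0.12, 0.55, 0.60, 1.40]  # invented chromatographic metric
print(pearson_r(log_p, metric))
```

A high r against Log P alone would indicate a purely hydrophobic layer; the paper's point is that the lipid film also correlates with descriptors capturing electrostatic/HLB behavior.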
NASA Astrophysics Data System (ADS)
Rojo, N.; Muñoz, M.; Pujol, M.; Alsina, M. A.; Haro, I.
2004-12-01
In this work we studied by DSC the interaction of three antigenic overlapping peptides belonging to the E2 envelope protein of Hepatitis G virus, namely E2(39-53), E2(32-53) and E2(26-53), with liposomes of different lipid composition (DPPC, DMPC and DMPG) as biomembrane models. The effect of the three selected peptides on the thermotropic behaviour of lipid bilayers has been evaluated.
Nanodomains in Biomembranes with Recycling.
Berger, Mareike; Manghi, Manoel; Destainville, Nicolas
2016-10-13
Cell membranes are out of thermodynamic equilibrium notably because of membrane recycling, i.e., active exchange of material with the cytosol. We propose an analytically tractable model of a biomembrane that predicts the effects of recycling on the size of protein nanodomains, also called protein clusters. The model includes a short-range attraction between proteins and a weaker long-range repulsion which ensures the existence of so-called cluster phases in equilibrium, where monomeric proteins coexist with finite-size domains. Our main finding is that, when recycling is taken into account, the typical cluster size at steady state increases logarithmically with the recycling rate at fixed protein concentration. Using physically realistic model parameters, the predicted 2-fold increase due to recycling in living cells is most likely experimentally measurable with the help of super-resolution microscopy.
Sakai, Naomi; Mareda, Jiri; Matile, Stefan
2005-02-01
Synthetic ion channels and pores formed by rigid-rod molecules are summarized. This includes work on hydrogen-bonded chains installed along membrane-spanning rigid-rod scaffolds to transport protons. As a second topic, programmed assembly of p-septiphenyls with terminal iminodiacetate-copper complexes for potassium transport by cation-pi interactions is described. The third topic concerns rigid push-pull rods as fluorescent alpha-helix mimics to probe the importance of dipole-potential interactions for voltage gating, on both the functional and the structural level. Topic number four deals with p-octiphenyl staves as key scaffolds for the synthesis of rigid-rod beta-barrel pores. The description of internal and external design strategies for these rigid-rod beta-barrels covers a rich collection of pH-, pM-, voltage-, ligand-, and enzyme-gated synthetic multifunctional pores that can act as hosts, sensors, and catalysts. As far as practical applications are concerned, the possibility of detecting chemical reactions with synthetic multifunctional pores appears most attractive. Recent molecular mechanics simulations are presented as a valuable approach to gaining insight into the elusive suprastructures of multifunctional pores made from rigid rods.
Travelling lipid domains in a dynamic model for protein-induced pattern formation in biomembranes
NASA Astrophysics Data System (ADS)
John, Karin; Bär, Markus
2005-06-01
Cell membranes are composed of a mixture of lipids. Many biological processes require the formation of spatial domains in the lipid distribution of the plasma membrane. We have developed a mathematical model that describes the dynamic spatial distribution of acidic lipids in response to the presence of GMC proteins and regulating enzymes. The model encompasses diffusion of lipids and GMC proteins, electrostatic attraction between acidic lipids and GMC proteins, as well as the kinetics of membrane attachment/detachment of GMC proteins. If the lipid-protein interaction is strong enough, phase separation occurs in the membrane as a result of free energy minimization, and protein/lipid domains are formed. The picture changes if a constant activity of enzymes is included in the model. We chose the myristoyl-electrostatic switch as a regulatory module. It consists of a protein kinase C that phosphorylates and removes the GMC proteins from the membrane and a phosphatase that dephosphorylates the proteins and enables them to rebind to the membrane. For sufficiently high enzymatic activity, the phase separation is replaced by travelling domains of acidic lipids and proteins. The latter active process is typical for nonequilibrium systems. It allows for a faster restructuring and polarization of the membrane since it acts on a larger length scale than the passive phase separation. The travelling domains can be pinned by spatial gradients in the activity; thus the membrane is able to detect spatial cues and can adapt its polarity dynamically to changes in the environment.
Neves, Ana Rute; Nunes, Cláudia; Reis, Salette
2015-09-03
Resveratrol has been widely studied because of its pleiotropic effects in cancer therapy, neuroprotection, and cardioprotection. It is believed that the interaction of resveratrol with biological membranes may play a key role in its therapeutic activity. The capacity of resveratrol to partition into lipid bilayers, its possible location within the membrane, and the influence of this compound on membrane fluidity were investigated using membrane mimetic systems composed of egg l-α-phosphatidylcholine (EPC), cholesterol (CHOL), and sphingomyelin (SM). The results showed that resveratrol has greater affinity for the EPC bilayers than for the EPC:CHOL [4:1] and EPC:CHOL:SM [1:1:1] membrane models. The increased difficulty in penetrating tightly packed membranes is also demonstrated by fluorescence quenching of probes and by fluorescence anisotropy measurements. Resveratrol may be involved in the regulation of cell membrane fluidity, thereby contributing to cell homeostasis.
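Fluorescence quenching data of the kind used in such partitioning studies are commonly analyzed with the Stern-Volmer equation, F0/F = 1 + K_SV[Q]. The data below are synthetic, generated with an assumed K_SV, just to show the fit:

```python
def stern_volmer_ksv(F0, F_values, Q_values):
    """Least-squares slope through the origin of (F0/F - 1) vs [Q],
    i.e. the Stern-Volmer constant K_SV in F0/F = 1 + K_SV*[Q]."""
    y = [F0 / F - 1.0 for F in F_values]
    return (sum(q * yi for q, yi in zip(Q_values, y))
            / sum(q * q for q in Q_values))

# Synthetic quenching series generated with K_SV = 20 M^-1 (illustrative)
Q = [0.00, 0.05, 0.10, 0.15, 0.20]   # quencher concentration, M
F0 = 100.0
F = [F0 / (1.0 + 20.0 * q) for q in Q]
print(round(stern_volmer_ksv(F0, F, Q), 2))  # 20.0
```

In membrane work, comparing K_SV for probes located at different bilayer depths is what reports on where a compound such as resveratrol resides.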
The SPASIBA force field of model compounds related to lipids of biomembranes
NASA Astrophysics Data System (ADS)
Chhiba, M.; Vergoten, G.
1996-10-01
The vibrational spectroscopic force field SPASIBA (Spectroscopic Potential Algorithm for SImulating Biomolecular conformational Adaptability), which has been shown to exhibit unique properties over the whole molecular potential energy surface rather than only in the vicinity of minima (as current force fields do), has been developed for a series of model compounds related to the lipid component of biological membranes. The structures, relative energies and vibrational spectra of several phosphate anions, acetylcholine cations, phosphorylcholine and some of their deuterated analogs have been investigated in detail. In particular, the root mean square deviation of 11 cm⁻¹ between the observed and the calculated vibrational frequencies lends some confidence to the expectation that this force field will give meaningful results when used in molecular dynamics simulations.
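The quoted figure of merit is a root-mean-square deviation between observed and calculated frequencies, which is a one-line computation. The frequency pairs below are invented, chosen merely to give a deviation on the ~11 cm⁻¹ scale quoted:

```python
import math

def rms_deviation(observed, calculated):
    """Root-mean-square deviation between observed and calculated
    vibrational frequencies (cm^-1)."""
    return math.sqrt(sum((o - c) ** 2 for o, c in zip(observed, calculated))
                     / len(observed))

# Invented frequency pairs, just to exercise the metric
obs  = [715.0, 980.0, 1090.0, 1240.0, 1465.0]
calc = [704.0, 992.0, 1079.0, 1252.0, 1454.0]
print(round(rms_deviation(obs, calc), 1))  # 11.4
```

Force-field papers typically report this deviation over all assigned modes of all model compounds, which is what the 11 cm⁻¹ figure summarizes.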
Binding of LL-37 to model biomembranes: insight into target vs host cell recognition.
Sood, Rohit; Domanov, Yegor; Pietiäinen, Milla; Kontinen, Vesa P; Kinnunen, Paavo K J
2008-04-01
Pursuing the molecular mechanisms of the concentration dependent cytotoxic and hemolytic effects of the human antimicrobial peptide LL-37 on cells, we investigated the interactions of this peptide with lipids using different model membranes, together with fluorescence spectroscopy for the Trp-containing mutant LL-37(F27W). Minimum concentrations inhibiting bacterial growth and lipid interactions assessed by dynamic light scattering and monolayer penetration revealed the mutant to retain the characteristics of native LL-37. Although both LL-37 and the mutant intercalated effectively into zwitterionic phosphatidylcholine membranes the presence of acidic phospholipids caused augmented membrane binding. Interestingly, strongly attenuated intercalation of LL-37 into membranes containing both cholesterol and sphingomyelin (both at X=0.3) was observed. Accordingly, the distinction between target and host cells by LL-37 is likely to derive from i) acidic phospholipids causing enhanced association with the former cells as well as ii) from attenuated interactions with the outer surface of the plasma membrane of the peptide secreting host, imposed by its high content of cholesterol and sphingomyelin. Our results further suggest that LL-37 may exert its antimicrobial effects by compromising the membrane barrier properties of the target microbes by a mechanism involving cytotoxic oligomers, similarly to other peptides forming amyloid-like fibers in the presence of acidic phospholipids.
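Peptide-membrane binding of the kind measured here is often summarized by a mole-fraction partition coefficient Kx; in the White-Wimley convention the bound fraction follows from Kx and the accessible lipid concentration. The Kx value below is assumed for illustration only (anionic membranes would show a larger effective Kx, cholesterol/sphingomyelin-rich ones a smaller one):

```python
def fraction_bound(Kx, lipid_M, water_M=55.3):
    """Fraction of peptide partitioned into the bilayer, given a
    mole-fraction partition coefficient Kx and the accessible lipid
    concentration (White-Wimley convention, [water] = 55.3 M)."""
    return Kx * lipid_M / (water_M + Kx * lipid_M)

# Assumed Kx = 1e5 over a typical lipid titration range
for lipid in (1e-4, 1e-3, 1e-2):
    print(round(fraction_bound(1e5, lipid), 3))  # 0.153, 0.644, 0.948
```

Fitting such a curve to a fluorescence titration (e.g., of the Trp in LL-37(F27W)) yields Kx and hence the free energy of partitioning.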
Ahlers, M; Grainger, D W; Herron, J N; Lim, K; Ringsdorf, H; Salesse, C
1992-01-01
Three model biomembrane systems, monolayers, micelles, and vesicles, have been used to study the influence of chemical and physical variables of hapten presentation at membrane interfaces on antibody binding. Hapten recognition and binding were monitored for the anti-fluorescein monoclonal antibody 4-4-20 generated against the hapten, fluorescein, in these membrane models as a function of fluorescein-conjugated lipid architecture. Specific recognition and binding in this system are conveniently monitored by quenching of fluorescein emission upon penetration of fluorescein into the antibody's active site. Lipid structure was shown to play a large role in affecting antibody quenching. Interestingly, the observed degrees of quenching were nearly independent of the lipid membrane model studied, but directly correlated with the chemical structure of the lipids. In all cases, the antibody recognized and quenched most efficiently a lipid based on dioctadecylamine where fluorescein is attached to the headgroup via a long, flexible hydrophilic spacer. Dipalmitoyl phosphatidylethanolamine containing a fluorescein headgroup demonstrated only partial binding/quenching. Egg phosphatidylethanolamine with a fluorescein headgroup showed no susceptibility to antibody recognition, binding, or quenching. Formation of two-dimensional protein domains upon antibody binding to the fluorescein-lipids in monolayers is also presented. Chemical and physical requirements for these antibody-hapten complexes at membrane surfaces have been discussed in terms of molecular dynamics simulations based on recent crystallographic models for this antibody-hapten complex (Herron et al., 1989. Proteins Struct. Funct. Genet. 5:271-280). PMID:1420916
Biomembrane and receptor mechanisms
Chapman, D.; Bertoli, E.
1987-01-01
This book covers reviews on biomembrane dynamics and recent spectroscopic studies. Topics covered are: freeze fracture: seeing and thinking about biological membranes; membrane proteins and receptors: structure and organisation; techniques to determine the transbilayer distribution and mobility of phospholipids in biological membranes; transbilayer organisation of phospholipids in the plasma membranes of pro-erythroblasts and of normal and abnormal red cells; aminophospholipid translocation in the erythrocyte membrane is mediated by a specific ATP-dependent enzyme; membrane protein interactions; lipid-protein interactions: selectivity and receptor binding; membrane fluidity in the regulation of membrane-linked enzymes; the lipid regulation of receptor functions; microheterogeneity of biological membranes: structural and functional implications; fusion-fission reactions in biological membranes and in phospholipid bilayers; methods for studying the structure and function of the mitochondrial uncoupling protein; methods for studying metabolite transport in mitochondria; transport of metabolites in mitochondria; membrane gangliosides and allied glycosphingolipids: biochemical features and physicochemical properties; the use of merocyanine 540 for monitoring aggregation properties of sialogangliosides in solution; hormone reception at the cell surface - an overview; a double role for GTP in the stimulus-secretion sequence of mast cells and neutrophils; tumor promoters and hormone receptor coupling mechanisms in the anterior pituitary; the regulation of hormone-dependent adenylate cyclase in native membranes and in systems reconstituted from purified components; and immunological tools for the study of plasma membrane receptors.
Andreani, Tatiana; Miziara, Leonardo; Lorenzón, Esteban N; de Souza, Ana Luiza R; Kiill, Charlene P; Fangueiro, Joana F; Garcia, Maria L; Gremião, Palmira D; Silva, Amélia M; Souto, Eliana B
2015-06-01
The present paper focuses on the development and characterization of silica nanoparticles (SiNP) coated with hydrophilic polymers as mucoadhesive carriers for oral administration of insulin. SiNP were prepared by sol-gel technology under mild conditions and coated with different hydrophilic polymers, namely chitosan, sodium alginate, or poly(ethylene glycol) (PEG) of low and high molecular weight (PEG 6000 and PEG 20000), to increase the residence time at the intestinal mucosa. The mean size and size distribution, association efficiency, insulin structure, and insulin thermal denaturation have been determined. The mean nanoparticle diameter ranged from 289 nm to 625 nm, with a polydispersity index (PI) between 0.251 and 0.580. The insulin association efficiency in SiNP was above 70%. After coating, the association efficiency of insulin increased up to 90%, showing the high affinity of the protein for the hydrophilic polymer chains. Circular dichroism (CD) indicated that no conformational changes of the insulin structure occurred after loading the peptide into SiNP. Nano-differential scanning calorimetry (nDSC) showed that SiNP shifted the insulin endothermic peak to higher temperatures. The influence of coating on the interaction of nanoparticles with dipalmitoylphosphatidylcholine (DPPC) biomembrane models was also evaluated by nDSC. The increase of ΔH values suggested a strong association of non-coated SiNP and of PEGylated nanoparticles with DPPC polar heads, by forming hydrogen bonds and/or by electrostatic interaction. The mucoadhesive properties of the nanoparticles were examined by studying their interaction with mucin in aqueous solution. SiNP coated with alginate or chitosan showed high contact with mucin. On the other hand, non-coated SiNP and PEGylated SiNP showed lower interaction with mucin, indicating that these nanoparticles can interdiffuse across the mucus network. The results of the present work provide valuable data in assessing the in vitro performance of insulin
Intelligent biomembrane obtained by irradiation techniques
NASA Astrophysics Data System (ADS)
Kaetsu, Isao; Uchida, Kumao; Sutani, Kouichi; Sakata, Shoei
2000-03-01
An intelligent biomembrane for environment-responsive feedback release has been developed using radiation techniques. Various fine-porous base membranes (polyester, polycarbonate, silicon) were prepared by hole-fabrication techniques using excimer-laser, ion-beam etching, and photolithography etching. Various monomeric mixtures of stimuli-sensitive hydrogels, with or without immobilized enzymes, were then coated and polymerized on the porous membranes by UV, γ-ray, or electron beam. The product showed intelligent feedback release of a model substance (methylene blue) in response to on-off switching of signals such as pH changes and application of an electric field. The responsiveness was remarkably improved by radiation-induced IPN (interpenetrating polymer network) formation. Intelligent release controlled by a computer program was also studied and demonstrated.
Statistical Thermodynamics of Biomembranes
Devireddy, Ram V.
2010-01-01
An overview of the major issues involved in the statistical thermodynamic treatment of phospholipid membranes at the atomistic level is summarized: thermodynamic ensembles, initial configuration (or the physical system being modeled), force field representation, as well as the representation of long-range interactions. This is followed by a description of the various ways that the simulated ensembles can be analyzed: area of the lipid, mass density profiles, radial distribution functions (RDFs), water orientation profile, deuterium order parameter, free energy profiles, and void (pore) formation; with particular focus on the results obtained from our recent molecular dynamics (MD) simulations of phospholipids interacting with dimethylsulfoxide (Me2SO), a commonly used cryoprotective agent (CPA). PMID:19460363
Mrázková, E; Hobza, P; Bohl, M; Gauger, D R; Pohle, W
2005-08-11
The chemical characteristics of the polar parts of phospholipids, the main components of biological membranes, were investigated by using infrared (IR) spectroscopy and theoretical calculations with water as a probe molecule. The key molecule used in this study is methylphosphocholine (MePC), as it is not only a representative model for a polar lipid headgroup but itself has biological significance. Isolated MePC forms a compact (folded) structure which is essentially stabilized by two intramolecular C-H...O type hydrogen bonds. At low hydration, considerable wavenumber shifts were revealed by IR spectroscopy: the frequencies of the (O-P-O)- stretches were strongly redshifted, whereas the methyl and methylene C-H stretches surprisingly shifted to blue. The origin of both red- and blueshifts was rationalized on the basis of molecular-dynamics and quantum-chemistry calculations. In more detail, the hydration-induced blueshifts of the C-H stretches could be shown to arise from two origins: disruption of the intramolecular C-H...O hydrogen bonds, and formation of intermolecular C-H...O(water) H-bonds. The stepwise disruption of the intramolecular hydrogen bonds appeared to be the main feature causing partial unfolding of the compact structure. However, the transition from a folded to an extended MePC structure was completed only at high hydration. One might hypothesize that the mechanism of hydration-driven conformational changes described here for MePC could be transferred to other zwitterions with relevant internal C-H...O hydrogen bonds.
Horobin, R W; Stockert, J C; Rashid-Doubell, F
2015-05-01
We discuss a variety of biological targets including generic biomembranes and the membranes of the endoplasmic reticulum, endosomes/lysosomes, Golgi body, mitochondria (outer and inner membranes) and the plasma membrane of usual fluidity. For each target, we discuss the access of probes to the target membrane, probe uptake into the membrane and the mechanism of selectivity of the probe uptake. A statement of the QSAR decision rule that describes the required physicochemical features of probes that enable selective staining also is provided, followed by comments on exceptions and limits. Examples of probes typically used to demonstrate each target structure are noted and decision rule tabulations are provided for probes that localize in particular targets; these tabulations show distribution of probes in the conceptual space defined by the relevant structure parameters ("parameter space"). Some general implications and limitations of the QSAR models for probe targeting are discussed including the roles of certain cell and protocol factors that play significant roles in lipid staining. A case example illustrates the predictive ability of QSAR models. Key limiting values of the head group hydrophilicity parameter associated with membrane-probe interactions are discussed in an appendix.
Impedance Analysis of Surface-Bound Biomembranes
1990-06-08
Keywords: AC impedance, biomembranes, lipid, electrodes, biosensor. ... Institute, 57 Union St., Worcester, MA 01608. The impedance of different electrode substrates ... biomembrane structures formed ... (TiO2), indium/tin oxide (ITO) and platinum electrodes that have been "primed" by covalent attachment of long-chained alkyl groups. The electrodes were
On soliton propagation in biomembranes and nerves.
Heimburg, Thomas; Jackson, Andrew D
2005-07-12
The lipids of biological membranes and intact biomembranes display chain melting transitions close to temperatures of physiological interest. During this transition the heat capacity, volume and area compressibilities, and relaxation times all reach maxima. Compressibilities are thus nonlinear functions of temperature and pressure in the vicinity of the melting transition, and we show that this feature leads to the possibility of soliton propagation in such membranes. In particular, if the membrane state is above the melting transition, solitons will involve changes in lipid state. We discuss solitons in the context of several striking properties of nerve membranes under the influence of the action potential, including mechanical dislocations and temperature changes.
Biomembranes research using thermal and cold neutrons
Heberle, Frederick A.; Myles, Dean A. A.; Katsaras, John
2015-08-01
In 1932 James Chadwick discovered the neutron using a polonium source and a beryllium target (Chadwick, 1932). In a letter to Niels Bohr dated February 24, 1932, Chadwick wrote: “whatever the radiation from Be may be, it has most remarkable properties.” Where it concerns hydrogen-rich biological materials, the “most remarkable” property is the neutron’s differential sensitivity for hydrogen and its isotope deuterium. Such differential sensitivity is unique to neutron scattering, which unlike X-ray scattering, arises from nuclear forces. Consequently, the coherent neutron scattering length can experience a dramatic change in magnitude and phase as a result of resonance scattering, imparting sensitivity to both light and heavy atoms, and in favorable cases to their isotopic variants. This article describes recent biomembranes research using a variety of neutron scattering techniques.
Biomembranes research using thermal and cold neutrons.
Heberle, F A; Myles, D A A; Katsaras, J
2015-11-01
In 1932 James Chadwick discovered the neutron using a polonium source and a beryllium target (Chadwick, 1932). In a letter to Niels Bohr dated February 24, 1932, Chadwick wrote: "whatever the radiation from Be may be, it has most remarkable properties." Where it concerns hydrogen-rich biological materials, the "most remarkable" property is the neutron's differential sensitivity for hydrogen and its isotope deuterium. Such differential sensitivity is unique to neutron scattering, which unlike X-ray scattering, arises from nuclear forces. Consequently, the coherent neutron scattering length can experience a dramatic change in magnitude and phase as a result of resonance scattering, imparting sensitivity to both light and heavy atoms, and in favorable cases to their isotopic variants. This article describes recent biomembranes research using a variety of neutron scattering techniques.
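The H/D sensitivity described in this abstract can be illustrated with a back-of-the-envelope scattering length density (SLD) calculation; the sketch below is not from the article, and the coherent scattering lengths and water molecular volume used are standard tabulated/approximate values.

```python
# Illustrative neutron scattering length density (SLD) calculation showing
# why H/D substitution gives strong contrast in biomembrane experiments.
# Coherent scattering lengths (fm) are standard tabulated values:
#   b_H = -3.739, b_D = +6.671, b_O = +5.803
# The water molecular volume of ~30 A^3 is an approximation.

B = {"H": -3.739, "D": 6.671, "O": 5.803}  # coherent scattering lengths, fm

def sld(atoms, volume_A3):
    """SLD in units of 1e-6 A^-2: sum of scattering lengths over molecular
    volume. 1 fm = 1e-5 A, so SLD[A^-2] = sum(b)[fm] * 1e-5 / V[A^3]."""
    total_b = sum(B[a] for a in atoms)       # fm
    return total_b * 1e-5 / volume_A3 * 1e6  # reported in 1e-6 A^-2

sld_h2o = sld(["H", "H", "O"], 30.0)  # light water: slightly negative SLD
sld_d2o = sld(["D", "D", "O"], 30.0)  # heavy water: strongly positive SLD
```

The sign flip and roughly 7e-6 A^-2 spread between H2O and D2O is what lets contrast-variation experiments selectively highlight or hide hydrogenous membrane components.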
The decreasing of corn root biomembrane penetration for acetochlor with vermicompost amendment
NASA Astrophysics Data System (ADS)
Sytnyk, Svitlana; Wiche, Oliver
2016-04-01
One topical environmental security issue is the management and control of the use and disposal of anthropogenic (artificially synthesized) chemical agents. Development of protection systems against the toxic effects of herbicides should be based on studies of the biological indication mechanisms that identify stressor effects in organisms. Lipid degradation is a non-specific reaction to exogenous chemical agents, so it is important to study the responses of lipid components depending on the stressor type. We studied physiological and biochemical characteristics of lipid metabolism under the action of herbicides of the chloroacetamide group. Corn at different stages of ontogenesis was used as the test object in model laboratory and microfield experiments. Cattle manure treated with the earthworm Eisenia fetida was used as a compost fertilizer added to the chernozem (black soil)-corn system. Acetochlor was found to cause the following effects: decreased content of sterols, phospholipids, phosphatidylcholines, and phosphatidylethanolamines; an increased pool of free fatty acids and phosphatidic acids, associated with intensified hydrolysis; stimulation of lipase activity at low stressor concentrations; inhibition of lipase activity at high stressor levels; a decrease in polyenoic free fatty acids, indicating biomembrane degradation; accumulation of phospholipid degradation products (phosphatidic acids); decreased concentrations of high-molecular compounds (phosphatidylcholine and phosphatidylinositol); and a change in the ratio of unsaturated to saturated free fatty acids in biomembrane structures. Incorporation of vermicompost at a dose of 0.4 kg/m2 into the black soil led to restoration of corn root biomembranes, and a decrease in root biomembrane penetration by acetochlor was recorded in the vermicompost trial. A second antidote effect of the compost substances is activation of soil microorganisms.
Goto, Thiago E; Lopes, Carla C; Nader, Helena B; Silva, Anielle C A; Dantas, Noelio O; Siqueira, José R; Caseli, Luciano
2016-07-01
Cadmium selenide (CdSe) magic-sized quantum dots (MSQDs) are semiconductor nanocrystals with stable luminescence that are feasible for biomedical applications, especially for in vivo and in vitro imaging of tumor cells. In this work, we investigated the specific interaction of CdSe MSQDs with tumorigenic and non-tumorigenic cells using Langmuir monolayers and Langmuir-Blodgett (LB) films of lipids as membrane models for diagnosis of cancerous cells. Surface pressure-area isotherms and polarization modulation reflection-absorption spectroscopy (PM-IRRAS) showed an intrinsic interaction between the quantum dots, inserted in the aqueous subphase, and Langmuir monolayers constituted either of selected lipids or of tumorigenic and non-tumorigenic cell extracts. The films were transferred to solid supports to obtain microscopic images, providing information on their morphology. Similarity between films with different compositions representing cell membranes, with or without the quantum dots, was evaluated by atomic force microscopy (AFM) and confocal microscopy. This study demonstrates that the affinity of quantum dots for models representing cancer cells permits the use of these systems as devices for cancer diagnosis.
Performance of skeleton-reinforced biomembranes in locomotion
NASA Astrophysics Data System (ADS)
Zhu, Qiang; Shoele, Kourosh
2008-11-01
Skeleton-reinforced biomembranes are ubiquitous in nature and play critical roles in many biological functions. Representative examples include insect wings, cell membranes, and mollusk nacres. In this study we focus on the ray fins of fish and investigate the effects of anisotropic flexibility on their performance. Employing a fluid-structure interaction algorithm by coupling a boundary-element model with a nonlinear structural model, we examined the dynamics of a membrane that is geometrically and structurally similar to a caudal fin. Several locomotion modes that closely resemble caudal fin kinematics reported in the literature are applied. Our results show that the flexibility of the fin significantly increases its capacity of thrust generation, manifested as increased efficiency, reduced transverse force, and reduced sensitivity to kinematic parameters. This design also makes the fin more controllable and deployable. Despite simplifications made in this model in terms of fin geometry, internal structure, and kinematics, detailed features of the simulated flow field are consistent with observations and speculations based upon Particle Image Velocimetry (PIV) measurements of flow around live fish.
Technology Transfer Automated Retrieval System (TEKTRAN)
Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...
2015-01-01
Complexity and animal models (Sep 2015). ... decrease W/Wmax, thereby maintaining the relationship between variability and W/Wmax. doi:10.1016/j.jcrc.2010.05.012 ... may not be possible during mass casualty and natural disaster situations or may need to be postponed during combat to avoid danger to the medic's life.
Use of inverse theory algorithms in the analysis of biomembrane NMR data.
Sternin, Edward
2007-01-01
Treating the analysis of experimental spectroscopic data as an inverse problem, and using regularization techniques to obtain stable pseudoinverse solutions, gives access to a previously unavailable level of spectroscopic detail. The data are mapped into an appropriate, physically relevant parameter space, leading to better qualitative and quantitative understanding of the underlying physics and, in turn, to better and more detailed models. A brief survey of relevant inverse methods is illustrated by several successful applications to the analysis of nuclear magnetic resonance data, yielding new insight into the structure and dynamics of biomembrane lipids.
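As a minimal illustration of the stabilized-pseudoinverse idea invoked above (not the author's NMR analysis code), a Tikhonov-regularized least-squares solve can be sketched in a few lines; the matrix, data, and damping parameter are invented for the example.

```python
# Illustrative Tikhonov regularization: solve
#   min ||A x - b||^2 + lam * ||x||^2   via   (A^T A + lam I) x = A^T b
# for a small ill-conditioned system, in pure Python. All numbers below are
# invented to demonstrate the stabilization effect.

def solve2(M, v):
    """Solve a 2x2 linear system M x = v by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - v[1] * M[0][1]) / det,
            (M[0][0] * v[1] - M[1][0] * v[0]) / det]

def tikhonov(A, b, lam):
    """Regularized normal equations with ridge term lam*I (2-column A)."""
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) +
            (lam if i == j else 0.0) for j in range(2)] for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    return solve2(AtA, Atb)

# Nearly collinear columns make the unregularized solution blow up on noisy
# data; a small damping parameter keeps the solution stable and physical.
A = [[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]]
b = [2.0, 2.1, 1.9]                 # "noisy" data
x_unreg = tikhonov(A, b, 0.0)       # wildly unstable
x_reg = tikhonov(A, b, 1e-3)        # stable, close to [1, 1]
```

The same mechanism, at much larger scale and with a physically motivated parameter space, underlies the regularized inversions surveyed in the article.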
Sporadic meteoroid complex: Modeling
NASA Astrophysics Data System (ADS)
Andreev, V.
2014-07-01
The distribution of the sporadic meteoroid flux density over the celestial sphere is the common form of representation of the meteoroid distribution in the vicinity of the Earth's orbit. The flux density of sporadic meteor bodies is Q(V,e,f) = Q_0 P_e(V) P(e,f), where V is the meteoroid velocity, (e,f) are the radiant coordinates, Q_0 is the meteoroid flux over the whole celestial sphere, P_e(V) is the conditional velocity distribution, and P(e,f) is the radiant distribution over the celestial sphere. The sporadic meteoroid complex model is analytical and based on heliocentric velocity and radiant distributions. The multi-mode character of the heliocentric velocity and radiant distributions follows from the analysis of meteor observational data. This points to a complicated structure of the sporadic meteoroid complex, a consequence of the plurality of parent bodies and of the origin mechanisms of the meteoroids. For that reason, and with a goal of more accurate modelling of the velocity and radiant distributions, the meteoroid complex was divided into four groups. As the classifying parameters to determine a meteoroid's membership in any group, we adopt the Tisserand invariant relative to Jupiter, T_J = 1/a + 2 a_J^{-3/2} √(a (1 - e²)) cos i, and the meteoroid orbit inclination i. Two meteoroid groups relate to long-period and short-period comets, and one group is related to asteroids. The relationship of the last, fourth group is problematic. We then construct models of radiant and velocity distributions for each group. The analytical model for the whole sporadic meteoroid complex is the sum of the models for each group.
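The classification step can be sketched directly from the Tisserand formula quoted above. Note that the abstract's form appears to be the conventional dimensionless Tisserand parameter divided by a_J, so multiplying by a_J ≈ 5.2 AU recovers the usual scale on which T_J ≈ 3 separates asteroidal from Jupiter-family cometary orbits; the thresholds and function names below are illustrative conventions, not taken from the source.

```python
# Sketch of classifying a meteoroid orbit by the Tisserand invariant relative
# to Jupiter, using the formula as written in the abstract:
#   T_J = 1/a + 2 * a_J**(-3/2) * sqrt(a*(1 - e**2)) * cos(i)
# with semimajor axis a in AU and a_J = 5.2 AU. Thresholds are the
# conventional comet/asteroid boundaries, used here only for illustration.
import math

A_J = 5.2  # Jupiter's semimajor axis, AU

def tisserand(a, e, i_deg):
    """Tisserand invariant as written in the abstract (a in AU, i in degrees)."""
    i = math.radians(i_deg)
    return 1.0 / a + 2.0 * A_J ** -1.5 * math.sqrt(a * (1.0 - e * e)) * math.cos(i)

def group(a, e, i_deg):
    """Assign an orbit to a source population using conventional T_J cuts."""
    t = A_J * tisserand(a, e, i_deg)  # rescale to the conventional T_J
    if t > 3.0:
        return "asteroidal"
    if t > 2.0:
        return "short-period cometary"
    return "long-period cometary"
```

For example, a low-inclination, low-eccentricity orbit inside the main belt classifies as asteroidal, while a retrograde, highly eccentric orbit classifies as long-period cometary.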
Predictive Surface Complexation Modeling
Sverjensky, Dimitri A.
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soil waters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Debating complexity in modeling
Hunt, Randall J.; Zheng, Chunmiao
1999-01-01
As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.
Biomembranes in atomistic and coarse-grained simulations
NASA Astrophysics Data System (ADS)
Pluhackova, Kristyna; Böckmann, Rainer A.
2015-08-01
The architecture of biological membranes is tightly coupled to the localization, organization, and function of membrane proteins. The organelle-specific distribution of lipids allows for the formation of functional microdomains (also called rafts) that facilitate the segregation and aggregation of membrane proteins and thus shape their function. Molecular dynamics simulations give direct access to the formation, structure, and dynamics of membrane microdomains at the molecular scale, and to the specific interactions among lipids and proteins on timescales from picoseconds to microseconds. This review focuses on the latest developments of biomembrane force fields for both atomistic and coarse-grained molecular dynamics (MD) simulations, and on the different levels of coarsening of biomolecular structures. It also briefly introduces scale-bridging methods applicable to biomembrane studies, and highlights selected recent applications.
Response of biomembrane domains to external stimuli
NASA Astrophysics Data System (ADS)
Urbancic, Iztok
To enrich our knowledge about membrane domains, new measurement techniques with extended spatial and temporal windows are being vigorously developed by combining various approaches. Following such efforts of the scientific community, we set up fluorescence microspectroscopy (FMS), bridging two well established methods: fluorescence microscopy, which enables imaging of the samples with spatial resolution down to 200 nm, and fluorescence spectroscopy that provides molecular information of the environment at nanometer and nanosecond scale. The combined method therefore allows us to localize this type of information with the precision suitable for studying various cellular structures. Faced with weak available fluorescence signals, we have put considerable efforts into optimization of measurement processes and analysis of the data. By introducing a novel acquisition scheme and by fitting the data with a mathematical model, we preserved the spectral resolution, characteristic for spectroscopic measurements of bulk samples, also at microscopic level. We have at the same time overcome the effects of photobleaching, which had previously considerably distorted the measured spectral lineshape of photosensitive dyes and consequently hindered the reliability of FMS. Our new approach has therefore greatly extended the range of applicable environmentally sensitive probes, which can now be designed to better accommodate the needs of each particular experiment. Moreover, photobleaching of fluorescence signal can now even be exploited to obtain new valuable information about molecular environment of the probes, as bleaching rates of certain probes also depend on physical and chemical properties of the local surroundings. In this manner we increased the number of available spatially localized spectral parameters, which becomes invaluable when investigating complex biological systems that can only be adequately characterized by several independent variables. Applying the developed
57 Fe Mössbauer probe of spin crossover thin films on a bio-membrane
NASA Astrophysics Data System (ADS)
Naik, Anil D.; Garcia, Yann
2012-03-01
The well-known complex [Fe(ptz)6](BF4)2 (ptz = 1-propyl-tetrazole) (1), produced in the form of submicron crystals and a thin film on an Allium cepa membrane, was probed by 57Fe Mössbauer spectroscopy in order to follow its intrinsic spin crossover. In addition to a weak signal corresponding to the neat SCO compound, significant amounts of other iron compounds were found that could have morphed from 1 due to specific host-guest interactions on the lipid bilayer of the biomembrane. Further complementary information about the biogenic role of the membrane was obtained from variable-temperature Mössbauer spectroscopy on a ~5% enriched [57Fe(H2O)6](BF4)2 salt on this membrane.
Action of the multifunctional peptide BP100 on native biomembranes examined by solid-state NMR.
Misiewicz, Julia; Afonin, Sergii; Grage, Stephan L; van den Berg, Jonas; Strandberg, Erik; Wadhwani, Parvesh; Ulrich, Anne S
2015-04-01
Membrane composition is a key factor that regulates the destructive activity of antimicrobial peptides and the non-leaky permeation of cell penetrating peptides in vivo. Hence, the choice of model membrane is a crucial aspect in NMR studies and should reflect the biological situation as closely as possible. Here, we explore the structure and dynamics of the short multifunctional peptide BP100 using a multinuclear solid-state NMR approach. The membrane alignment and mobility of this 11 amino acid peptide was studied in various synthetic lipid bilayers with different net charge, fluidity, and thickness, as well as in native biomembranes harvested from prokaryotic and eukaryotic cells. (19)F-NMR provided the high sensitivity and lack of natural abundance background that are necessary to observe a labelled peptide even in protoplast membranes from Micrococcus luteus and in erythrocyte ghosts. Six selectively (19)F-labeled BP100 analogues gave remarkably similar spectra in all of the macroscopically oriented membrane systems, which were studied under quasi-native conditions of ambient temperature and full hydration. This similarity suggests that BP100 has the same surface-bound helical structure and high mobility in the different biomembranes and model membranes alike, independent of charge, thickness or cholesterol content of the system. (31)P-NMR spectra of the phospholipid components did not indicate any bilayer perturbation, so the formation of toroidal wormholes or micellarization can be excluded as a mechanism of its antimicrobial or cell penetrating action. However, (2)H-NMR analysis of the acyl chain order parameter profiles showed that BP100 leads to considerable membrane thinning and thereby local destabilization.
Tools for characterizing biomembranes : final LDRD report.
Alam, Todd Michael; Stevens, Mark; Holland, Gregory P.; McIntyre, Sarah K.
2007-10-01
A suite of experimental nuclear magnetic resonance (NMR) spectroscopy tools was developed to investigate lipid structure and dynamics in model membrane systems. By utilizing both multinuclear and multidimensional NMR experiments, a range of different intra- and inter-molecular contacts were probed within the membranes. Examples on pure single-component lipid membranes and on the canonical raft-forming mixture of DOPC/SM/Chol are presented. A unique gel-phase pretransition in SM was also identified and characterized using these NMR techniques. In addition, molecular dynamics of the hydrogen-bonding network unique to sphingomyelin-containing membranes were evaluated as a function of temperature, and are discussed.
Dynamic Tension Spectroscopy and Strength of Biomembranes
Evans, Evan; Heinrich, Volkmar; Ludwig, Florian; Rawicz, Wieslawa
2003-01-01
Rupturing fluid membrane vesicles with a steady ramp of micropipette suction produces a distribution of breakage tensions governed by the kinetic process of membrane failure. When plotted as a function of log(tension loading rate), the locations of distribution peaks define a dynamic tension spectrum with distinct regimes that reflect passage of prominent energy barriers along the kinetic pathway. Using tests on five types of giant phosphatidylcholine lipid vesicles over loading rates (tension/time) from 0.01–100 mN/m/s, we show that the kinetic process of membrane breakage can be modeled by a causal sequence of two thermally-activated transitions. At fast loading rates, a steep linear regime appears in each spectrum which implies that membrane failure starts with nucleation of a rare precursor defect. The slope and projected intercept of this regime are set by defect size and frequency of spontaneous formation, respectively. But at slow loading rates, each spectrum crosses over to a shallow-curved regime where rupture tension changes weakly with rate. This regime is predicted by the classical cavitation theory for opening an unstable hole in a two-dimensional film within the lifetime of the defect state. Under slow loading, membrane edge energy and the frequency scale for thermal fluctuations in hole size are the principal factors that govern the level of tension at failure. To critically test the model and obtain the parameters governing the rates of transition under stress, distributions of rupture tension were computed and matched to the measured histograms through solution of the kinetic master (Markov) equations for defect formation and annihilation or evolution to an unstable hole under a ramp of tension. As key predictors of membrane strength, the results for spontaneous frequencies of defect formation and hole edge energies were found to correlate with membrane thicknesses and elastic bending moduli, respectively. PMID:14507698
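The kinetic picture above (thermally activated failure under a tension ramp) can be sketched with a single-barrier toy model; this is a deliberate simplification of the authors' two-transition cascade, and the rate constant and tension scale below are invented illustrative values.

```python
# Toy single-barrier model of rupture-tension statistics under a tension ramp.
# Bell-like activated rate: k(tau) = k0 * exp(tau / t_s); under a ramp
# tau = r*t, the first-passage density is p(tau) = k(tau)/r * S(tau), with
# survival probability obeying dS/dtau = -k(tau)/r * S.
# k0 (spontaneous rate) and t_s (thermal tension scale) are illustrative.
import math

def rupture_pdf(r, k0=1e-2, t_s=1.0, d_tau=0.01, tau_max=20.0):
    """Return (taus, pdf) of rupture tensions for loading rate r (tension/time)."""
    taus, pdf = [], []
    S, tau = 1.0, 0.0
    while tau < tau_max:
        k = k0 * math.exp(tau / t_s)
        p = (k / r) * S
        taus.append(tau)
        pdf.append(p)
        S = max(S - p * d_tau, 0.0)  # forward-Euler step for dS/dtau
        tau += d_tau
    return taus, pdf

def peak_tension(r):
    """Most probable rupture tension at loading rate r."""
    taus, pdf = rupture_pdf(r)
    return taus[pdf.index(max(pdf))]

# The peak shifts linearly with log(loading rate): analytically,
# tau* = t_s * ln(r / (k0 * t_s)), i.e. the steep fast-loading regime of a
# dynamic tension spectrum. The shallow slow-loading regime requires the
# second (hole-opening) transition, which this sketch omits.
```

Plotting `peak_tension(r)` against `log(r)` reproduces the linear regime described in the abstract for this one-barrier caricature.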
The thermodynamics of simple biomembrane mimetic systems
Raudino, Antonio; Sarpietro, Maria Grazia; Pannuzzo, Martina
2011-01-01
Insight into the forces governing a system is essential for understanding its behavior and function. Thermodynamic investigations provide a wealth of information that is not, or is hardly, available from other methods. This article reviews thermodynamic approaches and assays to measure collective properties such as heat absorption/emission and volume variations. These methods can be successfully applied to the study of lipid vesicles (liposomes) and biological membranes. With respect to instrumentation, differential scanning calorimetry, pressure perturbation calorimetry, isothermal titration calorimetry, dilatometry, and acoustic techniques for measuring isothermal and adiabatic, two- and three-dimensional compressibilities are considered. Applications of these techniques to lipid systems include the measurement of different thermodynamic parameters and a detailed characterization of thermotropic, barotropic, and lyotropic phase behavior. The membrane binding and/or partitioning of solutes (proteins, peptides, drugs, surfactants, ions, etc.) can also be quantified and modeled. Many thermodynamic assays are available for studying the effect of proteins and other additives on membranes, characterizing non-ideal mixing, domain formation, bilayer stability, curvature strain, permeability, solubilization, and fusion. Studies of membrane proteins in lipid environments elucidate lipid-protein interactions in membranes. Finally, a plethora of relaxation phenomena toward equilibrium thermodynamic structures can also be investigated. The systems are described in terms of enthalpic and entropic forces, equilibrium constants, heat capacities, partial volume changes, volume and area compressibility, and so on, also shedding light on the stability of the structures and the molecular origin and mechanism of the structural changes. PMID:21430953
Measured depletion of ions at the biomembrane interface.
Petrache, Horia I; Kimchi, Itamar; Harries, Daniel; Parsegian, V Adrian
2005-08-24
Although expected from theory and simulations, depletion of ions at fuzzy biomembrane interfaces has long eluded experimental measurement. Here, we show how salt exclusion can be quantified by surprisingly simple yet accurate benchtop measurements. Multilamellar aggregates of common phospholipids sink in low salt but float in salt solutions that are much less dense than the lipid itself. By manipulating bath and lipid densities, using heavy water and varied lipid chain length, we obtain accurate exclusion curves over a wide range of KCl and KBr concentrations. While maintaining a constant width at low salt, the exclusion layer decreases in high salt, following the Debye screening length. Consistent with interfacial accumulation of polarizable ions, bromide salts are less excluded than chloride, with an attraction of approximately 2kBT per Br- ion. So far neglected in theoretical descriptions, the competition between salt exclusion and binding is critical to understanding membrane interactions and specific ionic effects.
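The abstract's statement that the exclusion layer tracks the Debye screening length can be illustrated with a short calculation. This is a sketch only: the physical constants are standard, but the function name and default parameters are ours, not the paper's.

```python
import math

# Debye screening length in a 1:1 electrolyte (e.g. KCl) at 25 degrees C:
#   lambda_D = sqrt(eps_r * eps0 * kB * T / (2 * NA * e^2 * I)),
# with ionic strength I in mol/m^3. Constants are CODATA values.
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
KB = 1.380649e-23         # Boltzmann constant, J/K
E = 1.602176634e-19       # elementary charge, C
NA = 6.02214076e23        # Avogadro constant, 1/mol

def debye_length_nm(c_molar, eps_r=78.4, temp_k=298.15):
    """Debye length (nm) for a 1:1 salt at molar concentration c_molar."""
    ionic_strength = c_molar * 1000.0  # mol/L -> mol/m^3 for a 1:1 salt
    lam = math.sqrt(EPS0 * eps_r * KB * temp_k
                    / (2.0 * NA * E**2 * ionic_strength))
    return lam * 1e9

# Screening shrinks as c^(-1/2): roughly 0.96 nm at 100 mM, 0.30 nm at 1 M.
for c in (0.01, 0.1, 1.0):
    print(f"{c:5.2f} M KCl -> lambda_D = {debye_length_nm(c):.2f} nm")
```

The c^(-1/2) falloff is the quantitative content behind "following the Debye screening length" in the abstract.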
Atmospheric modeling in complex terrain
Williams, M. D.; Streit, G. E.
1990-05-01
Los Alamos investigators have developed several models which are relevant to modeling Mexico City air quality. The collection of models includes: meteorological models, dispersion models, air chemistry models, and visibility models. The models have been applied in several different contexts. They have been developed primarily to address the complexities posed by complex terrain. HOTMAC is the meteorological model which requires terrain and limited meteorological information. HOTMAC incorporates a relatively complete description of atmospheric physics to give good descriptions of the wind, temperature, and turbulence fields. RAPTAD is a dispersion code which uses random particle transport and kernel representations to efficiently provide accurate pollutant concentration fields. RAPTAD provides a much better description of tracer dispersion than do Gaussian puff models, which fail to properly represent the effects of the wind profile near the surface. ATMOS and LAVM treat photochemistry and visibility, respectively. ATMOS has been used to describe wintertime chemistry of the Denver brown cloud. Its description provided reasonable agreement with measurements for the high altitude of Denver. LAVM can provide either numerical indices or pictorial representations of visibility effects of pollutants. 15 refs., 74 figs.
Nogueira, Daniele Rubert; Mitjans, Montserrat; Busquets, M Antonia; Pérez, Lourdes; Vinardell, M Pilar
2012-08-14
Amino acid-based surfactants constitute an important class of natural surface-active biomolecules with innumerable industrial applications. To gain a better mechanistic understanding of surfactant-induced membrane destabilization, we assessed the phospholipid bilayer-perturbing properties of new cationic lysine-based surfactants. We used erythrocytes as biomembrane models to study the hemolytic activity of surfactants and their effects on cells' osmotic resistance and morphology, as well as on membrane fluidity and membrane protein profile with varying pH. The antihemolytic capacity of amphiphiles correlated negatively with the length of the alkyl chain. Anisotropy measurements showed that the pH-sensitive surfactants, with the positive charge on the α-amino group of lysine, significantly increased membrane fluidity under acidic conditions. SDS-PAGE analysis revealed that surfactants induced significant degradation of membrane proteins in hypo-osmotic medium and at pH 5.4. By scanning electron microscopy examinations, we corroborated the interaction of surfactants with the lipid bilayer. We found that varying the surfactant chemical structure is a way to modulate the positioning of the molecule inside the bilayer and, thus, the overall effect on the membrane. Our work showed that pH-sensitive lysine-based surfactants significantly disturb the lipid bilayer of biomembranes, especially under acidic conditions, which suggests that these compounds are promising as a new class of multifunctional bioactive excipients for active intracellular drug delivery.
Complex Networks in Psychological Models
NASA Astrophysics Data System (ADS)
Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.
We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes in terms of a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns in the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.
Hepatocellular biomembrane peroxidation in copper-induced injury
Homer, B.L.
1986-01-01
The pathogenesis of Cu-induced hepatocellular biomembrane peroxidation was studied in male Fischer rats by analyzing hepatic morphologic alterations, measuring the activity of hepatic free radical scavenger enzymes, and determining the distribution of hepatic cytosolic Cu bound to high and low molecular weight proteins. Seventy-five weanling rats were divided into 3 groups of 25 each and injected once daily with either 6.25 mg/kg or 12.5 mg/kg cupric chloride, or 0.2 ml/100 gm saline. Five rats from each group were killed after 3, 14, 28, 42, and 70 consecutive days of injections. The level of malondialdehyde was elevated after 3 days of Cu injections and continued to increase until it peaked in the high-dose group after 28 days and in the low-dose group after 42 days. The density of catalase-containing peroxisomes was reduced in Cu-treated rats, correlating with a reduced activity of hepatic catalase. Catalase activity in Cu-treated rats was reduced after 3 days, and always remained less than or equal to the activity in control rats. The activity of glutathione peroxidase in high-dose rats was always less than or equal to the level in control rats, while the activity in control rats was always less than or equal to the level in low-dose rats. Meanwhile, the activity of superoxide dismutase increased in Cu-treated rats after 28 days. The concentration of cytosolic low molecular weight protein-bound Cu was elevated after 3 days in both Cu-treated groups and continued to increase, leveling off or peaking after 42 days. Regression analysis and in vitro studies, involving the peroxidation of erythrocyte ghost membranes, demonstrated that Cu bound to low molecular weight proteins was less likely to induce lipoperoxidation than copper bound to high molecular weight proteins.
Molecular modeling of polynucleotide complexes.
Meneksedag-Erol, Deniz; Tang, Tian; Uludağ, Hasan
2014-08-01
Delivery of polynucleotides into patient cells is a promising strategy for treatment of genetic disorders. Gene therapy aims to either synthesize desired proteins (DNA delivery) or suppress expression of endogenous genes (siRNA delivery). Carriers constitute an important part of gene therapeutics due to limitations arising from the pharmacokinetics of polynucleotides. Non-viral carriers such as polymers and lipids protect polynucleotides from intra- and extracellular threats and facilitate formation of cell-permeable nanoparticles through shielding and/or bridging multiple polynucleotide molecules. Formation of nanoparticulate systems with optimal features, their cellular uptake and intracellular trafficking are crucial steps for an effective gene therapy. Despite the great amount of experimental work pursued, critical features of the nanoparticles as well as their processing mechanisms are still under debate due to the lack of instrumentation at atomic resolution. Molecular modeling based computational approaches can shed light onto the atomic level details of gene delivery systems, thus providing valuable input that cannot be readily obtained with experimental techniques. Here, we review the molecular modeling research pursued on critical gene therapy steps, highlight the knowledge gaps in the field, and provide future perspectives. Existing modeling studies revealed several important aspects of gene delivery, such as nanoparticle formation dynamics with various carriers, effect of carrier properties on complexation, carrier conformations in endosomal stages, and release of polynucleotides from carriers. Rate-limiting steps related to cellular events (i.e. internalization, endosomal escape, and nuclear uptake) are now beginning to be addressed by computational approaches. Limitations arising from current computational power and accuracy of modeling have been hindering the development of more realistic models. With the help of rapidly-growing computational power
Cao, Ping; Dou, Guifang; Cheng, Yuanguo; Che, Jinjing
2017-01-01
Most mechanistic studies on human immunodeficiency virus (HIV) peptide fusion inhibitors have focused on the interactions between fusion inhibitors and viral envelope proteins. However, the interactions of fusion inhibitors with viral membranes are also essential for the efficacy of these drugs. Here, we utilized surface plasmon resonance (SPR) technology to study the interactions between the HIV fusion inhibitor peptides sifuvirtide and enfuvirtide and biomembrane models. Sifuvirtide presented selectivity toward biomembrane models composed of saturated dipalmitoylphosphatidylcholine (DPPC) (32-fold higher compared with unsaturated 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine [POPC]) and sphingomyelin (SM) (31-fold higher compared with POPC), which are rigid compositions enriched in the HIV viral membrane. In contrast, enfuvirtide showed no significant selectivity toward these rigid membrane models. Furthermore, the bindings of sifuvirtide and enfuvirtide to SM bilayers were markedly higher than those to monolayers (14-fold and 23-fold, respectively), indicating that the inner leaflet influences the binding of these drugs to SM bilayers. No obvious differences were noted in the bindings of either peptide to the other mono- and bilayer models tested, illustrating that both peptides interact with these membranes through surface-binding. The bindings of the inhibitor peptides to biomembranes were found to be driven predominantly by hydrophobic interactions rather than electrostatic interactions, as determined by comparing their affinities to those of positively charged 1-palmitoyl-2-oleoyl-sn-glycero-3-ethylphosphocholine (EPC) to zwitterionic membrane models. The improved efficiency of sifuvirtide relative to enfuvirtide might be related to its ability to adsorb on rigid lipidic areas, such as the viral envelope and lipid rafts, which results in an increased sifuvirtide concentration at the fusion site.
Park, Ji Ung; Ham, Jiyeon; Kim, Sukwha; Seo, Ji-Hun; Kim, Sang-Hyon; Lee, Seonju; Min, Hye Jeong; Choi, Sunghyun; Choi, Ra Mi; Kim, Heejin; Oh, Sohee; Hur, Ji An; Choi, Tae Hyun; Lee, Yan
2014-10-01
Despite their popular use in breast augmentation and reconstruction surgeries, the limited biocompatibility of silicone implants can induce severe side effects, including capsular contracture - an excessive foreign body reaction that forms a tight and hard fibrous capsule around the implant. This study examines the effects of using biomembrane-mimicking surface coatings to prevent capsular formations on silicone implants. The covalently attached biomembrane-mimicking polymer, poly(2-methacryloyloxyethyl phosphorylcholine) (PMPC), prevented nonspecific protein adsorption and fibroblast adhesion on the silicone surface. More importantly, in vivo capsule formations around PMPC-grafted silicone implants in rats were significantly thinner and exhibited lower collagen densities and more regular collagen alignments than bare silicone implants. The observed decrease in α-smooth muscle actin also supported the alleviation of capsular formations by the biomembrane-mimicking coating. Decreases in inflammation-related cells, myeloperoxidase and transforming growth factor-β resulted in reduced inflammation in the capsular tissue. The biomembrane-mimicking coatings used on these silicone implants demonstrate great potential for preventing capsular contracture and developing biocompatible materials for various biomedical applications.
Teacher Modeling Using Complex Informational Texts
ERIC Educational Resources Information Center
Fisher, Douglas; Frey, Nancy
2015-01-01
Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think aloud.
"Computational Modeling of Actinide Complexes"
Balasubramanian, K
2007-03-07
We will present our recent studies on computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, Dr. David Clark at Los Alamos, and Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl, and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the results of free energies in solution and the mechanism of deprotonation have been a topic of considerable uncertainty. We have computed deprotonation and migration of one water molecule from the first solvation shell to the second shell in UO{sub 2}(H{sub 2}O){sub 5}{sup 2+}, NpO{sub 2}(H{sub 2}O){sub 6}{sup +}, and PuO{sub 2}(H{sub 2}O){sub 5}{sup 2+} complexes. Our computed Gibbs free energy (7.27 kcal/mol) in solution for the first time agrees with the experiment (7.1 kcal
Capturing Complexity through Maturity Modelling
ERIC Educational Resources Information Center
Underwood, Jean; Dillon, Gayle
2004-01-01
The impact of information and communication technologies (ICT) on the process and products of education is difficult to assess for a number of reasons. In brief, education is a complex system of interrelationships, of checks and balances. This context is not a neutral backdrop on which teaching and learning are played out. Rather, it may help, or…
Molecular simulation and modeling of complex I.
Hummer, Gerhard; Wikström, Mårten
2016-07-01
Molecular modeling and molecular dynamics simulations play an important role in the functional characterization of complex I. With its large size and complicated function, linking quinone reduction to proton pumping across a membrane, complex I poses unique modeling challenges. Nonetheless, simulations have already helped in the identification of possible proton transfer pathways. Simulations have also shed light on the coupling between electron and proton transfer, thus pointing the way in the search for the mechanistic principles underlying the proton pump. In addition to reviewing what has already been achieved in complex I modeling, we aim here to identify pressing issues and to provide guidance for future research to harness the power of modeling in the functional characterization of complex I. This article is part of a Special Issue entitled Respiratory complex I, edited by Volker Zickermann and Ulrich Brandt.
Hierarchical Models of the Nearshore Complex System
2004-01-01
Hierarchical Models of the Nearshore Complex System: Final Report. Funding number: N00014-02-1-0358. Author: Brad Werner. The long-term goal of this research was to develop and test predictive models for nearshore processes. This grant was termination funding for the
Scaffolding in Complex Modelling Situations
ERIC Educational Resources Information Center
Stender, Peter; Kaiser, Gabriele
2015-01-01
The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with many unsolved problems so far. Amongst others, there hardly exists any empirical knowledge about efficient ways of possible teacher support with students' activities, which should be mainly independent from the teacher. The research…
Complex Parameter Landscape for a Complex Neuron Model
Achard, Pablo; De Schutter, Erik
2006-01-01
The electrical activity of a neuron is strongly dependent on the ionic channels present in its membrane. Modifying the maximal conductances from these channels can have a dramatic impact on neuron behavior. But the effect of such modifications can also be cancelled out by compensatory mechanisms among different channels. We used an evolution strategy with a fitness function based on phase-plane analysis to obtain 20 very different computational models of the cerebellar Purkinje cell. All these models produced very similar outputs to current injections, including tiny details of the complex firing pattern. These models were not completely isolated in the parameter space, but neither did they belong to a large continuum of good models that would exist if weak compensations between channels were sufficient. The parameter landscape of good models can best be described as a set of loosely connected hyperplanes. Our method is efficient in finding good models in this complex landscape. Unraveling the landscape is an important step towards the understanding of functional homeostasis of neurons. PMID:16848639
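The evolution-strategy search described in this abstract can be sketched generically. In the sketch below, a toy quadratic fitness stands in for the paper's phase-plane-based fitness (which requires a full Purkinje-cell simulation); all names and parameters are illustrative, not from the paper.

```python
import random

# Minimal (1+1) evolution strategy: mutate the parameter vector with
# Gaussian noise and keep the child whenever its fitness is no worse.
# The step size sigma is fixed here; real ES implementations adapt it.
def es_minimize(fitness, x0, sigma=0.5, iters=2000, seed=3):
    rng = random.Random(seed)
    x, fx = list(x0), fitness(x0)
    for _ in range(iters):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = fitness(child)
        if fc <= fx:          # (1+1) selection: child replaces parent if no worse
            x, fx = child, fc
    return x, fx

# Toy stand-in fitness: squared distance to a hidden "good" parameter set
# (in the paper, parameters were maximal conductances and the fitness came
# from phase-plane analysis of the model's voltage traces).
target = [1.0, -2.0, 0.5]
fit = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best, best_f = es_minimize(fit, [0.0, 0.0, 0.0])
print("best fitness:", best_f)
```

The same loop structure applies regardless of how expensive the fitness evaluation is; only the `fitness` callable changes.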
Role models for complex networks
NASA Astrophysics Data System (ADS)
Reichardt, J.; White, D. R.
2007-11-01
We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.
Agent-based modeling of complex infrastructures
North, M. J.
2001-06-01
Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.
Modeling the complex bromate-iodine reaction.
Machado, Priscilla B; Faria, Roberto B
2009-05-07
In this article, it is shown that the FLEK model (ref 5) is able to model the experimental results of the bromate-iodine clock reaction. Five different complex chemical systems (the bromate-iodide clock and oscillating reactions, the bromite-iodide clock and oscillating reactions, and now the bromate-iodine clock reaction) are adequately accounted for by the FLEK model.
Numerical models of complex diapirs
NASA Astrophysics Data System (ADS)
Podladchikov, Yu.; Talbot, C.; Poliakov, A. N. B.
1993-12-01
Numerically modelled diapirs that rise into overburdens with viscous rheology produce a large variety of shapes. This work uses the finite-element method to study the development of diapirs that rise towards a surface on which a diapir-induced topography creeps flat or disperses ("erodes") at different rates. Slow erosion leads to diapirs with "mushroom" shapes, moderate erosion rate to "wine glass" diapirs and fast erosion to "beer glass"- and "column"-shaped diapirs. The introduction of a low-viscosity layer at the top of the overburden causes diapirs to develop into structures resembling a "Napoleon hat". These spread lateral sheets.
Alix, Philippe; Winterer, Jochen; Müller, Wolfgang
2003-09-30
Slice cultures on biomembranes are the method of choice for studying Ca2+-dependent plastic changes occurring over several days to weeks. Using IR-differential interference contrast, good visualization of neurons in biomembrane slice cultures has been achieved despite a negative optical effect of the biomembrane, but epifluorescence imaging requires removal of a Wollaston prism and the analyzer. Here, we describe a novel illumination method to overcome this problem. Using optic fiber illumination at a shallow angle from the top of the slice culture, with or without additional illumination from the bottom, we obtained good cellular resolution of neurons in biomembrane slice cultures as well as in acute slices with an infrared-video camera. With this technique, we demonstrate visually guided whole-cell patch-clamp recording of Na+- and K+-currents as well as combination of whole-cell recording with fluorescence imaging of hippocampal and entorhinal cortex neurons in biomembrane slice cultures. Our inexpensive method should prove very useful for studying in vitro effects of long-term manipulations on membrane currents and intracellular Ca2+-signaling.
Slip complexity in earthquake fault models.
Rice, J R; Ben-Zion, Y
1996-04-30
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
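A minimal, hypothetical fault sketch in the spirit of the "inherently discrete" cell models described above (not the authors' formulation): cells with random strengths are loaded uniformly, and a failing cell passes part of its stress to its neighbours, which can cascade into events of varying size.

```python
import random

# Toy 1D discrete fault: each cell holds a stress and a random strength.
# Uniform loading raises all stresses; when a cell exceeds its strength it
# fails, resets to zero, and transfers a fraction of its stress to each
# neighbour, possibly triggering further failures (an "event").
def run_fault(n=200, steps=5000, transfer=0.4, seed=7):
    rng = random.Random(seed)
    stress = [rng.random() for _ in range(n)]
    strength = [1.0 + rng.random() for _ in range(n)]
    sizes = []                          # number of cell failures per event
    for _ in range(steps):
        stress = [s + 0.01 for s in stress]   # uniform tectonic loading
        failing = [i for i in range(n) if stress[i] >= strength[i]]
        size = 0
        while failing:
            i = failing.pop()
            if stress[i] < strength[i]:
                continue                # already relaxed earlier in cascade
            size += 1
            drop = stress[i]
            stress[i] = 0.0
            for j in (i - 1, i + 1):    # pass stress to nearest neighbours
                if 0 <= j < n:
                    stress[j] += transfer * drop
                    if stress[j] >= strength[j]:
                        failing.append(j)
        if size:
            sizes.append(size)
    return sizes

sizes = run_fault()
print("events:", len(sizes), "largest:", max(sizes))
```

Because the cell size here is effectively larger than any slip-weakening scale (there is none), this sketch sits squarely in the "inherently discrete" class the abstract discusses, where small-event complexity can appear.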
Preferential urn model and nongrowing complex networks.
Ohkubo, Jun; Yasuda, Muneki; Tanaka, Kazuyuki
2005-12-01
A preferential urn model, which is based on the concept "the rich get richer," is proposed. From a relationship between a nongrowing model for complex networks and the preferential urn model in regard to degree distributions, it is revealed that a fitness parameter in the nongrowing model is interpreted as an inverse local temperature in the preferential urn model. Furthermore, it is clarified that the preferential urn model with randomness generates a fat-tailed occupation distribution; the concept of the local temperature enables us to understand the fat-tailed occupation distribution intuitively. Since the preferential urn model is a simple stochastic model, it can be applied to research on not only the nongrowing complex networks, but also many other fields such as econophysics and social sciences.
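The "rich get richer" dynamics can be sketched with a toy simulation. This is our own minimal construction, not the paper's exact model, which additionally includes a fitness parameter interpreted as an inverse local temperature.

```python
import random

# Toy preferential urn dynamics: repeatedly pick a random ball (i.e. a
# source urn weighted by its occupation), then re-place it in an urn chosen
# with probability proportional to n_i + 1. The +1 keeps empty urns
# reachable; the occupation-proportional weight is the "rich get richer"
# rule that broadens the occupation distribution away from uniform.
def preferential_urn(n_urns=200, n_balls=2000, steps=20000, seed=1):
    rng = random.Random(seed)
    occ = [n_balls // n_urns] * n_urns  # start from a uniform occupation
    for _ in range(steps):
        src = rng.choices(range(n_urns), weights=occ)[0]
        occ[src] -= 1
        dst = rng.choices(range(n_urns), weights=[n + 1 for n in occ])[0]
        occ[dst] += 1
    return occ

occ = preferential_urn()
print("max occupation:", max(occ), "uniform would be:", 2000 // 200)
```

The total number of balls is conserved, making this a nongrowing model in the sense of the abstract, in contrast to preferential-attachment models of growing networks.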
Complex system modelling for veterinary epidemiology.
Lanzas, Cristina; Chen, Shi
2015-02-01
The use of mathematical models has a long tradition in infectious disease epidemiology. The nonlinear dynamics and complexity of pathogen transmission pose challenges in understanding its key determinants, in identifying critical points, and designing effective mitigation strategies. Mathematical modelling provides tools to explicitly represent the variability, interconnectedness, and complexity of systems, and has contributed to numerous insights and theoretical advances in disease transmission, as well as to changes in public policy, health practice, and management. In recent years, our modelling toolbox has considerably expanded due to the advancements in computing power and the need to model novel data generated by technologies such as proximity loggers and global positioning systems. In this review, we discuss the principles, advantages, and challenges associated with the most recent modelling approaches used in systems science, the interdisciplinary study of complex systems, including agent-based, network and compartmental modelling. Agent-based modelling is a powerful simulation technique that considers the individual behaviours of system components by defining a set of rules that govern how individuals ("agents") within given populations interact with one another and the environment. Agent-based models have become a recent popular choice in epidemiology to model hierarchical systems and address complex spatio-temporal dynamics because of their ability to integrate multiple scales and datasets.
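A minimal agent-based epidemic sketch of the kind the review describes: a rule set governing how individual agents interact, iterated over time. All parameter values are illustrative, not drawn from the review.

```python
import random

# Minimal agent-based SIR model: each day, every infected agent contacts a
# few randomly chosen agents; susceptible contacts are infected with
# probability beta per contact, and infected agents recover after a fixed
# infectious period. State is tracked per agent, the defining feature of
# agent-based (vs. compartmental) modelling.
def run_sir(n_agents=500, n_contacts=4, beta=0.1, infectious_days=5,
            n_days=100, seed=42):
    rng = random.Random(seed)
    state = ["S"] * n_agents            # S, I, or R for each agent
    days_infected = [0] * n_agents
    for i in rng.sample(range(n_agents), 5):
        state[i] = "I"                  # seed 5 initial infections
    history = []
    for _ in range(n_days):
        infected = [i for i, s in enumerate(state) if s == "I"]
        for i in infected:
            for j in rng.sample(range(n_agents), n_contacts):
                if state[j] == "S" and rng.random() < beta:
                    state[j] = "I"
        for i in infected:
            days_infected[i] += 1
            if days_infected[i] >= infectious_days:
                state[i] = "R"
        history.append(state.count("I"))
    return state, history

state, history = run_sir()
print("final recovered:", state.count("R"))
```

Swapping the random mixing for a contact network (e.g. from proximity-logger data, as the review mentions) changes only the line that chooses contacts.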
Modelling Canopy Flows over Complex Terrain
NASA Astrophysics Data System (ADS)
Grant, Eleanor R.; Ross, Andrew N.; Gardiner, Barry A.
2016-12-01
Recent studies of flow over forested hills have been motivated by a number of important applications including understanding CO_2 and other gaseous fluxes over forests in complex terrain, predicting wind damage to trees, and modelling wind energy potential at forested sites. Current modelling studies have focussed almost exclusively on highly idealized, and usually fully forested, hills. Here, we present model results for a site on the Isle of Arran, Scotland with complex terrain and heterogeneous forest canopy. The model uses an explicit representation of the canopy and a 1.5-order turbulence closure for flow within and above the canopy. The validity of the closure scheme is assessed using turbulence data from a field experiment before comparing predictions of the full model with field observations. For near-neutral stability, the results compare well with the observations, showing that such a relatively simple canopy model can accurately reproduce the flow patterns observed over complex terrain and realistic, variable forest cover, while at the same time remaining computationally feasible for real case studies. The model allows closer examination of the flow separation observed over complex forested terrain. Comparisons with model simulations using a roughness length parametrization show significant differences, particularly with respect to flow separation, highlighting the need to explicitly model the forest canopy if detailed predictions of near-surface flow around forests are required.
From Complex to Simple: Interdisciplinary Stochastic Models
ERIC Educational Resources Information Center
Mazilu, D. A.; Zamora, G.; Mazilu, I.
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
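The one-dimensional random-walk theory invoked for microtubule dynamics rests on the diffusive scaling, mean squared displacement equal to the number of steps (in units of step length squared). A quick simulation, illustrative only, confirms it:

```python
import random

# Unbiased 1D random walk: estimate the mean squared displacement <x^2>
# over many independent walkers. Theory gives <x^2> = n for n unit steps.
def msd(n_steps, n_walkers=2000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(n_walkers):
        x = sum(rng.choice((-1, 1)) for _ in range(n_steps))
        total += x * x
    return total / n_walkers

for n in (10, 100):
    print(f"n = {n:4d}  <x^2> ~ {msd(n):.1f}")
```

The linear growth of the estimate with n is exactly the analytical result the abstract alludes to; closed-form expressions follow from the binomial distribution of the walker's position.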
Modeling the chemistry of complex petroleum mixtures.
Quann, R J
1998-12-01
Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecular or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure--activity relationships of chemical kinetics in the development of models.
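The central construct of structure-oriented lumping, representing a molecule (or set of close isomers) as counts over structural groups so that reactions become signed increment vectors, can be sketched as follows. This uses a toy two-group basis of our own devising; the published method uses a much larger basis of repeating structural groups.

```python
# Hypothetical two-group basis: "A6" = aromatic six-ring core,
# "R" = one side-chain carbon. A molecule is a count vector over GROUPS,
# and a reaction is a signed increment vector applied to that count vector.
GROUPS = ("A6", "R")

def apply_reaction(mol, delta):
    """Apply a reaction (signed group increments) to a molecule vector."""
    out = {g: mol.get(g, 0) + delta.get(g, 0) for g in GROUPS}
    if any(v < 0 for v in out.values()):
        raise ValueError("reaction not applicable to this molecule")
    return out

toluene = {"A6": 1, "R": 1}   # aromatic core plus one side-chain carbon
dealkylation = {"R": -1}      # reaction: strip one side-chain carbon

product = apply_reaction(toluene, dealkylation)
print(product)                # {'A6': 1, 'R': 0} -> benzene
```

Because every molecule and reaction shares one mathematical representation, reaction networks with tens of thousands of specific reactions can be generated automatically by applying each reaction rule to every molecule vector it fits, which is the automation the abstract describes.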
Modeling the chemistry of complex petroleum mixtures.
Quann, R J
1998-01-01
Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecular or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure--activity relationships of chemical kinetics in the development of models. PMID:9860903
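The central idea of structure-oriented lumping, a molecule as a vector of structural-group counts and a reaction as arithmetic on those vectors, can be sketched as follows. This is a minimal illustration: the group names and encodings below are hypothetical, not the fixed group set of the published scheme.

```python
from dataclasses import dataclass

# Hypothetical four-group subset (the published structure-oriented lumping
# scheme defines its own, larger fixed set of structural groups).
GROUPS = ("A6", "N6", "R", "br")  # aromatic ring, naphthenic ring, chain carbons, branches

@dataclass(frozen=True)
class Molecule:
    counts: tuple  # one integer increment per entry in GROUPS

    def __add__(self, other):
        # A reaction step is vector arithmetic on the group increments.
        return Molecule(tuple(a + b for a, b in zip(self.counts, other.counts)))

# Illustrative encoding: one aromatic ring plus a one-carbon side chain.
toluene = Molecule((1, 0, 1, 0))
# "Add one CH2 to the side chain" as a reusable reaction increment.
add_ch2 = Molecule((0, 0, 1, 0))

ethylbenzene = toluene + add_ch2
print(ethylbenzene.counts)  # (1, 0, 2, 0)
```

Because every species has such a mathematical identity, reaction networks over thousands of components can be generated automatically by applying increment vectors subject to structural rules.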
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
Coimbra, João T S; Sousa, Sérgio F; Fernandes, Pedro A; Rangel, Maria; Ramos, Maria J
2014-01-01
The AMBER family of force fields is one of the most commonly used alternatives for describing proteins and drug-like molecules in molecular dynamics simulations. However, the absence of a specific set of parameters for lipids has limited the widespread application of this force field in biomembrane simulations, including membrane protein simulations and drug-membrane simulations. Here, we report the systematic parameterization of 12 common lipid types consistent with the General Amber Force Field (GAFF), with charge parameters determined with RESP at the HF/6-31G(d) level of theory, to be consistent with AMBER. The accuracy of the scheme was evaluated by comparing predicted and experimental values for structural lipid properties in MD simulations in an NPT ensemble with explicit solvent in 100:100 bilayer systems. Globally, consistent agreement with experimental reference data on membrane structures was achieved for some lipid types when using the typical MD conditions normally employed for membrane proteins and drug-membrane simulations (a tensionless NPT ensemble, 310 K), without the application of any of the constraints often used in other biomembrane simulations (such as constrained surface tension or total simulation box area). The present set of parameters and the universal approach used in the parameterization of all the lipid types described here, together with the consistency with the AMBER force field family and the tensionless NPT ensemble used, open the door to systematic studies combining lipid components with small drug-like molecules or membrane proteins and show the potential of GAFF in dealing with biomembranes.
Multifaceted Modelling of Complex Business Enterprises.
Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David
2015-01-01
We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, management planning data, expert knowledge, and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations, such as planning and control. PMID:26247591
Complex quantum network model of energy transfer in photosynthetic complexes.
Ai, Bao-Quan; Zhu, Shi-Liang
2012-12-01
The quantum network model with real variables is usually used to describe the excitation energy transfer (EET) in the Fenna-Matthews-Olson (FMO) complexes. In this paper we add quantum phase factors to the hopping terms and find that they play an important role in the EET. The quantum phase factors allow us to account for the spatial structure of the pigments. It is found that phase coherence within the complexes allows quantum interference to affect the dynamics of the EET. There exist optimal phase regions where the transfer efficiency takes its maxima, which indicates that when the pigments are optimally spaced, the exciton can pass through the FMO complex with perfect efficiency. Moreover, the optimal phase regions are almost insensitive to the environment. In addition, we find that the phase factors aid the EET only in the case of multiple pathways. Therefore, we demonstrate that quantum phases may bring the other two factors, optimal spacing of the pigments and multiple pathways, together to contribute to the EET in photosynthetic complexes with perfect efficiency.
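The phase dependence of transfer through multiple pathways can be illustrated with a minimal three-site tight-binding loop, a toy stand-in for the seven-site FMO network (couplings, times, and the single tunable phase below are illustrative, in units of the hopping J):

```python
import numpy as np

def transfer_prob(phi, t=5.0):
    """Population on site 3 at time t (units of 1/J) for a three-site loop
    whose 1<->3 hopping carries a quantum phase factor exp(i*phi)."""
    h13 = np.exp(1j * phi)
    H = np.array([[0, 1, h13],
                  [1, 0, 1],
                  [np.conj(h13), 1, 0]], dtype=complex)  # Hermitian by construction
    w, V = np.linalg.eigh(H)                  # exact propagation via the eigenbasis
    psi0 = np.array([1, 0, 0], dtype=complex)  # excitation starts on site 1
    psi_t = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))
    return abs(psi_t[2]) ** 2

p0 = transfer_prob(0.0)            # no phase: one fixed interference pattern
p_half = transfer_prob(np.pi / 2)  # a nonzero phase can strongly enhance transfer
print(p0, p_half)
```

Because the phase only matters when the direct and indirect paths interfere, removing either pathway (setting a hopping to zero) makes `transfer_prob` independent of `phi`, consistent with the multiple-pathway condition noted in the abstract.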
Slip complexity in earthquake fault models.
Rice, J R; Ben-Zion, Y
1996-01-01
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size. PMID:11607669
Minimum-complexity helicopter simulation math model
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimum-complexity helicopter simulation math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling-qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.
Trends in modeling Biomedical Complex Systems
Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Pietro
2009-01-01
In this paper we provide an introduction to the techniques for multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which are becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts on mathematical modeling methodologies and statistical inference, bioinformatics and standards tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068
Label-free characterization of biomembranes: from structure to dynamics.
Mashaghi, Alireza; Mashaghi, Samaneh; Reviakine, Ilya; Heeren, Ron M A; Sandoghdar, Vahid; Bonn, Mischa
2014-02-07
We review recent progress in the study of the structure and dynamics of phospholipid membranes and associated proteins, using novel label-free analytical tools. We describe these techniques and illustrate them with examples highlighting current capabilities and limitations. Recent advances in applying such techniques to biological and model membranes for biophysical studies and biosensing applications are presented, and future prospects are discussed.
NASA Astrophysics Data System (ADS)
Rupiasih, Ni Nyoman; Suyanto, Hery; Sumadiyasa, Made; Purwanto, Christine Prita; Purnomo, Rendra Rustam
2013-09-01
The capability of Laser-Induced Breakdown Spectroscopy (LIBS) to resolve the filtration of Ag-containing liquid samples by a chitosan biomembrane is demonstrated. The biomembrane, prepared by the inversion method, was used to filter the Ag liquid samples under pressure; the membranes were then analyzed by monitoring the Ag (I) emission at a wavelength of 328 nm. The experiment was conducted at laser energies of 80, 120, and 160 mJ, and the effect on the depth profile from 20 - 200 μm was characterized by LIBS. The results showed that the physical process of pressurized filtration led to a homogeneous Ag distribution in the membrane from the surface to a depth of 200 μm; the optimum condition was obtained at a laser energy of 120 mJ. Adsorption, in contrast, occurred only on the surface of the membrane (to a depth of about 20 μm), with no inclusion into the bulk. Detection of the adsorption process was improved by heating the dripped membrane at 35 °C, which increased the emission intensity as expected.
The Kuramoto model in complex networks
NASA Astrophysics Data System (ADS)
Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen
2016-01-01
Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
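The dynamics the review surveys can be stated in a few lines: each oscillator obeys dθ_i/dt = ω_i + K Σ_j A_ij sin(θ_j − θ_i), with coherence measured by the order parameter r = |⟨e^{iθ}⟩|. A minimal sketch with Euler integration (network, coupling, and run length are illustrative; the complete graph below recovers the classical all-to-all setting):

```python
import numpy as np

def kuramoto_order(adj, K, omega, theta0, dt=0.01, steps=2000):
    """Integrate the Kuramoto model on a network (explicit Euler) and
    return the final phase-coherence order parameter r in [0, 1]."""
    theta = theta0.copy()
    for _ in range(steps):
        # dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)
        diff = theta[None, :] - theta[:, None]
        theta = theta + dt * (omega + K * np.sum(adj * np.sin(diff), axis=1))
    return np.abs(np.mean(np.exp(1j * theta)))

rng = np.random.default_rng(0)
n = 20
adj = np.ones((n, n)) - np.eye(n)       # complete graph; swap in any adjacency matrix
omega = rng.normal(0.0, 0.5, n)         # heterogeneous natural frequencies
theta0 = rng.uniform(0.0, 2 * np.pi, n)

r_weak = kuramoto_order(adj, K=0.0, omega=omega, theta0=theta0)   # uncoupled: drift
r_strong = kuramoto_order(adj, K=0.5, omega=omega, theta0=theta0)  # well above threshold
print(r_weak, r_strong)
```

Replacing `adj` with a heterogeneous network (e.g. a scale-free graph) is exactly the setting the report reviews: the synchronization onset then depends on the network's degree structure rather than on a single mean-field coupling.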
Modelling biological complexity: a physical scientist's perspective
Coveney, Peter V; Fowler, Philip W
2005-01-01
We discuss the modern approaches of complexity and self-organization to understanding dynamical systems and how these concepts can inform current interest in systems biology. From the perspective of a physical scientist, it is especially interesting to examine how the differing weights given to philosophies of science in the physical and biological sciences impact the application of the study of complexity. We briefly describe how the dynamics of the heart and circadian rhythms, canonical examples of systems biology, are modelled by sets of nonlinear coupled differential equations, which have to be solved numerically. A major difficulty with this approach is that all the parameters within these equations are not usually known. Coupled models that include biomolecular detail could help solve this problem. Coupling models across large ranges of length- and time-scales is central to describing complex systems and therefore to biology. Such coupling may be performed in at least two different ways, which we refer to as hierarchical and hybrid multiscale modelling. While limited progress has been made in the former case, the latter is only beginning to be addressed systematically. These modelling methods are expected to bring numerous benefits to biology, for example, the properties of a system could be studied over a wider range of length- and time-scales, a key aim of systems biology. Multiscale models couple behaviour at the molecular biological level to that at the cellular level, thereby providing a route for calculating many unknown parameters as well as investigating the effects at, for example, the cellular level, of small changes at the biomolecular level, such as a genetic mutation or the presence of a drug. The modelling and simulation of biomolecular systems is itself very computationally intensive; we describe a recently developed hybrid continuum-molecular model, HybridMD, and its associated molecular insertion algorithm, which point the way towards the
Dual-resolution molecular dynamics simulation of antimicrobials in biomembranes
Orsi, Mario; Noro, Massimo G.; Essex, Jonathan W.
2011-01-01
Triclocarban and triclosan, two potent antibacterial molecules present in many consumer products, have been subject to growing debate on a number of issues, particularly in relation to their possible role in causing microbial resistance. In this computational study, we present molecular-level insights into the interaction between these antimicrobial agents and hydrated phospholipid bilayers (taken as a simple model for the cell membrane). Simulations are conducted by a novel ‘dual-resolution’ molecular dynamics approach which combines accuracy with efficiency: the antimicrobials, modelled atomistically, are mixed with simplified (coarse-grain) models of lipids and water. A first set of calculations is run to study the antimicrobials' transfer free energies and orientations as a function of depth inside the membrane. Both molecules are predicted to preferentially accumulate in the lipid headgroup–glycerol region; this finding, which reproduces corresponding experimental data, is also discussed in terms of a general relation between solute partitioning and the intramembrane distribution of pressure. A second set of runs involves membranes incorporated with different molar concentrations of antimicrobial molecules (up to one antimicrobial per two lipids). We study the effects induced on fundamental membrane properties, such as the electron density, lateral pressure and electrical potential profiles. In particular, the analysis of the spontaneous curvature indicates that increasing antimicrobial concentrations promote a ‘destabilizing’ tendency towards non-bilayer phases, as observed experimentally. The antimicrobials' influence on the self-assembly process is also investigated. The significance of our results in the context of current theories of antimicrobial action is discussed. PMID:21131331
Determination of Biomembrane Bending Moduli in Fully Atomistic Simulations
2015-01-01
The bilayer bending modulus (Kc) is one of the most important physical constants characterizing lipid membranes, but precisely measuring it is a challenge, both experimentally and computationally. Experimental measurements on chemically identical bilayers often differ depending upon the techniques employed, and robust simulation results have previously been limited to coarse-grained models (at varying levels of resolution). This Communication demonstrates the extraction of Kc from fully atomistic molecular dynamics simulations for three different single-component lipid bilayers (DPPC, DOPC, and DOPE). The results agree quantitatively with experiments that measure thermal shape fluctuations in giant unilamellar vesicles. Lipid tilt, twist, and compression moduli are also reported. PMID:25202918
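For context, one standard fluctuation-based route to K_c (the Communication's exact estimator is not specified here) fits the Helfrich undulation spectrum of the membrane height field:

```latex
\left\langle \left| \hat{h}(\mathbf{q}) \right|^{2} \right\rangle
  = \frac{k_{B}T}{A \left( K_{c}\, q^{4} + \sigma\, q^{2} \right)}
```

Here \(\hat{h}(\mathbf{q})\) is the Fourier transform of the bilayer height field, \(A\) the projected membrane area, and \(\sigma\) the surface tension (\(\sigma \approx 0\) in a tensionless ensemble), so the small-\(q\), \(q^{-4}\) regime of the spectrum yields \(K_{c}\). Atomistic boxes sample few long-wavelength modes, which is one reason robust results were previously limited to coarse-grained models.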
Comparing flood loss models of different complexity
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno
2013-04-01
Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty. Likewise, the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records that was obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches derived using the data-mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explanatory variables.
Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects
NASA Technical Reports Server (NTRS)
Deshpande, Manohar; Reddy, C. J.
2011-01-01
This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.
Flowgraph Models for Complex Multistate System Reliability.
Williams, B. J.; Huzurbazar, A. V.
2005-01-01
This chapter reviews flowgraph models for complex multistate systems. The focus is on modeling data from semi-Markov processes and constructing likelihoods when different portions of the system data are censored and incomplete. Semi-Markov models play an important role in the analysis of time to event data. However, in practice, data analysis for semi-Markov processes can be quite difficult and many simplifying assumptions are made. Flowgraph models are multistate models that provide a data analytic method for semi-Markov processes. Flowgraphs are useful for estimating Bayes predictive densities, predictive reliability functions, and predictive hazard functions for waiting times of interest in the presence of censored and incomplete data. This chapter reviews data analysis for flowgraph models and then presents methods for constructing likelihoods when portions of the system data are missing.
Gómez-Hernández, J Jaime
2006-01-01
It is difficult to define complexity in modeling. Complexity is often associated with uncertainty since modeling uncertainty is an intrinsically difficult task. However, modeling uncertainty does not require, necessarily, complex models, in the sense of a model requiring an unmanageable number of degrees of freedom to characterize the aquifer. The relationship between complexity, uncertainty, heterogeneity, and stochastic modeling is not simple. Aquifer models should be able to quantify the uncertainty of their predictions, which can be done using stochastic models that produce heterogeneous realizations of aquifer parameters. This is the type of complexity addressed in this article.
Human driven transitions in complex model ecosystems
NASA Astrophysics Data System (ADS)
Harfoot, Mike; Newbold, Tim; Tittensor, Derek; Purves, Drew
2015-04-01
Human activities have been observed to be impacting ecosystems across the globe, leading to reduced ecosystem functioning, altered trophic and biomass structure and ultimately ecosystem collapse. Previous attempts to understand global human impacts on ecosystems have usually relied on statistical models, which do not explicitly model the processes underlying the functioning of ecosystems, represent only a small proportion of organisms and do not adequately capture complex non-linear and dynamic responses of ecosystems to perturbations. We use a mechanistic ecosystem model (1), which simulates the underlying processes structuring ecosystems and can thus capture complex and dynamic interactions, to investigate boundaries of complex ecosystems to human perturbation. We explore several drivers including human appropriation of net primary production and harvesting of animal biomass. We also present an analysis of the key interactions between biotic, societal and abiotic earth system components, considering why and how we might think about these couplings. References: M. B. J. Harfoot et al., Emergent global patterns of ecosystem structure and function from a mechanistic general ecosystem model., PLoS Biol. 12, e1001841 (2014).
BDI-modelling of complex intracellular dynamics.
Jonker, C M; Snoep, J L; Treur, J; Westerhoff, H V; Wijngaards, W C A
2008-03-07
A BDI-based continuous-time modelling approach for intracellular dynamics is presented. It is shown how temporalized BDI-models make it possible to model intracellular biochemical processes as decision processes. By abstracting from some of the details of the biochemical pathways, the model achieves understanding in nearly intuitive terms, without losing veracity: classical intentional state properties such as beliefs, desires and intentions are founded in reality through precise biochemical relations. In an extensive example, the complex regulation of Escherichia coli vis-à-vis lactose, glucose and oxygen is simulated as a discrete-state, continuous-time temporal decision manager. Thus a bridge is introduced between two different scientific areas: the area of BDI-modelling and the area of intracellular dynamics.
A Practical Philosophy of Complex Climate Modelling
NASA Technical Reports Server (NTRS)
Schmidt, Gavin A.; Sherwood, Steven
2014-01-01
We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
Intrinsic Uncertainties in Modeling Complex Systems.
Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.
2014-09-01
Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model, either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements: This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.
Different Epidemic Models on Complex Networks
NASA Astrophysics Data System (ADS)
Zhang, Hai-Feng; Small, Michael; Fu, Xin-Chu
2009-07-01
Models for disease spreading are not limited to SIS or SIR. For instance, for the spreading of AIDS/HIV, susceptible individuals can be classified into different cases according to their immunity, and similarly, infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that the relationships among individuals can be viewed as a complex network. In this paper, in order to better explain the dynamical behavior of epidemics, we consider different epidemic models on complex networks and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
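For the simplest case mentioned above, SIS dynamics on an uncorrelated heterogeneous network, degree-based mean-field theory gives the epidemic threshold lambda_c = <k>/<k^2>. A minimal sketch of that textbook result (not the specific extended models of this paper):

```python
def sis_threshold(degrees):
    """Mean-field SIS epidemic threshold on an uncorrelated network:
    lambda_c = <k> / <k^2>, computed from a degree sequence."""
    n = len(degrees)
    k_mean = sum(degrees) / n
    k2_mean = sum(d * d for d in degrees) / n
    return k_mean / k2_mean

# Homogeneous network (every node has degree 4): threshold = 4/16 = 0.25
print(sis_threshold([4] * 100))             # -> 0.25
# A heterogeneous degree sequence lowers the threshold
print(sis_threshold([1] * 50 + [20] * 50))  # -> ~0.052
```

Heavy-tailed degree distributions drive <k^2> up and the threshold toward zero, which is why heterogeneous contact networks spread disease so readily.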
Noncommutative complex Grosse-Wulkenhaar model
Hounkonnou, Mahouton Norbert; Samary, Dine Ousmane
2008-11-18
This paper presents an application of the noncommutative (NC) Noether theorem, given in our previous work [AIP Proc 956(2007) 55-60], to the NC complex Grosse-Wulkenhaar model. It provides an extension of a recent work [Physics Letters B 653(2007) 343-345]. The local conservation of energy-momentum tensors (EMTs) is recovered using improvement procedures based on Moyal algebraic techniques. Broken dilatation symmetry is discussed. NC gauge currents are also explicitly computed.
Complex Constructivism: A Theoretical Model of Complexity and Cognition
ERIC Educational Resources Information Center
Doolittle, Peter E.
2014-01-01
Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…
Magnetic modeling of the Bushveld Igneous Complex
NASA Astrophysics Data System (ADS)
Webb, S. J.; Cole, J.; Letts, S. A.; Finn, C.; Torsvik, T. H.; Lee, M. D.
2009-12-01
Magnetic modeling of the 2.06 Ga Bushveld Complex presents special challenges due to a variety of magnetic effects. These include strong remanence in the Main Zone and extremely high magnetic susceptibilities in the Upper Zone, which exhibit self-demagnetization. Recent palaeomagnetic results have resolved a long-standing discrepancy between age data, which constrain the emplacement to within 1 million years, and older palaeomagnetic data, which suggested ~50 million years for emplacement. The new palaeomagnetic results agree with the age data and present a single consistent pole, as opposed to a long polar wander path, for the Bushveld for all of the Zones and all of the limbs. These results also pass a fold test, indicating the Bushveld Complex was emplaced horizontally, lending support to arguments for connectivity. The magnetic signature of the Bushveld Complex provides an ideal mapping tool, as the UZ has high susceptibility values and is well layered, showing up as distinct anomalies on new high-resolution magnetic data. However, this signature is similar to the highly magnetic BIFs found in the Transvaal and in the Witwatersrand Supergroups. Through careful mapping using new high-resolution aeromagnetic data, we have been able to map the Bushveld UZ in complicated geological regions and identify a characteristic signature with well defined layers. The Main Zone, which has a more subdued magnetic signature, does have a strong remanent component and exhibits several magnetic reversals. The magnetic layers of the UZ contain layers of magnetitite with as much as 80-90% pure magnetite with large crystals (1-2 cm). While these layers are not strongly remanent, they have extremely high magnetic susceptibilities, and the self-demagnetization effect must be taken into account when modeling these layers. Because the Bushveld Complex is so large, the geometry of the Earth's magnetic field relative to the layers of the UZ Bushveld Complex changes orientation, creating
The inflammatory stimulus of a natural latex biomembrane improves healing in mice.
Andrade, T A M; Iyer, A; Das, P K; Foss, N T; Garcia, S B; Coutinho-Netto, J; Jordão-Jr, A A; Frade, M A C
2011-10-01
The aim of the present study was to compare healing obtained with biomembranes with the natural healing process (sham) using biochemical and immunohistological assays. C57BL/6 mice were divided into 4 groups of 15 mice each and received different subcutaneous implants: natural latex biomembrane (NLB), denatured latex (DL), expanded polytetrafluorethylene (ePTFE), or sham. On the 2nd, 7th, and 14th days post-treatment, 5 mice per group were sacrificed and biopsied for the following measurements: oxidative stress based on malondialdehyde (MDA), myeloperoxidase (MPO) and hydrogen peroxide by the method of ferrous oxidation-xylenol orange (FOX), as well as glutathione and total proteins; histological evaluation to enumerate inflammatory cells, fibroblasts, blood vessels, and collagen, and immunohistochemical staining for inducible nitric oxide synthase, interleukin-1β, vascular endothelial growth factor (VEGF), and transforming growth factor-β1 (TGF-β1). On day 2 post-treatment, NLB stimulated a dense inflammatory infiltrate mainly consisting of polymorphonuclear cells, as indicated by increased MPO (P < 0.05), but oxidative stress due to MDA was not observed until the 7th day (P < 0.05). The number of blood vessels was greater in NLB (P < 0.05) and DL (P < 0.05) mice compared to sham animals on day 14. NLB induced fibroplasia by day 14 (P < 0.05) with low expression of TGF-β1 and collagenesis. Thus, NLB significantly induced the inflammatory phase of healing mediated by oxidative stress, which appeared to influence the subsequent phases such as angiogenesis (with low expression of VEGF) and fibroplasia (independent of TGF-β1) without influencing collagenesis.
Soares, Diana Gabriela; Rosseto, Hebert Luís; Basso, Fernanda Gonçalves; Scheffel, Débora Salles; Hebling, Josimeri; Costa, Carlos Alberto de Souza
2016-01-01
The development of biomaterials capable of driving dental pulp stem cell differentiation into odontoblast-like cells able to secrete reparative dentin is the goal of current conservative dentistry. In the present investigation, a biomembrane (BM) composed of a chitosan/collagen matrix embedded with calcium-aluminate microparticles was tested. The BM was produced by mixing collagen gel with a chitosan solution (2:1), and then adding bioactive calcium-aluminate cement as the mineral phase. An inert material (polystyrene) was used as the negative control. Human dental pulp cells were seeded onto the surface of the test materials, and the cytocompatibility was evaluated by cell proliferation and cell morphology, assessed after 1, 7, 14 and 28 days in culture. The odontoblastic differentiation was evaluated by measuring alkaline phosphatase (ALP) activity, total protein production, gene expression of DMP-1/DSPP and mineralized nodule deposition. The pulp cells were able to attach onto the BM surface and spread, displaying a faster proliferative rate at initial periods than that of the control cells. The BM also acted on the cells to induce more intense ALP activity, protein production at 14 days, and higher gene expression of DSPP and DMP-1 at 28 days, leading to the deposition of about five times more mineralized matrix than the cells in the control group. Therefore, the experimental biomembrane induced the differentiation of pulp cells into odontoblast-like cells featuring a highly secretory phenotype. This innovative bioactive material can drive other protocols for dental pulp exposure treatment by inducing the regeneration of dentin tissue mediated by resident cells.
Structured analysis and modeling of complex systems
NASA Technical Reports Server (NTRS)
Strome, David R.; Dalrymple, Mathieu A.
1992-01-01
The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.
Project trades model for complex space missions
NASA Technical Reports Server (NTRS)
Girerd, Andre R.; Shishko, Roberto
2003-01-01
A Project Trades Model (PTM) is a collection of tools/simulations linked together to rapidly perform integrated system trade studies of performance, cost, risk, and mission effectiveness. An operating PTM captures the interactions between various targeted systems and subsystems through an exchange of computed variables of the constituent models. Selection and implementation of the order, method of interaction, model type, and envisioned operation of the ensemble of tools represents the key system engineering challenge of the approach. This paper describes an approach to building a PTM and using it to perform top-level system trades for a complex space mission. In particular, the PTM discussed here is for a future Mars mission involving a large rover.
On Complexity of the Quantum Ising Model
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Hastings, Matthew
2017-01-01
We study the complexity of several problems related to the Transverse field Ising Model (TIM). First, we consider the problem of estimating the ground state energy, known as the Local Hamiltonian Problem (LHP). It is shown that the LHP for TIM on degree-3 graphs is equivalent modulo polynomial reductions to the LHP for general k-local 'stoquastic' Hamiltonians with any constant k ≥ 2. This result implies that estimating the ground state energy of TIM on degree-3 graphs is a complete problem for the complexity class StoqMA, an extension of the classical class MA. As a corollary, we complete the complexity classification of 2-local Hamiltonians with a fixed set of interactions proposed recently by Cubitt and Montanaro. Secondly, we study quantum annealing algorithms for finding ground states of classical spin Hamiltonians associated with hard optimization problems. We prove that quantum annealing with TIM Hamiltonians is equivalent modulo polynomial reductions to quantum annealing with a certain subclass of k-local stoquastic Hamiltonians. This subclass includes all Hamiltonians representable as a sum of a k-local diagonal Hamiltonian and a 2-local stoquastic Hamiltonian.
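The 'stoquastic' property is concrete at small sizes: in the computational basis the TIM Hamiltonian has only non-positive off-diagonal entries. A two-spin exact-diagonalization sketch, illustrating the model itself rather than the paper's reductions:

```python
import numpy as np

# Pauli matrices and the two-spin transverse field Ising Hamiltonian
# H = -J Z⊗Z - h (X⊗I + I⊗X)
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def tim_ground_energy(J, h):
    H = -J * np.kron(Z, Z) - h * (np.kron(X, I) + np.kron(I, X))
    # All off-diagonal entries of H equal -h <= 0: the stoquastic property
    return np.linalg.eigvalsh(H)[0]

# For J = h = 1 the exact two-spin ground energy is -sqrt(5)
print(round(tim_ground_energy(1.0, 1.0), 4))  # -> -2.2361
```

At h = 0 the model is classical and the ground energy reduces to -J; turning on the transverse field lowers it further by mixing the basis states.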
Lattice Boltzmann model for the complex Ginzburg-Landau equation.
Zhang, Jianying; Yan, Guangwu
2010-06-01
A lattice Boltzmann model with a complex distribution function for the complex Ginzburg-Landau equation (CGLE) is proposed. By using a multiscale technique and the Chapman-Enskog expansion on complex variables, we obtain a series of complex partial differential equations. Then, the complex equilibrium distribution function and its complex moments are obtained. Based on this model, the rotation and oscillation properties of stable spiral waves and the breaking-up behavior of unstable spiral waves in CGLE are investigated in detail.
Complex Educational Design: A Course Design Model Based on Complexity
ERIC Educational Resources Information Center
Freire, Maximina Maria
2013-01-01
Purpose: This article aims at presenting a conceptual framework which, theoretically grounded on complexity, provides the basis to conceive of online language courses that intend to respond to the needs of students and society. Design/methodology/approach: This paper is introduced by reflections on distance education and on the paradigmatic view…
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call 'multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
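The local baseline that the abstract argues is insufficient can still be made concrete: near-zero eigenvalues of the Fisher information matrix J^T J flag locally unidentifiable parameter directions. A toy sketch with a hypothetical model y = a*b*t, in which only the product a*b is identifiable:

```python
import numpy as np

def local_unidentifiability(jacobian, tol=1e-8):
    """Eigenvectors of the (unit-noise) Fisher information J^T J whose
    eigenvalues are near zero: parameter directions the data cannot
    constrain, to leading (local) order."""
    fim = jacobian.T @ jacobian
    w, v = np.linalg.eigh(fim)  # eigenvalues in ascending order
    return [v[:, i] for i in range(len(w)) if w[i] < tol * w[-1]]

# Toy model y(t) = a*b*t: the Jacobian columns dy/da = b*t and
# dy/db = a*t are proportional, so one (a, b) direction is unconstrained.
t = np.linspace(0.0, 1.0, 20)
a, b = 2.0, 3.0
J = np.column_stack([b * t, a * t])
print(len(local_unidentifiability(J)))  # -> 1
```

Multiscale sloppiness extends this idea beyond the infinitesimal-noise regime, which is precisely where the local picture above breaks down.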
Using Perspective to Model Complex Processes
Kelsey, R.L.; Bisset, K.R.
1999-04-04
The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and phase changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model. Additionally, the ability to record the progress of a process from start to finish is problematic. It is desirable to have a historic record of the entire process, not just the end result of the process. A historic record should facilitate backtracking and re-start of a process at different points in time. The same representation structures and techniques can be used to create a sequence of process markers to represent a historic record. By using perspective, the sequence of markers can have multiple and varying views tailored for a particular user's context of interest.
Reducing Spatial Data Complexity for Classification Models
NASA Astrophysics Data System (ADS)
Ruta, Dymitr; Gabrys, Bogdan
2007-11-01
Intelligent data analytics gradually becomes a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques ranging from data sampling up to density retention models attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is the proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates, and as shown in a series of experiments, when coupled with a Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the
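The PDC component paired with PLDC above is a standard Parzen-window (kernel density) classifier. A minimal 1-D sketch with a Gaussian kernel and hypothetical toy data (the condensation step itself is not reproduced here):

```python
import math

def parzen_classify(x, data, labels, h=1.0):
    """Assign x to the class whose Gaussian kernel density sum at x is
    largest (kernel width h). Normalization constants cancel when class
    priors are taken proportional to class sizes."""
    scores = {}
    for xi, yi in zip(data, labels):
        scores[yi] = scores.get(yi, 0.0) + math.exp(-((x - xi) ** 2) / (2 * h * h))
    return max(scores, key=scores.get)

# Two 1-D classes centred near 0 and 5
data = [-0.5, 0.0, 0.5, 4.5, 5.0, 5.5]
labels = ['A', 'A', 'A', 'B', 'B', 'B']
print(parzen_classify(0.2, data, labels))  # -> A
print(parzen_classify(4.8, data, labels))  # -> B
```

A condensed dataset with per-point class weights plugs into the same rule by multiplying each kernel term by its accumulated weight.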
Koynova, Rumiana; MacDonald, Robert C.
2010-01-18
A viewpoint now emerging is that a critical factor in lipid-mediated transfection (lipofection) is the structural evolution of lipoplexes upon interacting and mixing with cellular lipids. Here we report our finding that lipid mixtures mimicking biomembrane lipid compositions are superior to pure anionic liposomes in their ability to release DNA from lipoplexes (cationic lipid/DNA complexes), even though they have a much lower negative charge density (and thus lower capacity to neutralize the positive charge of the lipoplex lipids). Flow fluorometry revealed that the portion of DNA released after a 30-min incubation of the cationic O-ethylphosphatidylcholine lipoplexes with the anionic phosphatidylserine or phosphatidylglycerol was 19% and 37%, respectively, whereas a mixture mimicking biomembranes (MM: phosphatidylcholine/phosphatidylethanolamine/phosphatidylserine /cholesterol 45:20:20:15 w/w) and polar lipid extract from bovine liver released 62% and 74%, respectively, of the DNA content. A possible reason for this superior power in releasing DNA by the natural lipid mixtures was suggested by structural experiments: while pure anionic lipids typically form lamellae, the natural lipid mixtures exhibited a surprising predilection to form nonlamellar phases. Thus, the MM mixture arranged into lamellar arrays at physiological temperature, but began to convert to the hexagonal phase at a slightly higher temperature, ~40-45 °C. A propensity to form nonlamellar phases (hexagonal, cubic, micellar) at close to physiological temperatures was also found with the lipid extracts from natural tissues (from bovine liver, brain, and heart). This result reveals that electrostatic interactions are only one of the factors involved in lipid-mediated DNA delivery. The tendency of lipid bilayers to form nonlamellar phases has been described in terms of bilayer 'frustration' which imposes a nonzero intrinsic curvature of the two opposing monolayers. Because the stored curvature
Advanced Combustion Modeling for Complex Turbulent Flows
NASA Technical Reports Server (NTRS)
Ham, Frank Stanford
2005-01-01
The next generation of aircraft engines will need to pass stricter efficiency and emission tests. NASA's Ultra-Efficient Engine Technology (UEET) program has set an ambitious goal of 70% reduction of NO(x) emissions and a 15% increase in fuel efficiency of aircraft engines. We will demonstrate the state-of-the-art combustion tools developed at Stanford's Center for Turbulence Research (CTR) as part of this program. In the last decade, CTR has spearheaded a multi-physics-based combustion modeling program. Key technologies have been transferred to the aerospace industry and are currently being used for engine simulations. In this demo, we will showcase the next-generation combustion modeling tools that integrate a very high level of detailed physics into advanced flow simulation codes. Combustor flows involve multi-phase physics with liquid fuel jet breakup, evaporation, and eventual combustion. Individual components of the simulation are verified against complex test cases and show excellent agreement with experimental data.
Discrete Element Modeling of Complex Granular Flows
NASA Astrophysics Data System (ADS)
Movshovitz, N.; Asphaug, E. I.
2010-12-01
Granular materials occur almost everywhere in nature, and are actively studied in many fields of research, from the food industry to planetary science. One approach to the study of granular media, the continuum approach, attempts to find a constitutive law that determines the material's flow, or strain, under applied stress. The main difficulty with this approach is that granular systems exhibit different behavior under different conditions, behaving at times as an elastic solid (e.g. a pile of sand), at times as a viscous fluid (e.g. when poured), or even as a gas (e.g. when shaken). Even if all these physics are accounted for, numerical implementation is made difficult by the wide and often discontinuous ranges in continuum density and sound speed. A different approach is Discrete Element Modeling (DEM). Here the goal is to directly model every grain in the system as a rigid body subject to various body and surface forces. The advantage of this method is that it treats all of the above regimes in the same way, and can easily deal with a system moving back and forth between regimes. But as a granular system typically contains a multitude of individual grains, the direct integration of the system can be very computationally expensive. For this reason most DEM codes are limited to spherical grains of uniform size. However, spherical grains often cannot replicate the behavior of real world granular systems. A simple pile of spherical grains, for example, relies on static friction alone to keep its shape, while in reality a pile of irregular grains can maintain a much steeper angle by interlocking force chains. In the present study we employ a commercial DEM, nVidia's PhysX Engine, originally designed for the game and animation industry, to simulate complex granular flows with irregular, non-spherical grains. This engine runs as a multi-threaded process and can be GPU accelerated. We demonstrate the code's ability to physically model granular materials in the three regimes
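The core of a soft-sphere DEM code is the per-contact force law. A common minimal choice, a linear spring-dashpot (a simplification of what engines such as PhysX actually solve), can be sketched as:

```python
def contact_force(overlap, rel_vel, k=1.0e4, c=5.0):
    """Linear spring-dashpot normal contact force: a repulsive spring
    on the grain overlap plus damping proportional to the relative
    normal velocity. Grains that do not touch exert no force."""
    if overlap <= 0.0:
        return 0.0
    return k * overlap - c * rel_vel

print(contact_force(0.00, 1.0))  # no contact -> 0.0
print(contact_force(0.01, 0.0))  # pure elastic push: 1e4 * 0.01 = 100.0
```

Summing these pairwise forces over all contacts and integrating Newton's equations per grain is what makes DEM expensive, and why spheres (with cheap contact detection) are so tempting despite their unrealistic pile behavior.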
NASA Astrophysics Data System (ADS)
Qi, Wei; Ishimaru, Ichiro
2010-02-01
We propose an image-producing Fourier spectroscopic technology that enables two-dimensional spectroscopic images to be obtained within the focusing plane alone. This technology incorporates auto-correlational phase-shift interferometry that uses only the object light generated by the bright points that optically make up the object. We are currently involved in studies of non-invasive technologies for measuring blood components such as glucose and lipids in daily life. Previous studies have investigated non-invasive technologies that measure blood glucose levels by utilizing near-infrared light, which permeates the skin well. It has been confirmed that subtle changes in the concentration of a glucose solution, a sample used to measure the glucose level, can be measured by analyzing the spectroscopic characteristics of near-infrared light; however, when applied to a biomembrane, such technology is incapable of precisely measuring the glucose level because light diffusion within the skin disturbs the measurement. Our proposed technology enables two-dimensional spectroscopy at a limited depth below the skin surface. Specifically, our technology concentrates only on the vascular territory near the skin surface, which is only minimally affected by light diffusion, as discussed previously; the spectroscopic characteristics of this territory are obtained and the glucose level can be measured with good sensitivity. In this paper we propose an image-producing Fourier spectroscopy method that is used as the measuring technology in producing a three-dimensional spectroscopic image.
Wound healing modulation by a latex protein-containing polyvinyl alcohol biomembrane.
Ramos, Márcio V; de Alencar, Nylane Maria N; de Oliveira, Raquel S B; Freitas, Lyara B N; Aragão, Karoline S; de Andrade, Thiago Antônio M; Frade, Marco Andrey C; Brito, Gerly Anne C; de Figueiredo, Ingrid Samantha T
2016-07-01
In a previous study, we performed the chemical characterization of a polyvinyl alcohol (PVA) membrane supplemented with latex proteins (LP) displaying wound healing activity, and its efficacy as a delivery system was demonstrated. Here, we report on aspects of the mechanism underlying the performance of the PVA-latex protein biomembrane on wound healing. LP-PVA, but not PVA, induced more intense leukocyte (neutrophil) migration and mast cell degranulation during the inflammatory phase of the cicatricial process. Likewise, LP-PVA induced an increase in key markers and mediators of the inflammatory response (myeloperoxidase activity, nitric oxide, TNF, and IL-1β). These results demonstrated that LP-PVA significantly accelerates the early phase of the inflammatory process by upregulating cytokine release. This remarkable effect improves the subsequent phases of the healing process. The polyvinyl alcohol membrane was fully absorbed as an inert support while LP was shown to be active. It is therefore concluded that the LP-PVA is a suitable bioresource for biomedical engineering.
Ambjörnsson, Tobias; Lomholt, Michael A; Hansen, Per Lyngs
2007-05-01
We investigate the effect on biomembrane mechanical properties due to the presence of an external potential for a nonconductive incompressible membrane surrounded by different electrolytes. By solving the Debye-Hückel and Laplace equations for the electrostatic potential and using the relevant stress tensor we find (1) in the small screening length limit, where the Debye screening length is smaller than the distance between the electrodes, the screening certifies that all electrostatic interactions are short range and the major effect of the applied potential is to decrease the membrane tension and increase the bending rigidity; explicit expressions for the electrostatic contribution to the tension and bending rigidity are derived as a function of the applied potential, the Debye screening lengths, and the dielectric constants of the membrane and the solvents. For sufficiently large voltages the negative contribution to the tension is expected to cause a membrane stretching instability. (2) For the dielectric limit, i.e., no salt (and small wave vectors compared to the distance between the electrodes), when the dielectric constants on the two sides are different the applied potential induces an effective (unscreened) membrane charge density, whose long-range interaction is expected to lead to a membrane undulation instability.
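The small screening length limit above is quantified by the Debye length, which for a symmetric 1:1 electrolyte follows directly from Debye-Hückel theory. A sketch using standard CODATA constants (the specific parameter values are illustrative, not the paper's):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
KB = 1.380649e-23        # Boltzmann constant, J/K
E = 1.602176634e-19      # elementary charge, C
NA = 6.02214076e23       # Avogadro constant, 1/mol

def debye_length(conc_molar, eps_r=78.5, temp=298.15):
    """Debye screening length for a 1:1 electrolyte:
    lambda_D = sqrt(eps_r*eps0*kB*T / (2*n0*e^2)), with n0 in ions/m^3."""
    n0 = conc_molar * 1000.0 * NA
    return math.sqrt(eps_r * EPS0 * KB * temp / (2.0 * n0 * E * E))

# Physiological saline (~0.15 M) screens on the sub-nanometre scale
print(round(debye_length(0.15) * 1e9, 2))  # -> 0.79 (nm)
```

Since electrode separations are micrometres or more, physiological salt puts a membrane deep in the short-range (small screening length) regime; only the no-salt dielectric limit recovers long-range interactions.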
Probing protein-protein interaction in biomembranes using Fourier transform infrared spectroscopy.
Haris, Parvez I
2013-10-01
The position, intensity and width of bands in infrared spectra that arise from vibrational modes within a protein can be used to probe protein secondary structure, amino acid side chain structure as well as protein dynamics and stability. FTIR spectroscopic studies on protein-protein interaction have been severely limited due to extensive overlap of peaks, from the interacting proteins. This problem is being addressed by combining data processing and acquisition techniques (difference spectroscopy and two-dimensional spectroscopy) with judicious modifications in the protein primary structure through molecular biological and chemical methods. These include the ability to modify amino acids (site-directed mutagenesis; chemical synthesis) and produce isotopically labelled proteins and peptides. Whilst great progress is being made towards overcoming the congestion of overlapping peaks, the slow progress in the assignment of bands continues to be a major hindrance in the use of infrared spectroscopy for obtaining highly accurate and precise information on protein structure. This review discusses some of these problems and presents examples of infrared studies on protein-protein interaction in biomembrane systems. This article is part of a Special Issue entitled: FTIR in membrane proteins and peptide studies.
Ultrasonic ray models for complex geometries
NASA Astrophysics Data System (ADS)
Schumm, A.
2000-05-01
Computer Aided Design techniques have become an inherent part of many industrial applications and are also gaining popularity in Nondestructive Testing. In sound field calculations, CAD representations can contribute to one of the generic problems in ultrasonic modeling, the wave propagation in complex geometries. Ray tracing codes were the first to take account of the geometry, providing qualitative information on beam propagation, such as geometrical echoes, multiple sound paths and possible conversions between wave modes. The forward ray tracing approach is intuitive and straightforward and can evolve towards a more quantitative code if transmission, divergence and polarization information is added. If used to evaluate the impulse response of a given geometry, an approximated time-dependent received signal can be obtained after convolution with the excitation signal. The more accurate reconstruction of a sound field after interaction with a geometrical interface according to ray theory requires inverse (or Fermat) ray tracing to obtain the contribution of each elementary point source to the field at a given observation point. The resulting field of a finite transducer can then be obtained after integration over all point sources. While conceptually close to classical ray tracing, this approach puts more stringent requirements on the CAD representation employed and is more difficult to extend towards multiple interfaces. In this communication we present examples for both approaches. In a prospective step, the link between both ray techniques is shown, and we illustrate how a combination of both approaches contributes to the solution of an industrial problem.
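At each geometrical interface, both the forward and the Fermat ray tracers rely on Snell's law. A minimal sketch for refraction from water into steel (the sound speeds are nominal textbook values, not from this work):

```python
import math

def refracted_angle(theta_inc, c1, c2):
    """Snell's law for a ray crossing an interface between media with
    wave speeds c1 and c2 (angles measured from the normal, radians).
    Returns None beyond the critical angle."""
    s = math.sin(theta_inc) * c2 / c1
    if abs(s) > 1.0:
        return None  # no transmitted ray of this mode
    return math.asin(s)

# Longitudinal wave from water (1480 m/s) into steel (5900 m/s)
theta_t = refracted_angle(math.radians(10.0), 1480.0, 5900.0)
print(round(math.degrees(theta_t), 1))  # -> 43.8
```

Mode conversion means the same incidence angle also yields a shear-wave branch with its own speed; a full ray tracer evaluates both, which is one source of the multiple sound paths mentioned above.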
Modeling competitive substitution in a polyelectrolyte complex
Peng, B.; Muthukumar, M.
2015-12-28
We have simulated the invasion of a polyelectrolyte complex, made of a polycation chain and a polyanion chain, by another, longer polyanion chain, using a coarse-grained united-atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the displaced chain for the common complementary chain. We show that the invading chain must be sufficiently longer than the displaced chain to effect the substitution; yet, lengthening the invading chain beyond a certain threshold does not reduce the substitution time much further. While most of the simulations were carried out under salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes is the release of counterions during the substitution.
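Langevin dynamics integrates deterministic forces together with friction and random thermal kicks. As a minimal, self-contained illustration (a single overdamped bead in a harmonic well, in reduced units; this is a generic sketch, not the authors' united-atom polyelectrolyte model), the Euler-Maruyama update reads:

```python
import math
import random

def overdamped_langevin(k=1.0, kT=1.0, gamma=1.0, dt=1e-3,
                        steps=1_000_000, seed=0):
    """Euler-Maruyama integration of overdamped Langevin dynamics for a
    single bead in a harmonic well U(x) = k x^2 / 2:
        x <- x - (k/gamma) x dt + sqrt(2 kT dt / gamma) * N(0, 1)
    Returns positions sampled after a burn-in period."""
    rng = random.Random(seed)
    kick = math.sqrt(2.0 * kT * dt / gamma)   # fluctuation-dissipation
    x, samples = 0.0, []
    for n in range(steps):
        x += -(k / gamma) * x * dt + kick * rng.gauss(0.0, 1.0)
        if n > steps // 10:                   # discard equilibration
            samples.append(x)
    return samples

xs = overdamped_langevin()
var = sum(x * x for x in xs) / len(xs)        # equipartition predicts kT/k = 1
```

The sampled variance approaching the equipartition value kT/k is a standard check that the thermostat is correct before moving on to multi-chain systems with electrostatics.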
Clinical complexity in medicine: A measurement model of task and patient complexity
Islam, R.; Weir, C.; Fiol, G. Del
2016-01-01
Background: Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. Objective: The objective of this paper is to develop an integrated approach to understanding and measuring clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain; the measurement model was adapted and modified for the healthcare domain. Methods: Three clinical Infectious Disease teams were observed, audio-recorded and transcribed. Each team included an Infectious Diseases expert, one Infectious Diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial round of coding and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds in the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. Results: The proposed clinical complexity model consists of two separate components: a clinical task complexity model with 13 complexity-contributing factors and 7 dimensions, and a patient complexity model with 11 complexity-contributing factors and 5 dimensions. Conclusion: A measurement model encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare. PMID:26404626
Modeling Complex Chemical Systems: Problems and Solutions
NASA Astrophysics Data System (ADS)
van Dijk, Jan
2016-09-01
Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRTs). The idea of such CRTs is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT to be widely used in plasma physics was developed in the 1960s and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques previously developed in other fields of science are being adapted to handle the plasma state of matter. Examples are the Intrinsic Low-Dimensional Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principal Component Analysis (PCA) technique. In this contribution we provide an overview of the most common reduction techniques, then critically assess the pros and cons of the methods that have gained most popularity in recent years. Examples are provided for plasmas in argon and carbon dioxide.
Power Curve Modeling in Complex Terrain Using Statistical Models
NASA Astrophysics Data System (ADS)
Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.
2014-12-01
Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with a moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
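The "traditional power output curve" that such statistical models are benchmarked against is, in its idealised form, a piecewise function of hub-height wind speed alone: zero below cut-in, roughly cubic up to rated speed (power in the wind scales as v³), then flat until cut-out. A sketch (the turbine parameters are illustrative, not those of the Altamont Pass farm):

```python
def power_curve(v, cut_in=3.0, rated_speed=12.0, cut_out=25.0,
                rated_power=1500.0):
    """Idealised hub-height power curve (power in kW, speed in m/s):
    zero below cut-in, a cubic ramp between cut-in and rated speed,
    rated power up to cut-out, zero above cut-out."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_speed:
        return rated_power
    frac = (v**3 - cut_in**3) / (rated_speed**3 - cut_in**3)
    return rated_power * frac
```

The statistical models in the study add predictors (wind speeds at other heights, atmospheric variables) precisely because this single-input curve leaves much of the variability in observed power unexplained, especially in complex terrain.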
NASA Astrophysics Data System (ADS)
Li, Li; Wu, Qing-Sheng; Ding, Ya-Ping
2004-12-01
In this paper, a novel method is reported in which semiconductor materials are synthesized on controlled living-organism membranes. Lead selenide nanorods and nanotubes have been successfully prepared simultaneously on living bio-membrane bi-templates from mung bean sprouts. The lead selenide nanorods are approximately 45 nm in diameter and up to 1100 nm in length, and all of them are single-crystalline. The nanotubes are 50 nm in diameter, up to 2000 nm in length, and polycrystalline. The products are characterized by various means, and a possible formation mechanism is explored.
Spatiotemporal Organization of Spin-Coated Supported Model Membranes
NASA Astrophysics Data System (ADS)
Simonsen, Adam Cohen
All cells of living organisms are separated from their surroundings and organized internally by means of flexible lipid membranes. In fact, there is consensus that the minimal requirements for self-replicating life processes include the following three features: (1) information carriers (DNA, RNA), (2) a metabolic system, and (3) encapsulation in a container structure [1]. Therefore, encapsulation can be regarded as an essential part of life itself. In nature, membranes are highly diverse interfacial structures that compartmentalize cells [2]. While prokaryotic cells only have an outer plasma membrane and a less-well-developed internal membrane structure, eukaryotic cells have a number of internal membranes associated with the organelles and the nucleus. Many of these membrane structures, including the plasma membrane, are complex layered systems, but with the basic structure of a lipid bilayer. Biomembranes contain hundreds of different lipid species in addition to embedded or peripherally associated membrane proteins and connections to scaffolds such as the cytoskeleton. In vitro, lipid bilayers are spontaneously self-organized structures formed by a large group of amphiphilic lipid molecules in aqueous suspensions. Bilayer formation is driven by the entropic properties of the hydrogen bond network in water in combination with the amphiphilic nature of the lipids. The molecular shapes of the lipid constituents play a crucial role in bilayer formation, and only lipids with approximately cylindrical shapes are able to form extended bilayers. The bilayer structure of biomembranes was discovered by Gorter and Grendel in 1925 [3] using monolayer studies of lipid extracts from red blood cells. Later, a number of conceptual models were developed to rationalize the organization of lipids and proteins in biological membranes. One of the most celebrated is the fluid-mosaic model by Singer and Nicolson (1972) [4]. According to this model, the lipid bilayer component of
Modeling Complex Workflow in Molecular Diagnostics
Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan
2010-01-01
One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844
Specifying and Refining a Complex Measurement Model.
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…
Acquisition of Complex Systemic Thinking: Mental Models of Evolution
ERIC Educational Resources Information Center
d'Apollonia, Sylvia T.; Charles, Elizabeth S.; Boyd, Gary M.
2004-01-01
We investigated the impact of introducing college students to complex adaptive systems on their subsequent mental models of evolution compared to those of students taught in the same manner but with no reference to complex systems. The students' mental models (derived from similarity ratings of 12 evolutionary terms using the pathfinder algorithm)…
Complex Systems and Human Performance Modeling
2013-12-01
human communication patterns can be implemented in a task network modeling tool. Although queues are a basic feature in many task network modeling...time. MODELING COMMUNICATIVE BEHAVIOR Barabasi (2010) argues that human communication patterns are “bursty”; that is, the inter-event arrival...Having implemented the methods advocated by Clauset et al. in C3TRACE, we have grown more confident that the human communication data discussed above
Multiscale Computational Models of Complex Biological Systems
Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.
2014-01-01
Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247
Information, complexity and efficiency: The automobile model
Allenby, B. |
1996-08-08
The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.
Slip complexity in dynamic models of earthquake faults.
Langer, J S; Carlson, J M; Myers, C R; Shaw, B E
1996-01-01
We summarize recent evidence that models of earthquake faults with dynamically unstable friction laws but no externally imposed heterogeneities can exhibit slip complexity. Two models are described here. The first is a one-dimensional model with velocity-weakening stick-slip friction; the second is a two-dimensional elastodynamic model with slip-weakening friction. Both exhibit small-event complexity and chaotic sequences of large characteristic events. The large events in both models are composed of Heaton pulses. We argue that the key ingredients of these models are reasonably accurate representations of the properties of real faults. PMID:11607671
Modeling Power Systems as Complex Adaptive Systems
Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.
2004-12-30
Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.
Integrated Modeling of Complex Optomechanical Systems
NASA Astrophysics Data System (ADS)
Andersen, Torben; Enmark, Anita
2011-09-01
Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons: first, projects are now larger than they were before, and the high cost calls for detailed performance prediction before construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time, mathematical tools have been further developed and have found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural and control system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of ground-based optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top-ranked specialists found their way to Kiruna, and we believe that these proceedings will prove valuable during much future work.
Reduced-Complexity Models for Network Performance Prediction
2005-05-01
...interconnected in complex ways, with millions of users sending traffic over the network. To understand such a complex system, it is necessary to develop accurate, yet simple, models to describe the performance... (Figures include the number of downloaders over time and a network of ISP clouds in which the ISPs are connected via peering points.)
A simple model clarifies the complicated relationships of complex networks
Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi
2014-01-01
Real-world networks such as the Internet and the WWW share many common traits. Hundreds of models have been proposed to characterize these traits and thereby understand the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra-small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships among complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation. PMID:25160506
Complex Chebyshev-polynomial-based unified model (CCPBUM) neural networks
NASA Astrophysics Data System (ADS)
Jeng, Jin-Tsong; Lee, Tsu-Tian
1998-03-01
In this paper, we propose a complex Chebyshev-polynomial-based unified model neural network for the approximation of complex-valued functions. Based on this approximate transformable technique, we derive the relationship between the single-layered neural network and the multi-layered perceptron neural network. It is shown that the complex Chebyshev-polynomial-based unified model neural network can be represented as a functional-link network based on Chebyshev polynomials. We also derive a new learning algorithm for the proposed network. It turns out that the network not only retains the capability of a universal approximator, but also learns faster than conventional complex feedforward/recurrent neural networks.
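The functional-link idea is that expanding the input through a fixed Chebyshev basis makes the network linear in its trainable weights, which is what underlies the faster learning the authors report. A real-valued sketch (the complex-valued case follows the same pattern; all names, parameters and the training target here are illustrative):

```python
def cheb_features(x, order):
    """Chebyshev features T_0(x)..T_order(x) for x in [-1, 1], built with
    the recurrence T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x)."""
    feats = [1.0, x]
    for _ in range(order - 1):
        feats.append(2.0 * x * feats[-1] - feats[-2])
    return feats[: order + 1]

def fit_functional_link(xs, ys, order=3, lr=0.05, epochs=4000):
    """Single-layer functional-link fit y ~ w . T(x), trained by plain
    stochastic gradient descent on squared error. The model is linear
    in w, so training is fast and has no hidden-layer local minima."""
    w = [0.0] * (order + 1)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            f = cheb_features(x, order)
            err = sum(wi * fi for wi, fi in zip(w, f)) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

xs = [i / 10.0 for i in range(-10, 11)]
ys = [x**3 for x in xs]            # x^3 = (3 T_1 + T_3) / 4 exactly
w = fit_functional_link(xs, ys)
```

Because x³ is exactly representable in this basis, the fitted weights should converge toward [0, 0.75, 0, 0.25], illustrating why a functional-link expansion can outperform iterative training of a nonlinear hidden layer on such targets.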
Zhang, Jingtao; Fan, Haihong; Levorse, Dorothy A; Crocker, Louis S
2011-03-01
Ionizable amino lipids are being pursued as an important class of materials for delivering small interfering RNA (siRNA) therapeutics, and research is being conducted to elucidate the structure-activity relationships (SAR) of these lipids. The pK(a) of the cationic lipid headgroup is one of the critical physicochemical properties of interest, owing to the strong impact of lipid ionization on the assembly and performance of these lipids. This research focused on developing approaches that permit the rapid determination of the relevant pK(a) of ionizable amino lipids. Two distinct approaches were investigated: (1) potentiometric titration of amino lipids dissolved in neutral surfactant micelles; and (2) pH-dependent partitioning of a fluorescent dye to cationic liposomes formulated from amino lipids. Using the approaches developed here, the pK(a) values of cationic lipids with distinct headgroups were measured and found to be significantly lower than calculated values. It was also found that lipid-lipid interaction has a strong impact on the pK(a) values of lipids. Lysis of model biomembranes by cationic lipids was used to evaluate the impact of lipid pK(a) on the interaction between cationic lipids and cell membranes. It was found that the cationic lipid-biomembrane interaction depends strongly on lipid pK(a) and solution pH, and that this interaction is much stronger when the amino lipids are highly charged. The presence of an optimal pK(a) range of ionizable amino lipids for siRNA delivery was suggested based on these results. The pK(a) methods reported here can be used to support SAR screening of cationic lipids for siRNA delivery, and the information revealed through studying the impact of pK(a) on the interaction between cationic lipids and cell membranes will contribute significantly to the design of more efficient siRNA delivery vehicles.
A musculoskeletal model of the elbow joint complex
NASA Technical Reports Server (NTRS)
Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.
1993-01-01
This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.
An elementary method for implementing complex biokinetic models.
Leggett, R W; Eckerman, K F; Williams, L R
1993-03-01
Recent efforts to incorporate greater anatomical and physiological realism into biokinetic models have resulted in many cases in mathematically complex formulations that limit routine application of the models. This paper describes an elementary, computer-efficient technique for implementing complex compartmental models, with attention focused primarily on biokinetic models involving time-dependent transfer rates and recycling. The technique applies, in particular, to the physiologically based, age-specific biokinetic models recommended in Publication No. 56 of the International Commission on Radiological Protection, Age-Dependent Doses to Members of the Public from Intake of Radionuclides.
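The kind of compartmental system the technique targets, with transfer rates that depend on time and recycling between compartments, can be written as a small ODE system and stepped with any standard integrator. A minimal sketch (a two-compartment toy model with invented rate constants, not one of the ICRP biokinetic models):

```python
import math

def rk4_step(f, t, y, dt):
    """One classical Runge-Kutta (RK4) step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def two_compartment(t, y):
    """Blood <-> tissue exchange with recycling. The blood-to-tissue rate
    decays with time (a crude stand-in for a time-dependent transfer
    coefficient); tissue recycles back to blood at a constant rate."""
    blood, tissue = y
    k_bt = 0.5 * math.exp(-0.1 * t)   # time-dependent transfer rate (1/day)
    k_tb = 0.2                        # recycling rate (1/day)
    return [-k_bt * blood + k_tb * tissue,
             k_bt * blood - k_tb * tissue]

t, y, dt = 0.0, [1.0, 0.0], 0.01      # all activity starts in blood
while t < 10.0:
    y = rk4_step(two_compartment, t, y, dt)
    t += dt
total = y[0] + y[1]                   # closed system: should remain 1.0
```

In a closed system the transfer terms cancel pairwise, so total activity is conserved; checking this after integration is a cheap correctness test before scaling up to many compartments with age-specific rates.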
Classrooms as Complex Adaptive Systems: A Relational Model
ERIC Educational Resources Information Center
Burns, Anne; Knox, John S.
2011-01-01
In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…
Realistic modeling of complex oxide materials
NASA Astrophysics Data System (ADS)
Solovyev, I. V.
2011-01-01
Since the electronic and magnetic properties of many transition-metal oxides can be efficiently controlled by external factors such as temperature, pressure, or electric and magnetic fields, these materials are regarded as promising for various applications. From the viewpoint of the electronic structure, these phenomena are frequently related to the behavior of a small group of states located near the Fermi level. The basic idea of this project is to construct a model for the low-energy states, derive all of its parameters rigorously on the basis of density functional theory (DFT), and study this model with modern techniques. After a brief review of the method, the abilities of this approach will be illustrated with a number of examples, including multiferroic manganites and spin-orbital-lattice coupled phenomena in RVO3 (where R is a trivalent element).
Complex Network Modeling with an Emulab HPC
2012-09-01
field. Actual Joint Tactical Radio System (JTRS) radios, Operations Network (OPNET) emulations, and GNU (a recursive acronym: GNU's Not Unix)...open-source software-defined-radio software/firmware/hardware emulations can be accommodated. Index Terms: network emulation, Emulab, OPNET...On the other hand, simulation tools such as MATLAB, Optimized Network Engineering Tools (OPNET), NS2, and CORE (a modeling environment from Vitech)
Computational Modeling of Uranium Hydriding and Complexes
Balasubramanian, K; Siekhaus, W J; McLean, W
2003-02-03
Uranium hydriding is one of the most important processes and has received considerable attention over many years. Although many experimental and modeling studies have been carried out concerning the thermochemistry, diffusion kinetics and mechanisms of U-hydriding, very little is known about the electronic structure and the electronic features that govern the U-hydriding process. Yet it is these electronic features that control the activation barrier and thus the rate of hydriding. Moreover, the role of impurities and the role of the product UH{sub 3} in the hydriding rate are not fully understood. An early study by Condon and Larson concerns the kinetics of the U-hydrogen system and a mathematical model for the U-hydriding process. They proposed that hydrogen diffuses in the reactant phase before nucleation to form the hydride phase, and that the reaction is first order for hydriding and zero order for dehydriding. Condon has also calculated and measured the reaction rates of U-hydriding and proposed a diffusion model for it; this model was found to be in excellent agreement with the experimental reaction rates. From the slopes of the Arrhenius plot the activation energy was calculated as 6.35 kcal/mole. In a subsequent study, Kirkpatrick formulated a closed-form approximate solution to Condon's equation. Bloch and Mintz have proposed the kinetics and mechanism of the U-H reaction over a wide range of pressures and temperatures. They discussed their results through two models: one that considers hydrogen diffusion through a protective UH{sub 3} product layer, and a second in which hydride growth occurs at the hydride-metal interface. These authors obtained two-dimensional fits of experimental data to the pressure-temperature relations. Kirkpatrick and Condon have obtained a linear solution to the hydriding of uranium and showed that the calculated reaction rates compare quite well with the experimental data at a hydrogen pressure of 1 atm. Powell
Model complexity and performance: How far can we simplify?
NASA Astrophysics Data System (ADS)
Raick, C.; Soetaert, K.; Grégoire, M.
2006-07-01
Handling model complexity and reliability is a key area of research today. While complex models containing sufficient detail have become possible due to increased computing power, they often lead to too much uncertainty. On the other hand, very simple models often crudely oversimplify the real ecosystem and cannot be used for management purposes. Starting from a complex and validated 1D pelagic ecosystem model of the Ligurian Sea (NW Mediterranean Sea), we derived simplified aggregated models in which either the unbalanced algal growth, the functional group diversity or the explicit description of the microbial loop was sacrificed. To overcome the problem of data availability with adequate spatial and temporal resolution, the outputs of the complex model are used as the baseline of perfect knowledge to calibrate the simplified models. Objective criteria of model performance were used to compare the simplified models' results to the complex model output and to the available data at the DYFAMED station in the central Ligurian Sea. We show that even the simplest (NPZD) model is able to represent the global ecosystem features described by the complex model (e.g. primary and secondary production, particulate organic matter export flux, etc.). However, a certain degree of sophistication in the formulation of some biogeochemical processes is required to produce realistic behaviors (e.g. phytoplankton competition, the potential carbon or nitrogen limitation of zooplankton ingestion, the model's trophic closure, etc.). In general, a 9 state-variable model that has the functional group diversity removed, but which retains the bacterial loop and the unbalanced algal growth, performs best.
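An NPZD model of the sort used as the simplest aggregated variant couples nutrient (N), phytoplankton (P), zooplankton (Z) and detritus (D) through uptake, grazing, mortality and remineralisation. A minimal closed-budget sketch (the functional forms and parameter values are generic textbook choices, not those of the Ligurian Sea model):

```python
def npzd_rates(state, mu=1.0, g=0.5, kn=0.3, kp=0.5,
               m_p=0.05, m_z=0.05, r=0.1):
    """Right-hand side of a minimal NPZD model with a closed nitrogen
    budget: Michaelis-Menten nutrient uptake, Holling-II grazing,
    linear mortalities, and detritus remineralisation to nutrient."""
    N, P, Z, D = state
    uptake = mu * N / (kn + N) * P      # phytoplankton growth
    grazing = g * P / (kp + P) * Z      # zooplankton grazing
    return [-uptake + r * D,
            uptake - grazing - m_p * P,
            grazing - m_z * Z,
            m_p * P + m_z * Z - r * D]

state = [4.0, 0.5, 0.3, 0.2]            # total nitrogen budget = 5.0
dt = 0.01
for _ in range(50_000):                 # 500 time units, forward Euler
    k = npzd_rates(state)
    state = [s + dt * ds for s, ds in zip(state, k)]
total_n = sum(state)
```

Because every term appears once as a source and once as a sink, total nitrogen is conserved; verifying this is the first sanity check before comparing such an aggregated model against a more complex, multi-functional-group baseline.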
Prequential Analysis of Complex Data with Adaptive Model Reselection.
Clarke, Jennifer; Clarke, Bertrand
2009-11-01
In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade-off between model list bias and model list variability in cases where the data is very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e., the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias-variance trade-off in statistical modeling.
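The core mechanism, a convex combination of predictors whose weights are revised as data arrive, can be illustrated with a generic exponential-weighting update (this is a standard online-aggregation sketch, not the authors' exact ACAP re-selection rule, and all names are our own):

```python
import math

def adaptive_combination(preds_a, preds_b, ys, eta=1.0):
    """Convex combination of two prediction streams. After each
    observation, each forecaster's weight is scaled by
    exp(-eta * squared error) and the weights are renormalised, so the
    combination adapts to whichever stream predicts better."""
    wa = wb = 0.5
    combined = []
    for pa, pb, y in zip(preds_a, preds_b, ys):
        combined.append(wa * pa + wb * pb)    # predict before seeing y
        la, lb = (pa - y) ** 2, (pb - y) ** 2
        wa *= math.exp(-eta * la)
        wb *= math.exp(-eta * lb)
        s = wa + wb
        wa, wb = wa / s, wb / s
    return combined, (wa, wb)

# Forecaster A tracks the truth; forecaster B carries a constant bias.
ys = [float(i % 5) for i in range(50)]
preds_a = ys                        # perfect forecaster
preds_b = [y + 2.0 for y in ys]     # biased forecaster
combined, (wa, wb) = adaptive_combination(preds_a, preds_b, ys)
```

The weight mass migrates quickly to the better forecaster, so late predictions of the combination essentially follow stream A, which is the behaviour the sequential, prediction-based evaluation rewards.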
Innes, Carrie R H; Lee, Dominic; Chen, Chen; Ponder-Sutton, Agate M; Melzer, Tracy R; Jones, Richard D
2011-09-01
Prediction of complex behavioural tasks via relatively simple modelling techniques, such as logistic regression and discriminant analysis, often has limited success. We hypothesized that to more accurately model complex behaviour, more complex models, such as kernel-based methods, would be needed. To test this hypothesis, we assessed the value of six modelling approaches for predicting driving ability based on performance on computerized sensory-motor and cognitive tests (SMCTests™) in 501 people with brain disorders. The models included three models previously used to predict driving ability (discriminant analysis, DA; binary logistic regression, BLR; and nonlinear causal resource analysis, NCRA) and three kernel methods (support vector machine, SVM; product kernel density, PK; and kernel product density, KP). At the classification level, two kernel methods were substantially more accurate at classifying on-road pass or fail (SVM 99.6%, PK 99.8%) than the other models (DA 76%, BLR 78%, NCRA 74%, KP 81%). However, accuracy decreased substantially for all of the kernel models when cross-validation techniques were used to estimate prediction of on-road pass or fail in an independent referral group (SVM 73-76%, PK 72-73%, KP 71-72%) but decreased only slightly for DA (74-75%) and BLR (75-76%). Cross-validation of NCRA was not possible. In conclusion, while kernel-based models are successful at modelling complex data at a classification level, this is likely to be due to overfitting of the data, which does not lead to an improvement in accuracy in independent data over and above the accuracy of other less complex modelling techniques.
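The gap the abstract reports between near-perfect classification accuracy and much lower cross-validated accuracy is the signature of overfitting, and it is easy to reproduce with any memorizing classifier. Below is a self-contained toy: a 1-nearest-neighbour rule (standing in for the kernel methods; the SMCTests data are not used) on simulated overlapping classes.

```python
# Toy illustration of the abstract's point: a flexible, memorizing classifier
# scores perfectly on its own training data but drops on held-out data.
# All data are synthetic; nothing here comes from the driving study.
import random

random.seed(0)
# Two overlapping one-dimensional classes: the feature is only partly informative.
data = [(random.gauss(0, 1), 0) for _ in range(100)] + \
       [(random.gauss(1, 1), 1) for _ in range(100)]
random.shuffle(data)

def nn1_predict(train, x):
    # 1-nearest-neighbour: predict the label of the closest training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, test):
    return sum(nn1_predict(train, x) == y for x, y in test) / len(test)

train, test = data[:150], data[150:]
train_acc = accuracy(train, train)   # evaluated on its own training set
test_acc = accuracy(train, test)     # evaluated on held-out data
```

Each training point is its own nearest neighbour, so `train_acc` is exactly 1.0, while `test_acc` falls toward the irreducible class overlap, just as the kernel models' 99%+ classification accuracy fell to the low 70s under cross-validation.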
Rogalska, Ewa; Więcław-Czapla, Katarzyna
2013-01-01
Three antimicrobial peptides derived from bovine milk proteins were examined with regard to penetration into insoluble monolayers formed with 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) or 1,2-dipalmitoyl-sn-glycero-3-phospho-rac-(1-glycerol) sodium salt (DPPG). Effects on surface pressure (Π) and electric surface potential (ΔV) were measured, Π with a platinum Wilhelmy plate and ΔV with a vibrating plate. The penetration measurements were performed under stationary diffusion conditions and upon compression of the monolayers. The two types of measurement revealed markedly different effects of the peptide-lipid interactions. The stationary penetration results show that the peptide interactions with the DPPC monolayer are weak, repulsive, and nonspecific, while the interactions with the DPPG monolayer are significant, attractive, and specific. These results are in accord with the fact that antimicrobial peptides disrupt bacterial membranes (negatively charged) while having no significant effect on host membranes (neutral). No such discrimination was revealed by the compression isotherms. The latter indicate that squeezing the penetrant out of the monolayer upon compression does not allow the penetration equilibrium to be established, so the monolayer remains supersaturated with the penetrant and shows an under-equilibrium orientation over practically the entire compression range. PMID:24455264
Castelli, Francesco; Sarpietro, Maria Grazia; Rocco, Flavio; Ceruti, Maurizio; Cattel, Luigi
2007-09-01
The stability and bioavailability of anticancer agents such as gemcitabine can be increased by forming prodrugs. Gemcitabine is rapidly deaminated to the inactive metabolite 2′,2′-difluorodeoxyuridine; thus, to improve its stability, a series of increasingly lipophilic gemcitabine prodrugs linked through the 4-amino group to valeroyl, lauroyl, and stearoyl acyl chains was synthesized. Studies of monolayer properties are important to improve understanding of biological phenomena involving lipid/gemcitabine or lipid/gemcitabine-derivative interactions. The interfacial behavior of monolayers composed of DMPC plus gemcitabine or lipophilic gemcitabine prodrugs at increasing molar fractions was studied at the air/water interface at temperatures below (10 °C) and above (37 °C) the lipid phase transition. The effect of the hydrophobic chain length of the gemcitabine derivatives on the isotherm of pure DMPC was investigated by surface tension measurement, and the results are reported for each molar fraction as a function of mean molecular area per molecule. The results show that the compounds interact with DMPC, producing mixed monolayers that are subject to an expansion effect depending on the prodrug chain length. The results give useful hints about the interaction of these prodrugs with biological membranes and increase knowledge of the incorporation site of such compounds, as a function of their lipophilicity, in a lipid carrier; they may lead to improved liposomal formulation design.
Sawyer, D T; Roberts, J L; Calderwood, T S; Sugimoto, H; McDowell, M S
1985-12-17
In aprotic media the electrochemical reduction of dioxygen yields superoxide ion (O2-), which is an effective Brønsted base, nucleophile, one-electron reductant, and one-electron oxidant of reduced transition metal ions. With electrophilic substrates (organic halides and carbonyl carbons) O2- displaces a leaving group to form a peroxy radical (ROO.) in the primary process. Superoxide ion oxidizes the activated hydrogen atoms of ascorbic acid, catechols, hydrophenazines and hydroflavins. Combination of O2- with 1,2-diphenylhydrazine yields the anion radical of azobenzene, which reacts with O2 to give azobenzene and O2- (an example of O2--induced autoxidation). With phenylhydrazine, O2- produces phenyl radicals. The in situ formation of HO2. (O2- plus a proton source) results in H-atom abstraction from allylic and other groups with weak heteroatom-H bonds (binding energy (b.e.) less than 335 kJ). This is a competitive process with the facile second-order disproportionation of HO2. to H2O2 and O2 (kbi ≈ 10^4 mol^-1 s^-1 in Me2SO). Addition of [FeII(MeCN)4](ClO4)2 to solutions of hydrogen peroxide in dry acetonitrile catalyses a rapid disproportionation of H2O2 via the initial formation of an adduct [FeII(H2O2)2+ → Fe(O)(H2O)2+], which oxidizes a second H2O2 to oxygen. In the presence of organic substrates such as 1,4-cyclohexadiene, 1,2-diphenylhydrazine, catechols and thiols the FeII-H2O2/MeCN system yields dehydrogenated products; with alcohols, aldehydes, methylstyrene, thioethers, sulphoxides, and phosphines, the FeII(H2O2)2+ adduct promotes their monoxygenation. The product from the FeO2+-H2O2 reaction, [FeII(H2O2)22+], exhibits chemistry that is closely similar to that for singlet oxygen (1O2), which has been confirmed by the stoichiometric dioxygenation of diphenylisobenzofuran, 9,10-diphenylanthracene, rubrene and electron-rich unsaturated carbon-carbon bonds (Ph2C=CPh2, PhC≡CPh and cis-PhCH=CHPh).
In dry ligand-free acetonitrile (MeCN), anhydrous ferric chloride (FeIIICl3) activates hydrogen peroxide for the efficient epoxidation of alkenes. The FeIIICl3 further catalyses the dimerization of the resulting epoxides to dioxanes. These observations indicate that strong Lewis acids that are coordinatively unsaturated, [FeII(MeCN)4]2+ and [FeIIICl3], activate H2O2 to form an effective oxygenation and dehydrogenation agent.(ABSTRACT TRUNCATED AT 400 WORDS)
Effect of charged lidocaine on static and dynamic properties of model bio-membranes.
Yi, Zheng; Nagao, Michihiro; Bossev, Dobrin P
2012-01-01
The effect of charged lidocaine on the structure and dynamics of DMPC/DMPG (mass fraction 95/5) unilamellar vesicles has been investigated. Changes in membrane organization caused by the presence of lidocaine were detected through small-angle neutron scattering experiments. Our results suggest that the presence of lidocaine in the vicinity of the headgroups of lipid membranes leads to an increase in the area per lipid molecule and a decrease in membrane thickness. Such changes in membrane structure may induce disordering of the tail groups. This scenario explains the reduction of the main transition temperature of the lipid membranes as the fraction of lidocaine per lipid molecule increases, which was evident from differential scanning calorimetry results. Furthermore, neutron spin-echo spectroscopy was used for the dynamics measurements; the results reveal that the presence of charged lidocaine increases the bending elasticity of the lipid membranes in the fluid phase and slows the temperature-dependent change of bending elasticity across the main transition temperature.
Modeling a Ca2+ Channel/BKCa Channel Complex at the Single-Complex Level
Cox, Daniel H.
2014-01-01
BKCa-channel activity often affects the firing properties of neurons, the shapes of neuronal action potentials (APs), and in some cases the extent of neurotransmitter release. It has become clear that BKCa channels often form complexes with voltage-gated Ca2+ channels (CaV channels) such that when a CaV channel is activated, the ensuing influx of Ca2+ activates its closely associated BKCa channel. Thus, in modeling the electrical properties of neurons, it would be useful to have quantitative models of CaV/BKCa complexes. Furthermore, in a population of CaV/BKCa complexes, all BKCa channels are not exposed to the same Ca2+ concentration at the same time. Thus, stochastic rather than deterministic models are required. To date, however, no such models have been described. Here I present a stochastic model of a CaV2.1/BKCa(α-only) complex, as might be found in a central nerve terminal. The CaV2.1/BKCa model is based on kinetic modeling of its two component channels at physiological temperature. Surprisingly, the CaV2.1/BKCa model predicts that although the CaV channel will open nearly every time during a typical cortical AP, its associated BKCa channel is expected to open in only 30% of trials, and this percentage is very sensitive to the duration of the AP, the distance between the two channels in the complex, and the presence of fast internal Ca2+ buffers. Also, the model predicts that the kinetics of the BKCa currents of a population of CaV2.1/BKCa complexes will not be limited by the kinetics of the CaV2.1 channel, and during a train of APs, the current response of the complex is expected to faithfully follow even very rapid trains. Aside from providing insight into how these complexes are likely to behave in vivo, the models presented here could also be of use more generally as components of higher-level models of neural function. PMID:25517147
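The per-trial, stochastic character of channel opening described above can be sketched with a toy Monte Carlo model: a closed channel opens with some rate only while the stimulus (the Ca2+ transient) lasts, so the fraction of trials with an opening depends sharply on stimulus duration. The two-state scheme and all rates below are invented for illustration and are not the paper's CaV2.1/BKCa kinetic model.

```python
# Toy Monte Carlo: fraction of trials in which a closed channel opens at least
# once during a brief stimulus. k_open (ms^-1) applies only while the stimulus
# lasts; rates and durations are illustrative placeholders.
import random

def fraction_opened(k_open, duration_ms, trials=10000, dt=0.01, seed=1):
    random.seed(seed)
    opened = 0
    for _ in range(trials):
        t = 0.0
        while t < duration_ms:
            if random.random() < k_open * dt:   # opening event in this slice
                opened += 1
                break
            t += dt
    return opened / trials

# Longer stimuli give the channel more chances to open, echoing the model's
# sensitivity to AP duration.
short = fraction_opened(k_open=0.7, duration_ms=0.5)
long_ = fraction_opened(k_open=0.7, duration_ms=2.0)
```

Analytically the opening fraction approaches 1 - exp(-k_open * duration), so `short` sits near 0.30 and `long_` near 0.75 for these made-up parameters.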
Size and complexity in model financial systems.
Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M
2012-11-06
The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in "confidence" in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases.
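Of the three contagion channels in the abstract, the counterparty-credit-risk channel is the simplest to sketch: a bank fails when losses on its claims against failed counterparties exceed its capital, and failures can cascade. The ring topology and all balance-sheet numbers below are invented; the paper's model adds liquidity hoarding, asset-price contagion, and the confidence mechanism.

```python
# Minimal default-cascade sketch: n banks on a ring, each holding an interbank
# claim of size `exposure` against the next bank. A failed counterparty wipes
# out that claim; a bank fails when its loss reaches its capital buffer.
# All numbers are illustrative, not calibrated to the paper.

def cascade(n_banks, capital, exposure, initial_failures):
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for b in range(n_banks):
            if b in failed:
                continue
            loss = exposure if (b + 1) % n_banks in failed else 0.0
            if loss >= capital:        # capital buffer exhausted -> default
                failed.add(b)
                changed = True
    return failed

weak = cascade(10, capital=0.5, exposure=1.0, initial_failures=[0])   # thin buffers
strong = cascade(10, capital=2.0, exposure=1.0, initial_failures=[0]) # thick buffers
```

With thin capital the single seed failure propagates around the whole ring, while thicker buffers absorb the loss at the first neighbour, the intuition behind the paper's case for tougher capital requirements on systemically important banks.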
Generalized complex geometry, generalized branes and the Hitchin sigma model
NASA Astrophysics Data System (ADS)
Zucchini, Roberto
2005-03-01
Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well-known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non-trivially to a novel cohomology associated with the branes as generalized complex submanifolds.
Experimental porcine model of complex fistula-in-ano
A Ba-Bai-Ke-Re, Ma-Mu-Ti-Jiang; Chen, Hui; Liu, Xue; Wang, Yun-Hai
2017-01-01
AIM To establish and evaluate an experimental porcine model of fistula-in-ano. METHODS Twelve healthy pigs were randomly divided into two groups. Under general anesthesia, the experimental group underwent rubber-band ligation surgery, and the control group underwent an artificial damage technique. Clinical/magnetic resonance imaging (MRI) and histopathological evaluations were performed in both groups on days 38 and 48 after surgery, respectively. RESULTS There were no significant differences between the experimental group and the control group in general characteristics such as body weight, gender, and the number of fistulas (P > 0.05). In the experimental group, 15 fistulas were confirmed clinically, 13 complex fistulas were confirmed by MRI, and 11 complex fistulas were confirmed by histopathology. The success rate of complex fistula model establishment was 83.33%. Among the 18 fistulas in the control group, 5 fistulas were confirmed clinically, 4 complex fistulas were confirmed by MRI, and 3 fistulas were confirmed by histopathology. The success rate of fistula model establishment was 27.78%. Thus, the success rate of the rubber-band ligation group was significantly higher than that of the control group (P < 0.05). CONCLUSION Rubber-band ligation is a stable and reliable method for establishing complex fistula-in-ano models. Large-animal models of complex anal fistulas can be used in studies of the diagnosis and treatment of anal fistulas. PMID:28348488
Finite element analysis to model complex mitral valve repair.
Labrosse, Michel; Mesana, Thierry; Baxter, Ian; Chan, Vincent
2016-01-01
Although finite element analysis has been used to model simple mitral repair, it has not been used to model complex repair. A virtual mitral valve model was successful in simulating normal and abnormal valve function. Models were then developed to simulate an edge-to-edge repair and repair employing quadrangular resection. Stress contour plots demonstrated increased stresses along the mitral annulus, corresponding to the annuloplasty. The role of finite element analysis in guiding clinical practice remains undetermined.
Dong, Z B; Li, S P; Hong, M; Zhu, Q
2005-07-15
The screening and analysis of bioactive components in traditional Chinese medicines (TCMs) are important not only for the quality control of crude drugs but also for elucidating their therapeutic principles. In this study, a method for screening potential active components from TCMs was developed using biomembrane extraction and high-performance liquid chromatography. Based on this methodology, an aqueous extract of Angelica sinensis (WEAS) was used, and four compounds were detected by HPLC in the desorption eluate of red-cell-membrane extraction of WEAS. The compounds were identified as ferulic acid, ligustilide, senkyunolide H, and senkyunolide I based on their UV, MS, and NMR spectra. Indeed, ferulic acid and ligustilide are considered major active components of Angelica sinensis. Therefore, this method may be applied to predict the potential bioactivities of multiple compounds in TCMs simultaneously.
NASA Astrophysics Data System (ADS)
He, Bing; Yuan, Lan; Dai, Wenbing; Gao, Wei; Zhang, Hua; Wang, Xueqing; Fang, Weigang; Zhang, Qiang
2016-03-01
Nowadays, concern about the use of nanotechnology for biomedical applications is increasing at an unprecedented rate. In fact, nanosystems intended for various potential clinical uses must first cross the primary biological barrier formed by epithelial cells. However, little is currently known about the influence of the dynamic bio-adhesion of nanosystems on bio-membranes, or on endocytosis and transcytosis. This was investigated here using polymer nanoparticles (PNs) and MDCK epithelial cells as the models. Firstly, the adhesion of PNs on cell membranes was found to be time-dependent, with a shift of both location and dispersion pattern: from the lateral adhesion of mainly mono-dispersed PNs initially to the apical coverage of PN aggregates later. Then, interestingly, the dynamic bio-adhesion of PNs affected only their endocytosis, not their transcytosis. Importantly, the endocytosis of PNs was not a constant process: a GM1-dependent CDE (caveolae-dependent endocytosis) pathway was dominant in the preliminary stage, followed by the co-existence of a CME (clathrin-mediated endocytosis) pathway for the PN aggregates at a later stage, in accordance with the adhesion features of the PNs, suggesting that the PN adhesion patterns modify the endocytosis pathways. Next, PN adhesion was noticed to affect the structure of cell junctions, via altering extra- and intra-cellular calcium levels, leading to enhanced paracellular transport of small molecules, but not enough to obviously increase the passage of the PNs themselves. Finally, FRAP and other techniques all demonstrated an obvious impact of PN adhesion on membrane conformation, independent of adhesion location and time, which might lower the threshold for the internalization of PNs, even their aggregates. Generally, these findings confirm that the transport pathway mechanism of PNs through epithelial cells is rather
The Use of Behavior Models for Predicting Complex Operations
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2010-01-01
Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.
González-Henríquez, C M; Pizarro-Guerra, G C; Córdova-Alarcón, E N; Sarabia-Vallejos, M A; Terraza-Inostroza, C A
2016-03-01
Hydrogel films can retain water and deliver it to a phospholipid bilayer composed mainly of DPPC (1,2-dipalmitoyl-sn-glycero-3-phosphocholine); moisture of the medium favors the stability of an artificial biomembrane when it is subjected to repetitive heating cycles. This hypothesis holds when the hydrogel film used as a scaffold presents a flat surface morphology and releases water readily. On the other hand, when the sample presents a wrinkled topography (periodic undulations), the free lateral molecular movement of the bilayer is reduced, disfavoring the occurrence of clear phases/phase transitions with applied temperature. Hydrogel films were prepared using HEMA (hydroxyethyl methacrylate) with different crosslinking agents and initiators. The reaction mixture was spread over hydrophilic silicon wafers by spin coating. The resulting films were then exposed to UV light, favoring polymeric chain crosslinking and interactions between the hydrogel and the substrate; this process is also known to generate a tensile stress mismatch between different hydrogel strata, producing an out-of-plane net force that generates ordered undulations or collapsed crystals at the surface. DPPC bilayers were then deposited over the hydrogel using the Langmuir-Blodgett technique. Surface morphology was characterized to clarify the behavior of these films. The data corroborate DPPC membrane stability, making it possible to detect phases/phase transitions by ellipsometric methods and atomic force microscopy owing to the high hydration level. This system is intended to be used as a biosensor through the insertion of transmembrane proteins or peptides that detect minimal variations of some analyte in the environment; artificial biomembrane stability and behavior are fundamental for this purpose.
Modeling of Protein Binary Complexes Using Structural Mass Spectrometry Data
Amisha Kamal, J.; Chance, M.
2008-01-01
In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl-radical-mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization upon complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which comprises docking, energy-filtering, and clustering steps. Footprinting data are used to incorporate constraints (positive and/or negative) in the docking step and also to choose the type of energy filter (electrostatics or desolvation) in the subsequent energy-filtering step. Using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with the crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach, which has no crystal or NMR structure.
Geometric modeling of subcellular structures, organelles, and multiprotein complexes.
Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei
2012-12-01
Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multiprotein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian-triangle meshes are employed for the surface representation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing are also presented. Because the technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of the proposed algorithms. Finally, we select a set of six cryo-electron microscopy data sets representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes.
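The two basic measurements named above, surface area and surface-enclosed volume on a Lagrangian triangle mesh, have compact standard formulas: sum triangle areas for the former, and sum signed tetrahedra against the origin (the divergence theorem) for the latter. The sketch below is a generic implementation of those formulas, not the paper's code, verified on a unit right tetrahedron.

```python
# Surface area and enclosed volume of a closed, consistently oriented triangle
# mesh. Area: sum of triangle areas. Volume: divergence theorem, i.e. sum of
# signed tetrahedra (origin, p, q, r) over outward-oriented faces.

def mesh_area_volume(vertices, triangles):
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    area = volume = 0.0
    for i, j, k in triangles:               # outward-oriented faces assumed
        p, q, r = vertices[i], vertices[j], vertices[k]
        c = cross(sub(q, p), sub(r, p))     # |c| = 2 * triangle area
        area += 0.5 * dot(c, c) ** 0.5
        volume += dot(p, c) / 6.0           # signed tetrahedron volume
    return area, volume

# Unit right tetrahedron with vertices at the origin and the three unit axes;
# faces wound so that all normals point outward.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
area, vol = mesh_area_volume(verts, faces)
```

For this tetrahedron the exact values are area = 3/2 + sqrt(3)/2 and volume = 1/6, which is a convenient analytical check of the kind the paper uses to validate convergence.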
Network model of bilateral power markets based on complex networks
NASA Astrophysics Data System (ADS)
Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li
2014-06-01
The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a proper model that captures its characteristics is urgently needed. However, such a model has been lacking because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price-advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is relatively long. Moreover, the topological stability of the BPT network is tested. The results show that the network displays topological robustness to random failures of market members while it is fragile against deliberate attacks, and that the network can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
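The two statistics reported for the BPT network, the clustering coefficient and the average path length, are standard graph measures and can be computed for any undirected graph with a few lines of stdlib Python. The graph below is a made-up toy, not the BPT network; the functions themselves implement the usual definitions (average local clustering, mean BFS distance over connected pairs).

```python
# Average clustering coefficient and average shortest-path length for an
# undirected graph given as a dict of neighbour sets. Toy example graph;
# the BPT network itself is not reproduced here.
from collections import deque

def clustering(adj):
    cs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        # count edges among v's neighbours (each unordered pair once)
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

def avg_path_length(adj):
    total = pairs = 0
    for s in adj:                      # BFS from every node
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(d for n, d in dist.items() if n != s)
        pairs += len(dist) - 1
    return total / pairs

# Toy graph: a triangle (0-1-2) with a pendant node 3 attached to 2.
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cc = clustering(g)          # (1 + 1 + 1/3 + 0) / 4 = 7/12
apl = avg_path_length(g)    # 16 directed distances / 12 ordered pairs = 4/3
```

A low clustering coefficient combined with a long average path length, as reported for the BPT network, indicates a sparse structure without the shortcut-rich "small-world" pattern seen in many social networks.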
Using fMRI to Test Models of Complex Cognition
ERIC Educational Resources Information Center
Anderson, John R.; Carter, Cameron S.; Fincham, Jon M.; Qin, Yulin; Ravizza, Susan M.; Rosenberg-Lee, Miriam
2008-01-01
This article investigates the potential of fMRI to test assumptions about different components in models of complex cognitive tasks. If the components of a model can be associated with specific brain regions, one can make predictions for the temporal course of the BOLD response in these regions. An event-locked procedure is described for dealing…
Tips on Creating Complex Geometry Using Solid Modeling Software
ERIC Educational Resources Information Center
Gow, George
2008-01-01
Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…
Between complexity of modelling and modelling of complexity: An essay on econophysics
NASA Astrophysics Data System (ADS)
Schinckus, C.
2013-09-01
Econophysics is an emerging field dealing with complex systems and emergent properties. A deeper analysis of themes studied by econophysicists shows that research conducted in this field can be decomposed into two different computational approaches: “statistical econophysics” and “agent-based econophysics”. This methodological scission complicates the definition of the complexity used in econophysics. Therefore, this article aims to clarify what kind of emergences and complexities we can find in econophysics in order to better understand, on one hand, the current scientific modes of reasoning this new field provides; and on the other hand, the future methodological evolution of the field.
Zebrafish as an emerging model for studying complex brain disorders
Kalueff, Allan V.; Stewart, Adam Michael; Gerlai, Robert
2014-01-01
The zebrafish (Danio rerio) is rapidly becoming a popular model organism in pharmacogenetics and neuropharmacology. Both larval and adult zebrafish are currently used to increase our understanding of brain function, dysfunction, and their genetic and pharmacological modulation. Here we review the developing utility of zebrafish in the analysis of complex brain disorders (including, for example, depression, autism, psychoses, drug abuse and cognitive disorders), also covering zebrafish applications towards the goal of modeling major human neuropsychiatric and drug-induced syndromes. We argue that zebrafish models of complex brain disorders and drug-induced conditions have become a rapidly emerging critical field in translational neuropharmacology research. PMID:24412421
Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.
Taha, Mohamed; Khan, Imran; Coutinho, João A P
2016-04-01
With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against the oxidative damage implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid, as the primary ligand, and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. Their overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted by the conductor-like screening model for real solvents (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). UV-visible and photoluminescence spectroscopic measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solutions.
Multiscale Model for the Assembly Kinetics of Protein Complexes.
Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao
2016-02-04
The assembly of proteins into high-order complexes is a general mechanism for these biomolecules to implement their versatile functions in cells. Natural evolution has developed various assembling pathways for specific protein complexes to maintain their stability and proper activities. Previous studies have provided numerous examples of the misassembly of protein complexes leading to severe biological consequences. Although the research focusing on protein complexes has started to move beyond the static representation of quaternary structures to the dynamic aspect of their assembly, the current understanding of the assembly mechanism of protein complexes is still largely limited. To tackle this problem, we developed a new multiscale modeling framework. This framework combines a lower-resolution rigid-body-based simulation with a higher-resolution Cα-based simulation method so that protein complexes can be assembled with both structural details and computational efficiency. We applied this model to a homotrimer and a heterotetramer as simple test systems. Consistent with experimental observations, our simulations indicated very different kinetics between protein oligomerization and dimerization. The formation of protein oligomers is a multistep process that is much slower than dimerization but thermodynamically more stable. Moreover, we showed that even the same protein quaternary structure can have very diverse assembly pathways under different binding constants between subunits, which is important for regulating the functions of protein complexes. Finally, we revealed that the binding between subunits in a complex can be synergistically strengthened during assembly without considering allosteric regulation or conformational changes. Therefore, our model provides a useful tool to understand the general principles of protein complex assembly.
Pedigree models for complex human traits involving the mitochondrial genome
Schork, N.J.; Guo, S.W.
1993-12-01
Recent biochemical and molecular-genetic discoveries concerning variations in human mtDNA have suggested a role for mtDNA mutations in a number of human traits and disorders. Although the importance of these discoveries cannot be emphasized enough, the complex natures of mitochondrial biogenesis, mutant mtDNA phenotype expression, and the maternal inheritance pattern exhibited by mtDNA transmission make it difficult to develop models that can be used routinely in pedigree analyses to quantify and test hypotheses about the role of mtDNA in the expression of a trait. In the present paper, the authors describe complexities inherent in mitochondrial biogenesis and genetic transmission and show how these complexities can be incorporated into appropriate mathematical models. The authors offer a variety of likelihood-based models which account for the complexities discussed. The derivation of the models is meant to stimulate the construction of statistical tests for putative mtDNA contribution to a trait. Results of simulation studies which make use of the proposed models are described. The results of the simulation studies suggest that, although pedigree models of mtDNA effects can be reliable, success in mapping chromosomal determinants of a trait does not preclude the possibility that mtDNA determinants exist for the trait as well. Shortcomings inherent in the proposed models are described in an effort to expose areas in need of additional research. 58 refs., 5 figs., 2 tabs.
Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling
NASA Technical Reports Server (NTRS)
Mog, Robert A.
1997-01-01
Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.
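The Kendall test for positive dependence used for model verification above is straightforward to run; here is a sketch with scipy's `kendalltau` on paired scores that are invented purely for illustration.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical paired metrics: organizational complexity score vs. an
# observed product-quality indicator (both columns are made-up data).
complexity = np.array([3, 5, 2, 8, 7, 4, 6, 1])
quality    = np.array([2, 6, 1, 9, 6, 3, 7, 2])

tau, p_value = kendalltau(complexity, quality)
print(tau, p_value)   # tau > 0 indicates positive dependence
```

A significantly positive tau is the kind of evidence the abstract describes as initial verification that the model's complexity metric tracks the quality outcome.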
Emulator-assisted data assimilation in complex models
NASA Astrophysics Data System (ADS)
Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas
2016-09-01
Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for the data assimilation into ocean models is still not well understood. High complexity of ocean models translates into high uncertainty of the corresponding emulators which may undermine the quality of the assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and suggest a strategy to alleviate this problem through the localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
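The chaotic Lorenz-95 (often written Lorenz-96) system used in these emulator experiments is compact enough to sketch directly; the 40-variable configuration with forcing F = 8 is the conventional choice and is assumed here.

```python
import numpy as np

def lorenz96_rhs(x, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def step_rk4(x, dt=0.05, F=8.0):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz96_rhs(x, F)
    k2 = lorenz96_rhs(x + 0.5 * dt * k1, F)
    k3 = lorenz96_rhs(x + 0.5 * dt * k2, F)
    k4 = lorenz96_rhs(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.full(40, 8.0)
x[0] += 0.01          # small perturbation to trigger chaotic divergence
for _ in range(500):
    x = step_rk4(x)
print(x.mean())
```

Its low cost and well-understood chaos are exactly why it serves as a testbed for emulator-assisted assimilation before moving to the 3D sediment model.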
Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach
Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...
Modeling of Complex Adaptive Systems in Air Operations
2006-09-01
...control of C3 in an increasingly complex military environment. Control theory is a multidisciplinary science associated with dynamic systems... (AFRL-IF-RS-TR-2006-282, In-House Final Technical Report, September 2006)
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model
On explicit algebraic stress models for complex turbulent flows
NASA Technical Reports Server (NTRS)
Gatski, T. B.; Speziale, C. G.
1992-01-01
Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models -- as well as anisotropic eddy viscosity models -- is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.
Complex groundwater flow systems as traveling agent models.
López Corona, Oliver; Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis
2014-01-01
Analyzing field data from pumping tests, we show that as with many other natural phenomena, groundwater flow exhibits complex dynamics described by 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Even more, we show that this implies that non-Kolmogorovian probability is needed for its study, and provide a set of new partial differential equations for groundwater flow.
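The 1/f power-spectrum diagnostic reported here can be illustrated on a synthetic series; the Fourier-shaped noise below is a stand-in assumption for the pumping-test records, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096

# Build a series with a ~1/f power spectrum by shaping white phases in
# Fourier space: |X(f)| ~ f^(-1/2)  =>  power ~ 1/f.
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = 1.0 / np.sqrt(freqs[1:])
phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
signal = np.fft.irfft(amp * np.exp(1j * phases), n)

# Periodogram and log-log slope estimate of the spectral exponent.
power = np.abs(np.fft.rfft(signal))**2
mask = freqs > 0
slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
print(slope)
```

A fitted slope near -1 is the signature of the 1/f dynamics the abstract reports for groundwater flow; applied to field records, the same fit estimates the observed exponent.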
A Complex Systems Model Approach to Quantified Mineral Resource Appraisal
Gettings, M.E.; Bultman, M.W.; Fisher, F.S.
2004-01-01
For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
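The first step described above, evaluating a (fuzzy) cognitive map with (+1, 0, -1) links under various scenarios, can be sketched as a simple iteration; the 4-node map below is hypothetical, not one of the paper's appraisal models.

```python
import numpy as np

# links[i, j] = influence of node j on node i:
# +1 promotes, -1 inhibits, 0 no link (hypothetical map).
links = np.array([
    [ 0,  1,  0, -1],
    [ 1,  0,  1,  0],
    [ 0,  1,  0,  1],
    [-1,  0,  1,  0],
])

def run_scenario(initial, steps=20):
    """Iterate the map from a scenario's initial activation pattern."""
    state = np.asarray(initial, dtype=float)
    for _ in range(steps):
        state = np.tanh(links @ state)   # squash to keep activations bounded
    return state

baseline = run_scenario([1, 0, 0, 0])
print(baseline)
```

Comparing the settled activations across scenarios gives the ranking of link importance the abstract describes; the later steps (membership functions, process equations) refine each link beyond the crude (+1, 0, -1) weights.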
Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab; Armstrong, Robert C.; Vanderveen, Keith
2008-09-01
The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
Multikernel linear mixed models for complex phenotype prediction
Weissbrod, Omer; Geiger, Dan; Rosset, Saharon
2016-01-01
Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636
A Compact Model for the Complex Plant Circadian Clock
De Caluwé, Joëlle; Xiao, Qiying; Hermans, Christian; Verbruggen, Nathalie; Leloup, Jean-Christophe; Gonze, Didier
2016-01-01
The circadian clock is an endogenous timekeeper that allows organisms to anticipate and adapt to the daily variations of their environment. The plant clock is an intricate network of interlocked feedback loops, in which transcription factors regulate each other to generate oscillations with expression peaks at specific times of the day. Over the last decade, mathematical modeling approaches have been used to understand the inner workings of the clock in the model plant Arabidopsis thaliana. Those efforts have produced a number of models of ever increasing complexity. Here, we present an alternative model that combines a low number of equations and parameters, similar to the very earliest models, with the complex network structure found in more recent ones. This simple model describes the temporal evolution of the abundance of eight clock gene mRNA/protein and captures key features of the clock on a qualitative level, namely the entrained and free-running behaviors of the wild type clock, as well as the defects found in knockout mutants (such as altered free-running periods, lack of entrainment, or changes in the expression of other clock genes). Additionally, our model produces complex responses to various light cues, such as extreme photoperiods and non-24 h environmental cycles, and can describe the control of hypocotyl growth by the clock. Our model constitutes a useful tool to probe dynamical properties of the core clock as well as clock-dependent processes. PMID:26904049
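This is not the authors' eight-gene model, but the free-running oscillations it describes can be sketched with a minimal Goodwin-type negative-feedback loop; all rate constants and the Hill coefficient are illustrative assumptions.

```python
import numpy as np

def rhs(y, n=12.0):
    """Minimal mRNA -> protein -> repressor loop with Hill repression."""
    m, p, r = y
    dm = 1.0 / (1.0 + r**n) - 0.4 * m   # repressed transcription, degradation
    dp = m - 0.4 * p                    # translation, degradation
    dr = p - 0.4 * r                    # nuclear accumulation, degradation
    return np.array([dm, dp, dr])

y = np.array([0.1, 0.1, 0.1])
dt = 0.01
trace = []
for _ in range(60000):
    y = y + dt * rhs(y)                 # forward Euler is fine for a sketch
    trace.append(y[0])

tail = np.array(trace[30000:])          # discard the transient
print(tail.min(), tail.max())           # spread indicates sustained cycling
```

With a steep enough Hill coefficient the fixed point loses stability and the loop settles onto a limit cycle, the analogue of the wild-type free-running rhythm; the paper's model adds the interlocked-loop structure needed for entrainment and mutant phenotypes.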
Further thoughts on simplicity and complexity in population projection models.
Smith, S K
1997-12-01
"This article is a review of--and response to--a special issue of Mathematical Population Studies that focused on the relative performance of simpler vs. more complex population projection models. I do not attempt to summarize or comment on each of the articles in the special issue, but rather present an additional perspective on several points: definitions of simplicity and complexity, empirical evidence regarding population forecast accuracy, the costs and benefits of disaggregation, the potential benefits of combining forecasts, criteria for evaluating projection models, and issues of economic efficiency in the production of population projections."
Vacuum structure of the Higgs complex singlet-doublet model
NASA Astrophysics Data System (ADS)
Ferreira, P. M.
2016-11-01
The complex singlet-doublet model is a popular theory to account for dark matter and electroweak baryogenesis, wherein the Standard Model particle content is supplemented by a complex scalar gauge singlet, with certain discrete symmetries imposed. The scalar potential which results thereof can have seven different types of minima at tree level, which may coexist for specific choices of parameters. There is therefore the possibility that a given minimum is not global but rather a local one, and may tunnel to a deeper extremum, thus causing vacuum instability. This rich vacuum structure is explained and discussed in detail.
Predictive modelling of complex agronomic and biological systems.
Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J
2013-09-01
Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead.
ERIC Educational Resources Information Center
Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka
2016-01-01
An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…
Computer models of complex multiloop branched pipeline systems
NASA Astrophysics Data System (ADS)
Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.
2013-11-01
This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks, and this method is based on the theory of graphs and two Kirchhoff's laws applied to electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of pipeline networks, when the latter are considered as single hydraulic systems. On the basis of multivariant calculations the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and construction of their new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on an example of designing a unified computer model of the heat network for centralized heat supply of the city of Samara.
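The graph-plus-Kirchhoff approach described here can be sketched in its linearized (electrical-analog) form, where flow = conductance × pressure drop; the 4-node network and all conductances are illustrative assumptions, and real pipe hydraulics are nonlinear.

```python
import numpy as np

# Edges as (from, to, conductance); node 0 is the supply at fixed pressure.
edges = [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 1.5), (2, 3, 0.5)]
n = 4

# Weighted graph Laplacian = nodal conductance matrix.
L = np.zeros((n, n))
for i, j, g in edges:
    L[i, i] += g; L[j, j] += g
    L[i, j] -= g; L[j, i] -= g

# Kirchhoff's first law at the free nodes: L @ p = q (net nodal inflow),
# with node 0 held at pressure 100 and a demand of 1 unit drawn at node 3.
p = np.zeros(n)
p[0] = 100.0
q = np.zeros(n); q[3] = -1.0
free = [1, 2, 3]
A = L[np.ix_(free, free)]
b = q[free] - L[np.ix_(free, [0])].flatten() * p[0]
p[free] = np.linalg.solve(A, b)

# Flow in each pipe from its pressure drop (second-law analogue).
flows = [(i, j, g * (p[i] - p[j])) for i, j, g in edges]
print(p, flows)
```

By conservation, the flow leaving the supply node equals the total demand; the same nodal formulation scales to the multiloop branched networks the paper models.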
Modeling the propagation of mobile malware on complex networks
NASA Astrophysics Data System (ADS)
Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue
2016-08-01
In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks to model mobile networks, which follows the power-law degree distribution, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that guarantees the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below the unity, and the global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity conduce to the diffusion of malware, and complex networks with lower power-law exponents benefit malware spreading.
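The conclusion that heterogeneity aids malware diffusion follows from the standard degree-based spreading threshold, which scales as <k>/<k^2>. Here is a sketch comparing two synthetic degree sequences; both sequences are assumptions for illustration, not the paper's networks.

```python
import numpy as np

def threshold(degrees):
    """Degree-based epidemic threshold estimate: <k> / <k^2>."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k**2).mean()

rng = np.random.default_rng(0)
# Homogeneous network: narrow degree distribution around k ~ 7.
homogeneous = rng.poisson(6, 10000) + 1
# Heterogeneous network: heavy-tailed (Pareto-like) degrees, capped.
heterogeneous = np.minimum(rng.pareto(2.0, 10000) * 3 + 1, 1000)

print(threshold(homogeneous), threshold(heterogeneous))
```

The heavy tail inflates <k^2>, driving the threshold down, which is why lower power-law exponents (fatter tails) benefit malware spreading in the paper's analysis.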
Surface complexation modeling of organic acid sorption to goethite
Evanko, C.R.; Dzombak, D.A.
1999-06-15
Surface complexation modeling was performed using the Generalized Two-Layer Model for a series of low molecular weight organic acids. Sorption of these organic acids to goethite was investigated in a previous study to assess the influence of particular structural features on sorption. Here, the ability to describe the observed sorption behavior for compounds with similar structural features using surface complexation modeling was investigated. A set of surface reactions and equilibrium constants yielding optimal data fits was obtained for each organic acid over a range of total sorbate concentrations. Surface complexation modeling successfully described sorption of a number of the simple organic acids, but an additional hydrophobic component was needed to describe sorption behavior of some compounds with significant hydrophobic character. These compounds exhibited sorption behavior that was inconsistent with ligand exchange mechanisms since sorption did not decrease with increasing total sorbate concentration and/or exceeded surface site saturation. Hydrophobic interactions appeared to be most significant for the compound containing a 5-carbon aliphatic chain. Comparison of optimized equilibrium constants for similar surface species showed that model results were consistent with observed sorption behavior: equilibrium constants were highest for compounds having adjacent carboxylic groups, lower for compounds with adjacent phenolic groups, and lowest for compounds with phenolic groups in the ortho position relative to a carboxylic group. Surface complexation modeling was also performed to fit sorption data for Suwannee River fulvic acid. The data could be described well using reactions and constants similar to those for pyromellitic acid. This four-carboxyl group compound may be useful as a model for fulvic acid with respect to sorption. Other simple organic acids having multiple carboxylic and phenolic functional groups were identified as potential models for humic
Petri net model for analysis of concurrently processed complex algorithms
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1986-01-01
This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
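A token-marking sketch of the idea: transitions fire when all of their input places hold tokens, which is exactly the data-driven execution of decomposed operations the abstract targets. The two-transition net below is a hypothetical example, not the paper's model.

```python
# Each transition maps to (input places, output places).
transitions = {
    "load":    (["input_ready"], ["data_loaded"]),
    "process": (["data_loaded"], ["result_ready"]),
}
marking = {"input_ready": 1, "data_loaded": 0, "result_ready": 0}

def enabled(t):
    """A transition is enabled when every input place holds a token."""
    ins, _ = transitions[t]
    return all(marking[p] > 0 for p in ins)

def fire(t):
    """Consume one token from each input place, produce one on each output."""
    ins, outs = transitions[t]
    for p in ins:
        marking[p] -= 1
    for p in outs:
        marking[p] += 1

# Data-driven run: keep firing any enabled transition until none remain.
fired = []
while True:
    ready = [t for t in transitions if enabled(t)]
    if not ready:
        break
    fire(ready[0])
    fired.append(ready[0])
print(fired, marking)
```

In a multiprocessor setting, all simultaneously enabled transitions could fire concurrently; the same marking rules then describe both the algorithm's data/control flow and the data-driven architecture.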
Surface complexation modeling of americium sorption onto volcanic tuff.
Ding, M; Kelkar, S; Meijer, A
2014-10-01
Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. In applying the model to volcanic rocks from Yucca Mountain, it was assumed that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/kg) values were calculated as a function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH in the pH range from 7 to 9. The surface complexation constants derived in this study allow prediction of sorption of americium in a natural complex system, taking into account the inherent uncertainty associated with geochemical conditions that occur along transport pathways.
Performance of Random Effects Model Estimators under Complex Sampling Designs
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
Fitting Meta-Analytic Structural Equation Models with Complex Datasets
ERIC Educational Resources Information Center
Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.
2016-01-01
A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…
The Complexity of Developmental Predictions from Dual Process Models
ERIC Educational Resources Information Center
Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.
2011-01-01
Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…
Fischer and Schrock Carbene Complexes: A Molecular Modeling Exercise
ERIC Educational Resources Information Center
Montgomery, Craig D.
2015-01-01
An exercise in molecular modeling that demonstrates the distinctive features of Fischer and Schrock carbene complexes is presented. Semi-empirical calculations (PM3) demonstrate the singlet ground electronic state, restricted rotation about the C-Y bond, the positive charge on the carbon atom, and hence, the electrophilic nature of the Fischer…
Conceptual Complexity, Teaching Style and Models of Teaching.
ERIC Educational Resources Information Center
Joyce, Bruce; Weil, Marsha
The focus of this paper is on the relative roles of personality and training in enabling teachers to carry out the kinds of complex learning models which are envisioned by curriculum reformers in the social sciences. The paper surveys some of the major research done in this area and concludes that: 1) Most teachers do not manifest the complex…
A random interacting network model for complex networks
Goswami, Bedartha; Shekatkar, Snehal M.; Rheinwalt, Aljoscha; Ambika, G.; Kurths, Jürgen
2015-01-01
We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function and node linkage is based on a linkage probability defined on the linkage scores of nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms, degree-based preferential node selection and degree-assortative link placement, are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility to test multiple hypotheses regarding the mechanisms underlying network interactions. It can also incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can be potentially adapted to various real-world complex systems. PMID:26657032
A perspective on modeling and simulation of complex dynamical systems
NASA Astrophysics Data System (ADS)
Åström, K. J.
2011-09-01
There has been an amazing development of modeling and simulation from its beginning in the 1920s, when the technology was available only at a handful of university groups who had access to a mechanical differential analyzer. Today, tools for modeling and simulation are available for every student and engineer. This paper gives a perspective on the development with particular emphasis on technology and paradigm shifts. Modeling is increasingly important for design and operation of complex natural and man-made systems. Because of the increased use of model based control such as Kalman filters and model predictive control, models are also appearing as components of feedback systems. Modeling and simulation are multidisciplinary: they are used in a wide variety of fields, and their development has been strongly influenced by mathematics, numerics, computer science and computer technology.
Bayesian Case-deletion Model Complexity and Information Criterion
Zhu, Hongtu; Ibrahim, Joseph G.; Chen, Qingxia
2015-01-01
We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example. PMID:26180578
Complexity vs. simplicity: groundwater model ranking using information criteria.
Engelhardt, I; De Aguinaga, J G; Mikat, H; Schüth, C; Liedl, R
2014-01-01
A groundwater model characterized by a lack of field data about hydraulic model parameters and boundary conditions, combined with many observation data sets for calibration purposes, was investigated with respect to model uncertainty. Seven different conceptual models with a stepwise increase from 0 to 30 adjustable parameters were calibrated using PEST. Residuals, sensitivities, the Akaike information criterion (AIC and AICc), Bayesian information criterion (BIC), and Kashyap's information criterion (KIC) were calculated for a set of seven inverse calibrated models with increasing complexity. Finally, the likelihood of each model was computed. Comparing only the residuals of the different conceptual models leads to overparameterization and a loss of confidence in the conceptual model approach. The model employing only uncalibrated hydraulic parameters, estimated from sedimentological information, obtained the worst AIC, BIC, and KIC values. Using only sedimentological data to derive hydraulic parameters introduces a systematic error into the simulation results and cannot be recommended for generating a valuable model. For numerical investigations with large amounts of calibration data, the BIC and KIC select a simpler model as optimal than the AIC does. The model with 15 adjusted parameters was evaluated by AIC as the best option and obtained a likelihood of 98%. The AIC disregards potential model structure error, and the KIC selection is therefore more appropriate. Sensitivities to piezometric heads were highest for the model with only five adjustable parameters, and sensitivity coefficients were directly influenced by the changes in extracted groundwater volumes.
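The information-criterion comparison described above can be sketched for the least-squares case: assuming independent Gaussian errors, the log-likelihood reduces to a function of the residual sum of squares, and AIC, AICc, and BIC differ only in how they penalize parameter count. The sample sizes and residuals below are hypothetical, not the study's values:

```python
import math

def information_criteria(ss_res, n, k):
    """AIC, AICc, and BIC for a least-squares fit with Gaussian errors.

    ss_res: sum of squared residuals; n: number of observations;
    k: number of adjustable parameters.
    """
    p = k + 1  # count the estimated error variance as a parameter
    log_lik = -0.5 * n * (math.log(2 * math.pi * ss_res / n) + 1)
    aic = 2 * p - 2 * log_lik
    aicc = aic + 2 * p * (p + 1) / (n - p - 1)  # small-sample correction
    bic = p * math.log(n) - 2 * log_lik
    return aic, aicc, bic

# A simpler model (k=5) can beat a better-fitting complex one (k=30)
# once the parameter penalty is included.
simple = information_criteria(ss_res=12.0, n=100, k=5)
complex_ = information_criteria(ss_res=10.0, n=100, k=30)
```

Because BIC's penalty grows with log(n), it favors simpler models than AIC for large calibration data sets, consistent with the behavior reported in the abstract.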
On Using Meta-Modeling and Multi-Modeling to Address Complex Problems
ERIC Educational Resources Information Center
Abu Jbara, Ahmed
2013-01-01
Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
Evaluation of a puff dispersion model in complex terrain
Thuillier, R.H.
1992-03-01
California's Pacific Gas and Electric Company has many power plant operations situated in complex terrain, prominent examples being the Geysers geothermal plant in Lake and Sonoma Counties, and the Diablo Canyon nuclear plant in San Luis Obispo County. Procedures ranging from plant licensing to emergency response require a dispersion modeling capability in a complex terrain environment. This paper describes the performance evaluation of such a capability, the Pacific Gas and Electric Company Modeling System (PGEMS), a fast response Gaussian puff model with a three-dimensional wind field generator. Performance of the model was evaluated for ground level and short stack elevated release on the basis of a special intensive tracer experiment in the complex coastal terrain surrounding the Diablo Canyon Nuclear Power Plant in San Luis Obispo County, California. The model performed well under a variety of meteorological and release conditions within the test region of 20-kilometer radius surrounding the nuclear plant, and turned in a superior performance in the wake of the nuclear plant, using a new wake correction algorithm for ground level and roof-vent releases at that location.
Boolean modeling of collective effects in complex networks
Norrell, Johannes; Socolar, Joshua E. S.
2009-01-01
Complex systems are often modeled as Boolean networks in attempts to capture their logical structure and reveal its dynamical consequences. Approximating the dynamics of continuous variables by discrete values and Boolean logic gates may, however, introduce dynamical possibilities that are not accessible to the original system. We show that large random networks of variables coupled through continuous transfer functions often fail to exhibit the complex dynamics of corresponding Boolean models in the disordered (chaotic) regime, even when each individual function appears to be a good candidate for Boolean idealization. A suitably modified Boolean theory explains the behavior of systems in which information does not propagate faithfully down certain chains of nodes. Model networks incorporating calculated or directly measured transfer functions reported in the literature on transcriptional regulation of genes are described by the modified theory. PMID:19658525
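A minimal random Boolean network in the spirit of the models discussed above can be sketched as a Kauffman-style network with random truth tables; the network size, connectivity, and seeds below are arbitrary illustrative choices, not values from the paper:

```python
import random

def random_boolean_network(n, k=2, seed=1):
    """Random Boolean network: each node reads k randomly chosen inputs
    through a random Boolean truth table, then all nodes update in sync."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n)
        )

    return step

# Iterate the deterministic dynamics until a state repeats; the attractor
# cycle length is a crude proxy for ordered (short) vs chaotic (long) regimes.
step = random_boolean_network(12, k=2)
state = tuple(random.Random(7).randint(0, 1) for _ in range(12))
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1
cycle_len = t - seen[state]
```

Replacing the truth-table lookup with a continuous transfer function and comparing the resulting trajectories is one way to probe the fidelity question the abstract raises.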
Activity-Dependent Neuronal Model on Complex Networks
de Arcangelis, Lucilla; Herrmann, Hans J.
2012-01-01
Neuronal avalanches are a novel mode of activity in neuronal networks, experimentally found in vitro and in vivo, and exhibit a robust critical behavior: these avalanches are characterized by a power law distribution for the size and duration, features found in other problems in the context of the physics of complex systems. We present a recent model inspired in self-organized criticality, which consists of an electrical network with threshold firing, refractory period, and activity-dependent synaptic plasticity. The model reproduces the critical behavior of the distribution of avalanche sizes and durations measured experimentally. Moreover, the power spectra of the electrical signal reproduce very robustly the power law behavior found in human electroencephalogram (EEG) spectra. We implement this model on a variety of complex networks, i.e., regular, small-world, and scale-free and verify the robustness of the critical behavior. PMID:22470347
Deciphering the complexity of acute inflammation using mathematical models.
Vodovotz, Yoram
2006-01-01
Various stresses elicit an acute, complex inflammatory response, leading to healing but sometimes also to organ dysfunction and death. We constructed both equation-based models (EBM) and agent-based models (ABM) of various degrees of granularity--which encompass the dynamics of relevant cells, cytokines, and the resulting global tissue dysfunction--in order to begin to unravel these inflammatory interactions. The EBMs describe and predict various features of septic shock and trauma/hemorrhage (including the response to anthrax, preconditioning phenomena, and irreversible hemorrhage) and were used to simulate anti-inflammatory strategies in clinical trials. The ABMs that describe the interrelationship between inflammation and wound healing yielded insights into intestinal healing in necrotizing enterocolitis, vocal fold healing during phonotrauma, and skin healing in the setting of diabetic foot ulcers. Modeling may help in understanding the complex interactions among the components of inflammation and response to stress, and therefore aid in the development of novel therapies and diagnostics.
Entropy, complexity, and Markov diagrams for random walk cancer models.
Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-19
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
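The steady-state-entropy computation the abstract describes can be sketched as follows; the 3-site transition matrix is a toy stand-in for illustration, not the paper's autopsy-derived data:

```python
import numpy as np

def steady_state(P, iters=10_000):
    """Steady-state distribution of a row-stochastic transition matrix P,
    found by power iteration from the uniform distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

def entropy(pi):
    """Shannon entropy (in nats) of a probability distribution."""
    pi = pi[pi > 0]
    return float(-np.sum(pi * np.log(pi)))

# Toy 3-site Markov chain: rows are "from" sites, columns "to" sites.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
pi = steady_state(P)
H = entropy(pi)  # higher H: spread distributed over more sites
```

In the paper's framing, pi would match the observed metastatic site distribution, and H would be the entropy value used to rank cancers from high (skin, breast) to low (prostate).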
Quantum scattering model of energy transfer in photosynthetic complexes
NASA Astrophysics Data System (ADS)
Ai, Bao-quan; Zhu, Shi-Liang
2015-12-01
We develop a quantum scattering model to describe the exciton transport through the Fenna-Matthews-Olson (FMO) complex. It is found that the exciton transport involving the optimal quantum coherence is more efficient than that involving classical behaviour alone. Furthermore, we also find that the quantum resonance condition is more easily fulfilled in multiple pathways than in a single pathway. We then demonstrate that the optimal distribution of the pigments, the multitude of energy delivery pathways, and the quantum effects combine to contribute to the perfect energy transport in the FMO complex.
Complex reaction noise in a molecular quasispecies model
NASA Astrophysics Data System (ADS)
Hochberg, David; Zorzano, María-Paz; Morán, Federico
2006-05-01
We have derived exact Langevin equations for a model of quasispecies dynamics. The inherent multiplicative reaction noise is complex and its statistical properties are specified completely. The numerical simulation of the complex Langevin equations is carried out using the Cholesky decomposition for the noise covariance matrix. This internal noise, which is due to diffusion-limited reactions, produces unavoidable spatio-temporal density fluctuations about the mean field value. In two dimensions, this noise strictly vanishes only in the perfectly mixed limit, a situation difficult to attain in practice.
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2014-05-01
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
Complex Behavior in Simple Models of Biological Coevolution
NASA Astrophysics Data System (ADS)
Rikvold, Per Arne
We explore the complex dynamical behavior of simple predator-prey models of biological coevolution that account for interspecific and intraspecific competition for resources, as well as adaptive foraging behavior. In long kinetic Monte Carlo simulations of these models we find quite robust 1/f-like noise in species diversity and population sizes, as well as power-law distributions for the lifetimes of individual species and the durations of quiet periods of relative evolutionary stasis. In one model, based on the Holling Type II functional response, adaptive foraging produces a metastable low-diversity phase and a stable high-diversity phase.
Modeling of Carbohydrate Binding Modules Complexed to Cellulose
Nimlos, M. R.; Beckham, G. T.; Bu, L.; Himmel, M. E.; Crowley, M. F.; Bomble, Y. J.
2012-01-01
Modeling results are presented for the interaction of two carbohydrate binding modules (CBMs) with cellulose. The family 1 CBM from Trichoderma reesei's Cel7A cellulase was modeled using molecular dynamics to confirm that this protein selectively binds to the hydrophobic (100) surface of cellulose fibrils and to determine the energetics and mechanisms for locating this surface. Modeling was also conducted of binding of the family 4 CBM from the CbhA complex from Clostridium thermocellum. There is a cleft in this protein, which may accommodate a cellulose chain that is detached from crystalline cellulose. This possibility is explored using molecular dynamics.
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.
2013-12-01
For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is commonly inferred from use of a model with fewer simplifications of the governing equations because it can be time consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially-distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
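A two-parameter rainfall-discharge calibration with SciPy, as the abstract describes, might look like the sketch below. The exponential-decay memory kernel is an assumed stand-in for the paper's seasonal scaling and lag terms, and the rainfall and discharge series are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

def simulate(params, rain):
    """Two-parameter conceptual discharge model: a gain `a` and a memory
    length `tau` applied as an exponential-decay convolution of weekly
    rainfall (an assumed, simplified form of a benchmark-discharge model)."""
    a, tau = params
    kernel = np.exp(-np.arange(20) / max(tau, 1e-6))
    kernel /= kernel.sum()
    return a * np.convolve(rain, kernel)[: len(rain)]

def sse(params, rain, obs):
    """Sum-of-squared-errors objective for calibration."""
    return np.sum((simulate(params, rain) - obs) ** 2)

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 1.0, size=200)        # synthetic weekly rainfall
obs = simulate([0.7, 3.0], rain)            # synthetic "observed" discharge
obs = obs + rng.normal(0, 0.01, size=obs.shape)

# BFGS calibration via scipy.optimize, as in the abstract.
fit = minimize(sse, x0=[1.0, 1.0], args=(rain, obs), method="BFGS")
a_hat, tau_hat = fit.x
```

The same few lines of Python stand in for the "simple benchmark model" role the abstract describes: its calibrated fit gives the baseline against which the complex model's extra explanatory power is measured.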
Bridging Mechanistic and Phenomenological Models of Complex Biological Systems
Transtrum, Mark K.; Qiu, Peng
2016-01-01
The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for adaptation behavior exhibited by the EGFR pathway. From a 48 parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and recovery time of the system which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior. PMID:27187545
Mathematical and Computational Modeling in Complex Biological Systems
Li, Wenyang; Zhu, Xiaoliang
2017-01-01
The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinical doctors to understand. Recent developments in high-throughput technologies push systems biology toward more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and systemic modeling of biological processes in cancer research. In this review, we first studied several typical mathematical modeling approaches of biological systems at different scales and analyzed their characteristics, advantages, applications, and limitations in depth. Next, three potential research directions in systems modeling were summarized. To conclude, this review provides an update on important solutions using computational modeling approaches in systems biology. PMID:28386558
An Adaptive Complex Network Model for Brain Functional Networks
Gomez Portillo, Ignacio J.; Gleiser, Pablo M.
2009-01-01
Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small world topological structure, characterized by highly integrated modules connected sparsely by long range links. Recent studies showed that other topological properties such as the degree distribution and the presence (or absence) of a hierarchical structure are not robust, and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network scale-free non-hierarchical networks with well defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure, with a truncated power law degree distribution. PMID:19738902
Mathematical modelling of complex contagion on clustered networks
NASA Astrophysics Data System (ADS)
O'Sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James
2015-09-01
The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the "complex contagion" effects of social reinforcement are important in such diffusion, in contrast to "simple" contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe, as in Centola's experiments, faster diffusion on clustered topologies than on random networks.
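The social-reinforcement mechanism (adoption requires two or more adopting neighbours) can be illustrated with a deterministic threshold cascade on a clustered ring lattice; this is a minimal sketch of the contagion rule, not the paper's analytical model:

```python
def ring_lattice(n, k=4):
    """Clustered ring lattice: each node links to its k nearest
    neighbours, which guarantees many triangles."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            nbrs[i] |= {(i + d) % n, (i - d) % n}
    return nbrs

def complex_contagion(nbrs, seeds, threshold=2):
    """Deterministic threshold model: a node adopts once at least
    `threshold` of its neighbours have adopted (social reinforcement)."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nb in nbrs.items():
            if v not in adopted and len(nb & adopted) >= threshold:
                adopted.add(v)
                changed = True
    return adopted

g = ring_lattice(100, k=4)
final = complex_contagion(g, seeds={0, 1})  # two adjacent seeds
```

On this triangle-rich lattice the two adjacent seeds give every nearby node two adopting neighbours, so the cascade wraps around the entire ring; on a locally tree-like random graph the same threshold-2 rule typically stalls at the seeds, which is the clustering effect Centola observed.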
RHIC injector complex online model status and plans
Schoefer, V.; Ahrens, L.; Brown, K.; Morris, J.; Nemesure, S.
2009-05-04
An online modeling system is being developed for the RHIC injector complex, which consists of the Booster, the AGS and the transfer lines connecting the Booster to the AGS and the AGS to RHIC. Historically the injectors have been operated using static values from design specifications or offline model runs, but tighter beam optics constraints required by polarized proton operations (e.g., accelerating with near-integer tunes) have necessitated a more dynamic system. An online model server for the AGS has been implemented using MAD-X [1] as the model engine, with plans to extend the system to the Booster and the injector transfer lines and to add the option of calculating optics using the Polymorphic Tracking Code (PTC [2]) as the model engine.
Lateral organization of complex lipid mixtures from multiscale modeling
NASA Astrophysics Data System (ADS)
Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.
2010-02-01
The organizational properties of complex lipid mixtures can give rise to functionally important structures in cell membranes. In model membranes, ternary lipid-cholesterol (CHOL) mixtures are often used as representative systems to investigate the formation and stabilization of localized structural domains ("rafts"). In this work, we describe a self-consistent mean-field model that builds on molecular dynamics simulations to incorporate multiple lipid components and to investigate the lateral organization of such mixtures. The model predictions reveal regions of bimodal order on ternary plots that are in good agreement with experiment. Specifically, we have applied the model to ternary mixtures composed of dioleoylphosphatidylcholine:18:0 sphingomyelin:CHOL. This work provides insight into the specific intermolecular interactions that drive the formation of localized domains in these mixtures. The model makes use of molecular dynamics simulations to extract interaction parameters and to provide chain configuration order parameter libraries.
Simplifying complex clinical element models to encourage adoption.
Freimuth, Robert R; Zhu, Qian; Pathak, Jyotishman; Chute, Christopher G
2014-01-01
Clinical Element Models (CEMs) were developed to provide a normalized form for the exchange of clinical data. The CEM specification is quite complex and specialized knowledge is required to understand and implement the models, which presents a significant barrier to investigators and study designers. To encourage the adoption of CEMs at the time of data collection and reduce the need for retrospective normalization efforts, we developed an approach that provides a simplified view of CEMs for non-experts while retaining the full semantic detail of the underlying logical models. This allows investigators to approach CEMs through generalized representations that are intended to be more intuitive than the native models, and it permits them to think conceptually about their data elements without worrying about details related to the CEM logical models and syntax. We demonstrate our approach using data elements from the Pharmacogenomics Research Network (PGRN).
Mechanistic modeling confronts the complexity of molecular cell biology.
Phair, Robert D
2014-11-05
Mechanistic modeling has the potential to transform how cell biologists contend with the inescapable complexity of modern biology. I am a physiologist-electrical engineer-systems biologist who has been working at the level of cell biology for the past 24 years. This perspective aims 1) to convey why we build models, 2) to enumerate the major approaches to modeling and their philosophical differences, 3) to address some recurrent concerns raised by experimentalists, and then 4) to imagine a future in which teams of experimentalists and modelers build, and subject to exhaustive experimental tests, models covering the entire spectrum from molecular cell biology to human pathophysiology. There is, in my view, no technical obstacle to this future, but it will require some plasticity in the biological research mind-set.
Cx-02 Program, workshop on modeling complex systems
Mossotti, Victor G.; Barragan, Jo Ann; Westergard, Todd D.
2003-01-01
This publication contains the abstracts and program for the workshop on complex systems that was held on November 19-21, 2002, in Reno, Nevada. Complex systems are ubiquitous within the realm of the earth sciences. Geological systems consist of a multiplicity of linked components with nested feedback loops; the dynamics of these systems are non-linear, iterative, multi-scale, and operate far from equilibrium. That notwithstanding, it appears that, with the exception of papers on seismic studies, work in geology and geophysics has been disproportionately underrepresented at regional and national meetings on complex systems relative to papers in the life sciences. This is somewhat puzzling because geologists and geophysicists are, in many ways, preadapted to thinking of complex system mechanisms. Geologists and geophysicists think about processes involving large volumes of rock below the sunlit surface of Earth, the accumulated consequence of processes extending hundreds of millions of years into the past. Not only do geologists think in the abstract by virtue of the vast time spans; most of the evidence is also out of sight. A primary goal of this workshop is to begin to bridge the gap between the Earth sciences and life sciences through demonstration of the universality of complex systems science, both philosophically and in model structures.
Computational and analytical modeling of cationic lipid-DNA complexes.
Farago, Oded; Grønbech-Jensen, Niels
2007-05-01
We present a theoretical study of the physical properties of cationic lipid-DNA (CL-DNA) complexes--a promising synthetically based nonviral carrier of DNA for gene therapy. The study is based on a coarse-grained molecular model, which is used in Monte Carlo simulations of mesoscopically large systems over timescales long enough to address experimental reality. In the present work, we focus on the statistical-mechanical behavior of lamellar complexes, which in Monte Carlo simulations self-assemble spontaneously from a disordered random initial state. We measure the DNA-interaxial spacing, d(DNA), and the local cationic area charge density, sigma(M), for a wide range of values of the parameter (c) representing the fraction of cationic lipids. For weakly charged complexes (low values of (c)), we find that d(DNA) has a linear dependence on (c)(-1), which is in excellent agreement with x-ray diffraction experimental data. We also observe, in qualitative agreement with previous Poisson-Boltzmann calculations of the system, large fluctuations in the local area charge density with a pronounced minimum of sigma(M) halfway between adjacent DNA molecules. For highly charged complexes (large (c)), we find moderate charge density fluctuations and observe deviations from linear dependence of d(DNA) on (c)(-1). This last result, together with other findings such as the decrease in the effective stretching modulus of the complex and the increased rate at which pores are formed in the complex membranes, is indicative of the gradual loss of mechanical stability of the complex, which occurs when (c) becomes large. We suggest that this may be the origin of the recently observed enhanced transfection efficiency of lamellar CL-DNA complexes at high charge densities, because the completion of the transfection process requires the disassembly of the complex and the release of the DNA into the cytoplasm. Some of the structural properties of the system are also predicted by a continuum
NASA Astrophysics Data System (ADS)
Henriot, Abel; Blavoux, Bernard; Travi, Yves; Lachassagne, Patrick; Beon, Olivier; Dewandel, Benoit; Ladouche, Bernard
2013-04-01
The Evian Natural Mineral Water (NMW) aquifer is a highly heterogeneous Quaternary glacial deposits complex composed of three main units, from bottom to top: - The "Inferior Complex", mainly composed of basal and impermeable till lying on the Alpine rocks. It outcrops only at the higher altitudes but is known at depth through drilled holes. - The "Gavot Plateau Complex", an interstratified complex of mainly basal and lateral till up to 400 m thick. It outcrops at heights above approximately 850 m a.m.s.l. and up to 1200 m a.m.s.l. over a 30 km² area. It is the main known recharge area for the hydromineral system. - The "Terminal Complex", from which the Evian NMW emerges at 410 m a.m.s.l. It is composed of sand and gravel Kame terraces that allow water to flow from the deep "Gavot Plateau Complex" permeable layers to the "Terminal Complex". A thick and impermeable terminal till caps and seals the system. The aquifer is thus confined in its downstream area. Because of the heterogeneity and complexity of this hydrosystem, distributed modeling tools are difficult to implement at the whole-system scale: important hypotheses would have to be made about geometry, hydraulic properties and boundary conditions, for example, and extrapolation would no doubt lead to unacceptable errors. Consequently, a modeling strategy is being developed that also improves the conceptual model of the hydrosystem. Lumped models, mainly based on tritium time series, allow the whole hydrosystem to be modeled by combining in series an exponential model (superficial aquifers of the "Gavot Plateau Complex"), a dispersive model (Gavot Plateau interstratified complex) and a piston flow model (sand and gravel from the Kame terraces), with mean transit times of 8, 60 and 2.5 years, respectively. These models provide insight into the governing parameters of the whole mineral aquifer. They help improve the current conceptual model and are to be refined with other environmental tracers such as CFC and SF6.
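Combining lumped transit-time models in series, as the abstract describes, amounts to convolving the transfer functions of the individual units. A rough numerical sketch follows; the 8, 60 and 2.5 year mean transit times come from the abstract, while the grid resolution is an assumption and the dispersive unit is approximated here by a second exponential for brevity.

```python
import numpy as np

# Series combination of lumped transit-time models: the overall response is
# the convolution of each unit's transfer function on a common time grid.
dt = 0.1                                   # years (assumed grid resolution)
t = np.arange(0.0, 400.0, dt)

def exponential(tau):                      # exponential (well-mixed) model
    g = np.exp(-t / tau) / tau
    return g / (g.sum() * dt)              # renormalise to unit area on the grid

def piston(tau):                           # piston flow model = pure delay
    g = np.zeros_like(t)
    g[round(tau / dt)] = 1.0 / dt
    return g

series = np.convolve(np.convolve(exponential(8.0), exponential(60.0))[:t.size],
                     piston(2.5))[:t.size] * dt * dt
mean_tt = (t * series).sum() / series.sum()
print(mean_tt)                             # near 8 + 60 + 2.5 = 70.5 years
```

Mean transit times add under convolution, so the combined response has a mean near 70.5 years (slightly less here because the grid truncates the long exponential tail).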
Modelling of Rare Earth Elements Complexation With Humic Acid
NASA Astrophysics Data System (ADS)
Pourret, O.; Davranche, M.; Gruau, G.; Dia, A.
2006-12-01
The binding of rare earth elements (REE) to humic acid (HA) was studied by combining ultrafiltration and ICP-MS techniques. REE-HA complexation experiments were performed at various pH conditions (ranging from 2 to 10.5) using a standard batch equilibration method. Results show that the amount of REE bound to HA strongly increases with increasing pH. Moreover, a Middle REE (MREE) downward concavity is evidenced by REE distribution patterns at acidic pH. Modelling of the experimental data using Humic Ion Binding Model VI provided a set of log KMA values (i.e. the REE-HA complexation constants specific to Model VI) for the entire REE series. The log KMA pattern obtained displays a MREE downward concavity. Log KMA values range from 2.42 to 2.79. These binding constants are in good agreement with the few existing datasets quantifying the binding of REE with humic substances, except for a recently published study which evidences a lanthanide contraction effect (i.e. a continuous increase of the constant from La to Lu). The MREE downward concavity displayed by the REE-HA complexation pattern determined in this study compares well with results from REE-fulvic acid (FA) and REE-acetic acid complexation studies. This similarity in the REE complexation pattern shapes suggests that carboxylic groups are the main binding sites of REE in HA. This conclusion is further supported by a detailed review of published studies for natural, organic-rich, river- and ground-waters which show no evidence of a lanthanide contraction effect in REE pattern shape. Finally, application of Model VI using the new, experimentally determined log KMA values to World Average River Water confirms earlier suggestions that REE occur predominantly as organic complexes (> 60 %) in the pH range between 5-5.5 and 7-8.5 (i.e. in circumneutral pH waters). The only significant difference as compared to earlier model predictions made using estimated log KMA values is that the experimentally determined log KMA values
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01
The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
An Ontology for Modeling Complex Inter-relational Organizations
NASA Astrophysics Data System (ADS)
Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel
This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. First, the paper reviews the enterprise-ontology literature to position our proposal and exposes its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram gives an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.
Polygonal Shapes Detection in 3d Models of Complex Architectures
NASA Astrophysics Data System (ADS)
Benciolini, G. B.; Vitti, A.
2015-02-01
A sequential application of two global models defined on a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure involves the use of the Mumford and Shah (1989) 1st-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation derived by a specific processing of the auxiliary function are then processed using the Blake and Zisserman (1987) 2nd-order variational model in dimension one (edges are processed in the plane). This second step permits describing the edges of an object by means of a piecewise almost-linear approximation of the input edges themselves and detecting sharp changes in the first derivative of the edges so as to detect corners. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension for the refinement of the description of the edges. The selection, among all the boundaries detected by the Mumford-Shah model, of those that present a shape close to a polygon is performed by considering only those boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The outputs of the procedure are hence shapes, derived from 3D geometric data, that can be considered as polygons. The application of the procedure is suitable for, but not limited to, the detection of objects such as footprints of polygonal buildings, building facade boundaries or window contours. The procedure is applied to a height model of the building of the Engineering
Bhakta, Dipita; Siva, Ramamoorthy
2012-06-01
Plant dyes have been in use for coloring and varied purposes since prehistoric times. A red dye found in the roots of plants belonging to the genus Morinda is a well recognized coloring ingredient. The dye fraction obtained from the methanolic extract of the roots of Morinda tinctoria was explored for its role in attenuating damages caused by H(2)O(2)-induced oxidative stress. The antioxidant potential of the dye fraction was assessed through DPPH radical scavenging, deoxyribose degradation and inhibition of lipid peroxidation in mice liver. It was subsequently screened for its efficiency in attenuating damage incurred by biomembranes (using erythrocytes and their ghost membranes) and macromolecules (pBR322 DNA, lipids and proteins) from exposure to hydrogen peroxide. In addition, the non-toxic nature of the dye was supported by the histological evaluation conducted on tissue sections from the major organs of Swiss Albino mice as well as its effect on the Hep3B cell line (human hepatic carcinoma). The LC-MS analysis confirms the dye fraction to be morindone. Our study strongly suggests that morindone, present in the root extracts of M. tinctoria, in addition to being a colorant, definitely holds promise in the pharmaceutical industry.
2013-01-01
Despite a long history in medical and dental application, the molecular mechanism and precise site of action are still debated for local anesthetics. Their effects are considered to be induced by acting on functional proteins, on membrane lipids, or on both. Local anesthetics primarily interact with sodium channels embedded in cell membranes to reduce the excitability of nerve cells and cardiomyocytes or produce a malfunction of the cardiovascular system. However, the membrane protein-interacting theory cannot explain all of the pharmacological and toxicological features of local anesthetics. The administered drug molecules must diffuse through the lipid barriers of nerve sheaths and penetrate into or across the lipid bilayers of cell membranes to reach the acting site on transmembrane proteins. Amphiphilic local anesthetics interact hydrophobically and electrostatically with lipid bilayers and modify their physicochemical property, with the direct inhibition of membrane functions, and with the resultant alteration of the membrane lipid environments surrounding transmembrane proteins and the subsequent protein conformational change, leading to the inhibition of channel functions. We review recent studies on the interaction of local anesthetics with biomembranes consisting of phospholipids and cholesterol. Understanding the membrane interactivity of local anesthetics would provide novel insights into their anesthetic and cardiotoxic effects. PMID:24174934
A Hybridization Model for the Plasmon Response of Complex Nanostructures
NASA Astrophysics Data System (ADS)
Prodan, E.; Radloff, C.; Halas, N. J.; Nordlander, P.
2003-10-01
We present a simple and intuitive picture, an electromagnetic analog of molecular orbital theory, that describes the plasmon response of complex nanostructures of arbitrary shape. Our model can be understood as the interaction or ``hybridization'' of elementary plasmons supported by nanostructures of elementary geometries. As an example, the approach is applied to the important case of a four-layer concentric nanoshell, where the hybridization of the plasmons of the inner and outer nanoshells determines the resonant frequencies of the multilayer nanostructure.
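The hybridization picture in this abstract can be caricatured as two coupled oscillators whose squared frequencies mix through an off-diagonal coupling, yielding "bonding" and "antibonding" plasmon modes. The sketch below is a toy illustration; the frequencies and coupling strength are arbitrary assumptions, not the paper's values.

```python
import numpy as np

# Toy plasmon hybridization: two primitive modes with frequencies w1 and w2
# (arbitrary energy units) mix through a coupling kappa acting on the
# squared frequencies, in analogy with molecular orbital theory.
def hybridize(w1, w2, kappa):
    H = np.array([[w1**2, kappa],
                  [kappa, w2**2]])
    w_sq = np.linalg.eigvalsh(H)           # eigenvalues in ascending order
    return np.sqrt(w_sq)                   # (bonding, antibonding) frequencies

low, high = hybridize(w1=2.0, w2=2.5, kappa=1.0)
print(low, high)  # bonding mode shifts below w1, antibonding above w2
```

The level repulsion is the qualitative point: the coupled system's resonances straddle the primitive frequencies, just as the inner- and outer-shell plasmons of a concentric nanoshell mix into shifted multilayer resonances.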
Termination of Multipartite Graph Series Arising from Complex Network Modelling
NASA Astrophysics Data System (ADS)
Latapy, Matthieu; Phan, Thi Ha Duong; Crespelle, Christophe; Nguyen, Thanh Qui
An intense activity is nowadays devoted to the definition of models capturing the properties of complex networks. Among the most promising approaches, it has been proposed to model these graphs via their clique incidence bipartite graphs. However, this approach has, until now, severe limitations resulting from its incapacity to reproduce a key property of this object: the overlapping nature of cliques in complex networks. To overcome these limitations, we propose to encode the structure of clique overlaps in a network via a process that iteratively factorises the maximal bicliques between the upper level and the other levels of a multipartite graph. We show that the most natural definition of this factorising process leads to infinite series for some instances. Our main result is to design a restriction of this process that terminates for any arbitrary graph. Moreover, we show that the resulting multipartite graph has remarkable combinatorial properties and is closely related to another fundamental combinatorial object. Finally, we show that, in practice, this multipartite graph is computationally tractable and has a size that makes it suitable for complex network modelling.
A Simple Model for Complex Dynamical Transitions in Epidemics
NASA Astrophysics Data System (ADS)
Earn, David J. D.; Rohani, Pejman; Bolker, Benjamin M.; Grenfell, Bryan T.
2000-01-01
Dramatic changes in patterns of epidemics have been observed throughout this century. For childhood infectious diseases such as measles, the major transitions are between regular cycles and irregular, possibly chaotic epidemics, and from regionally synchronized oscillations to complex, spatially incoherent epidemics. A simple model can explain both kinds of transitions as the consequences of changes in birth and vaccination rates. Measles is a natural ecological system that exhibits different dynamical transitions at different times and places, yet all of these transitions can be predicted as bifurcations of a single nonlinear model.
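A minimal numerical sketch of the kind of model referred to above is a seasonally forced SIR system with a birth rate, in which changing the birth rate moves the system between dynamical regimes. The parameter values below are illustrative assumptions (roughly measles-like), not the fitted values used in the paper.

```python
import math

# Seasonally forced SIR model with births, integrated by explicit Euler.
def sir_step(s, i, t, dt, mu, beta0, beta1, gamma):
    beta = beta0 * (1.0 + beta1 * math.cos(2.0 * math.pi * t))  # annual forcing
    ds = mu - beta * s * i - mu * s        # susceptibles: births in, infection/death out
    di = beta * s * i - (gamma + mu) * i   # infectives: infection in, recovery/death out
    return s + dt * ds, i + dt * di

s, i, t, dt = 0.06, 0.001, 0.0, 0.0005     # time in years
for _ in range(int(20 / dt)):              # integrate 20 years
    s, i = sir_step(s, i, t, dt, mu=0.02, beta0=500.0, beta1=0.08, gamma=365.0 / 13.0)
    t += dt
print(s, i)                                # fractions remain in (0, 1)
```

Rescanning `mu` (the birth rate) in such a model shifts the effective transmission rate and can carry the attractor through bifurcations between annual, biennial and irregular epidemics, which is the mechanism the abstract invokes.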
The evaluative imaging of mental models - Visual representations of complexity
NASA Technical Reports Server (NTRS)
Dede, Christopher
1989-01-01
The paper deals with some design issues involved in building a system that could visually represent the semantic structures of training materials and their underlying mental models. In particular, hypermedia-based semantic networks that instantiate classification problem solving strategies are thought to be a useful formalism for such representations; the complexity of these web structures can be best managed through visual depictions. It is also noted that a useful approach to implement in these hypermedia models would be some metrics of conceptual distance.
Complex tephra dispersion from 3D plume modeling using ATHAM
NASA Astrophysics Data System (ADS)
Nicholson, B. C.; Kobs-Nawotniak, S. E.
2014-12-01
Most volcanic hazard assessments are based on a classic inversion tool for tephra deposits that relies on a simple integral model to explain the eruption plume. While this tool is adequate for first-order predictions of tephra deposition under no-wind conditions, the simplifying assumptions make it unreliable for ambient winds >10 m/s. Advances in computational power now make it possible to improve the inversion tool using 3D fluid dynamics. We do this with the physics-based Active Tracer High-resolution Atmospheric Model (ATHAM) to model tephra dispersion and deposition from volcanic eruption columns. The model, when run in 3D, is able to capture the complex morphology of bent plumes. Tephra distributions produced by these morphologies differ significantly from distributions created by idealized advection solutions, reflecting the effects of counter-rotating vortex pairs, puffing modes, or plume bifurcation. The modeled tephra deposition better captures the complex effects of wind-plume interaction, allowing us to update classic inversion tools with more realistic weak plume conditions consistent with typical historical explosive eruptions.
Cornish; Wood; Windle
1999-11-01
The physical characteristics of rubber particles from the four rubber (cis-1,4-polyisoprene) producing species Euphorbia lactiflua Phil., Ficus elastica Roxb., Hevea brasiliensis Mull. Arg., and Parthenium argentatum Gray, were investigated using transmission electron microscopy (TEM) and electron-paramagnetic-resonance (EPR) spin labeling spectroscopy. Transmission electron microscopy showed the rubber particles to be composed of a spherical, homogeneous, core of rubber enclosed by a contiguous, electron-dense, single-track surface layer. The biochemical composition of the surface layer and its single-track TEM suggested that a monolayer biomembrane was the surface structure most compatible with the hydrophobic rubber core. The EPR spectra for a series of positional isomers of doxyl stearic acid, used to label the surface layer of the rubber particles, exhibited flexibility gradients and evidence for lipid-protein interactions for all four rubber particle types that is consistent with a biomembrane-like surface. The EPR spectra confirmed that the surface biomembrane is a monolayer. Thus, rubber particles appear similar to oil bodies in their basic architecture. The EPR spectra also provided information on protein location and degree of biomembrane penetration that correlated with the known properties of the rubber-particle-bound proteins. The monolayer biomembrane serves as an interface between the hydrophobic rubber interior and the aqueous cytosol and prevents aggregation of the particles. An unexpected observation for the probes in pure polyisoprene was evidence of an intrinsic flexibility gradient associated with the stearic acid molecule itself.
Hierarchical Model for the Evolution of Cloud Complexes
NASA Astrophysics Data System (ADS)
Sánchez D., Néstor M.; Parravano, Antonio
1999-01-01
The structure of cloud complexes appears to be well described by a tree structure (i.e., a simplified ``stick man'') representation when the image is partitioned into ``clouds.'' In this representation, the parent-child relationships are assigned according to containment. Based on this picture, a hierarchical model for the evolution of cloud complexes, including star formation, is constructed. The model follows the mass evolution of each substructure by computing its mass exchange with its parent and children. The parent-child mass exchange (evaporation or condensation) depends on the radiation density at the interface. At the end of the ``lineage,'' stars may be born or die, so that there is a nonstationary mass flow in the hierarchical structure. For a variety of parameter sets the system follows the same series of steps to transform diffuse gas into stars, and the regulation of the mass flux in the tree by previously formed stars dominates the evolution of the star formation. For the set of parameters used here as a reference model, the system tends to produce initial mass functions (IMFs) that have a maximum at a mass that is too high (~2 Msolar) and the characteristic times for evolution seem too long. We show that these undesired properties can be improved by adjusting the model parameters. The model requires further physics (e.g., allowing for multiple stellar systems and clump collisions) before a definitive comparison with observations can be made. Instead, the emphasis here is to illustrate some general properties of this kind of complex nonlinear model for the star formation process. Notwithstanding the simplifications involved, the model reveals an essential feature that will likely remain if additional physical processes are included, that is, the detailed behavior of the system is very sensitive to variations in the initial and external conditions, suggesting that a ``universal'' IMF is very unlikely. When an ensemble of IMFs corresponding to a
Hybrid Structural Model of the Complete Human ESCRT-0 Complex
Ren, Xuefeng; Kloer, Daniel P.; Kim, Young C.; Ghirlando, Rodolfo; Saidi, Layla F.; Hummer, Gerhard; Hurley, James H.
2009-03-31
The human Hrs and STAM proteins comprise the ESCRT-0 complex, which sorts ubiquitinated cell surface receptors to lysosomes for degradation. Here we report a model for the complete ESCRT-0 complex based on the crystal structure of the Hrs-STAM core complex, previously solved domain structures, hydrodynamic measurements, and Monte Carlo simulations. ESCRT-0 expressed in insect cells has a hydrodynamic radius of R_H = 7.9 nm and is a 1:1 heterodimer. The 2.3 Å crystal structure of the ESCRT-0 core complex reveals two domain-swapped GAT domains and an antiparallel two-stranded coiled-coil, similar to yeast ESCRT-0. ESCRT-0 typifies a class of biomolecular assemblies that combine structured and unstructured elements, and have dynamic and open conformations to ensure versatility in target recognition. Coarse-grained Monte Carlo simulations constrained by experimental R_H values for ESCRT-0 reveal a dynamic ensemble of conformations well suited for diverse functions.
Context dependent preferential attachment model for complex networks
NASA Astrophysics Data System (ADS)
Pandey, Pradumn Kumar; Adhikari, Bibhas
2015-10-01
In this paper, we propose a growing random complex network model, which we call the context dependent preferential attachment model (CDPAM), in which the preference of a new node to attach to old nodes is determined by both local and global properties of the old nodes. We take the local and global properties of a node to be its degree and its relative average degree, respectively. We prove that the degree distribution of complex networks generated by CDPAM follows a power law with exponent lying in the interval [2,3], and that the expected diameter grows logarithmically with the number of new nodes added to the initial small network. Numerical results show that the expected diameter stabilizes when similar weights to the local and global properties are assigned by the new nodes. Computing various measures, including clustering coefficient, assortativity, number of triangles, algebraic connectivity, and spectral radius, we show that the proposed model replicates properties of real networks when similar weights are given to the local and global property. Finally, we observe that the BA model is a limiting case of CDPAM when new nodes tend to give large weight to the local property compared to the weight given to the global property during link formation.
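One loose reading of the attachment rule described above can be sketched as a growth process whose attachment score blends a node's degree with its degree relative to the network mean. The score function and weighting below are assumptions for illustration, not the exact CDPAM kernel; pushing `w_local` toward 1 recovers BA-style attachment.

```python
import random

def cdpam_like(n, m=2, w_local=0.5):
    """Grow a graph where a new node attaches to m distinct old nodes with
    probability proportional to
        w_local * degree + (1 - w_local) * degree / mean_degree.
    Loose sketch of context-dependent preferential attachment (assumption)."""
    deg = {0: 1, 1: 1}                     # start from a single edge
    edges = [(0, 1)]
    for new in range(2, n):
        mean_deg = sum(deg.values()) / len(deg)
        scores = {v: w_local * k + (1 - w_local) * k / mean_deg
                  for v, k in deg.items()}
        nodes, weights = zip(*scores.items())
        targets = set()
        while len(targets) < min(m, len(deg)):
            targets.add(random.choices(nodes, weights=weights)[0])
        for v in targets:
            edges.append((new, v))
            deg[v] += 1
        deg[new] = len(targets)
    return deg, edges

random.seed(0)
deg, edges = cdpam_like(500, m=2, w_local=0.7)
print(max(deg.values()))  # hubs emerge: max degree far exceeds the mean
```

Plotting the sorted degrees of such a run on log-log axes shows the heavy tail the abstract's power-law result describes; varying `w_local` changes how strongly early nodes dominate.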
Molecular modeling of the neurophysin I/oxytocin complex
NASA Astrophysics Data System (ADS)
Kazmierkiewicz, R.; Czaplewski, C.; Lammek, B.; Ciarkowski, J.
1997-01-01
Neurophysins I and II (NPI and NPII) act in the neurosecretory granules as carrier proteins for the neurohypophyseal hormones oxytocin (OT) and vasopressin (VP), respectively. The NPI/OT functional unit, believed to be an (NPI/OT)2 heterotetramer, was modeled using low-resolution structure information, viz. the Cα carbon atom coordinates of the homologous NPII/dipeptide complex (file 1BN2 in the Brookhaven Protein Databank) as a template. Its all-atom representation was obtained using standard modeling tools available within the INSIGHT/Biopolymer modules supplied by Biosym Technologies Inc. A conformation of the NPI-bound OT, similar to that recently proposed in a transfer NOE experiment, was docked into the ligand-binding site by a superposition of its Cys1-Tyr2 fragment onto the equivalent portion of the dipeptide in the template. The starting complex for the initial refinements was prepared by two alternative strategies, termed Model I and Model II, each ending with a ~100 ps molecular dynamics (MD) simulation in water using the AMBER 4.1 force field. The free homodimer NPI2 was obtained by removal of the two OT subunits from their sites, followed by a similar structure refinement. The use of Model I, consisting of a constrained simulated annealing, resulted in a structure remarkably similar to both the NPII/dipeptide complex and a recently published solid-state structure of the NPII/OT complex. Thus, Model I is recommended as the method of choice for the preparation of the starting all-atom data for MD. The MD simulations indicate that, both in the homodimer and in the heterotetramer, the 310-helices demonstrate an increased mobility relative to the remaining body of the protein. Also, the C-terminal domains in the NPI2 homodimer are more mobile than the N-terminal ones. Finally, a distinct intermonomer interaction is identified, concentrated around its most prominent, although not unique, contribution provided by an H-bond from Ser25 Oγ in one NPI unit to Glu81 Oɛ in the other
Semiotic aspects of control and modeling relations in complex systems
Joslyn, C.
1996-08-01
A conceptual analysis of the semiotic nature of control is provided with the goal of elucidating its nature in complex systems. Control is identified as a canonical form of semiotic relation of a system to its environment. As a form of constraint between a system and its environment, its necessary and sufficient conditions are established, and the stabilities resulting from control are distinguished from other forms of stability. These result from the presence of semantic coding relations, and thus the class of control systems is hypothesized to be equivalent to that of semiotic systems. Control systems are contrasted with models, which, while they have the same measurement functions as control systems, do not necessarily require semantic relations because of the lack of the requirement of an interpreter. A hybrid construction of models in control systems is detailed. Towards the goal of considering the nature of control in complex systems, the possible relations among collections of control systems are considered. Powers' arguments on conflict among control systems and on the possible nature of control in social systems are reviewed, and reconsidered based on our observations about hierarchical control. Finally, we discuss the necessary semantic functions which must be present in complex systems for control in this sense to be present at all.
Tripathi, Ashutosh; Fornabaio, Micaela; Spyrakis, Francesca; Mozzarelli, Andrea; Cozzini, Pietro; Kellogg, Glen E
2007-11-01
The computational-titration (CT) algorithm based on the 'natural' Hydropathic INTeractions (HINT) force field is described. The HINT software model is an empirical, non-Newtonian force field derived from experimentally measured partition coefficients for solvent transfer between octanol and H(2)O (log P(o/w)). The CT algorithm allows the identification, modeling, and optimization of multiple protonation states of residues and ligand functional groups at the protein-ligand active site. The importance of taking into account pH and ionization states of residues, which strongly affect the process of ligand binding, for correctly predicting binding free energies is discussed. The application of the CT protocol to a set of six cyclic inhibitors in their complexes with HIV-1 protease is presented, and the advance of HINT as a virtual-screening tool is outlined.
Spatio-temporal modelling of lightning climatologies for complex terrain
NASA Astrophysics Data System (ADS)
Simon, Thorsten; Umlauf, Nikolaus; Zeileis, Achim; Mayr, Georg J.; Schulz, Wolfgang; Diendorfer, Gerhard
2017-03-01
This study develops methods for estimating lightning climatologies on the day⁻¹ km⁻² scale for regions with complex terrain and applies them to summertime observations (2010-2015) of the lightning location system ALDIS in the Austrian state of Carinthia in the Eastern Alps. Generalized additive models (GAMs) are used to model both the probability of occurrence and the intensity of lightning. Additive effects are set up for altitude, day of the year (season) and geographical location (longitude/latitude). The performance of the models is verified by 6-fold cross-validation. The altitude effect of the occurrence model suggests higher probabilities of lightning at higher elevations. The seasonal effect peaks in mid-July. The spatial effect models several local features, but there is a pronounced minimum in the north-west and a clear maximum in the eastern part of Carinthia. The estimated effects of the intensity model reveal similar features, though they are not identical. The main difference is that the spatial effect varies more strongly than the analogous effect of the occurrence model. A major asset of the introduced method is that the resulting climatological information varies smoothly over space, time and altitude. Thus, the climatology can serve as a useful tool in quantitative applications, e.g., risk assessment and weather prediction.
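The occurrence component of such a GAM can be caricatured with a plain logistic model in which a single annual harmonic and a linear altitude term stand in for the fitted spline smooths; all coefficient values below are invented for illustration and are not estimates from the study.

```python
import numpy as np

def occurrence_probability(altitude_km, day_of_year, beta0=-3.0,
                           beta_alt=0.8, a1=1.2, b1=0.4):
    """Toy logistic occurrence model: P(lightning) at a site on a given day.

    The smooth GAM effects are stood in for by a linear altitude term and
    one annual harmonic peaking near mid-July (day ~196). Coefficients
    are illustrative, not fitted values.
    """
    season = np.cos(2 * np.pi * (day_of_year - 196) / 365.25)
    eta = beta0 + beta_alt * altitude_km + a1 * season + b1
    return 1.0 / (1.0 + np.exp(-eta))

# Higher elevation and mid-July both raise the modeled probability.
p_valley_jan = occurrence_probability(0.5, 15)
p_peak_jul = occurrence_probability(2.5, 196)
```

A fitted version would replace the harmonic and linear terms with penalized splines over altitude, season, and longitude/latitude.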
Wrapped-around models for the lac operon complex.
La Penna, Giovanni; Perico, Angelo
2010-06-16
The protein-DNA complex, involved in the lac operon of enteric bacteria, is paradigmatic in understanding the extent of DNA bending and plasticity due to interactions with protein assemblies acting as DNA regulators. For the lac operon, two classes of structures have been proposed: 1) with the protein tetramer lying away from the DNA loop (wrapped-away model); and 2) with the protein tetramer lying inside the DNA loop (wrapped-around model). A recently developed electrostatic analytical model shows that the size and net charge of the Lac protein tetramer allow the bending of DNA, which is consistent with another wrapped-around model from the literature. Coarse-grained models, designed based on this observation, are extensively investigated and show three kinds of wrapped-around arrangements of DNA and a lower propensity for wrapped-away configurations. Molecular dynamics simulations of an all-atom model, built on the basis of the most tightly collapsed coarse-grained model, show that most of the DNA double-helical architecture is maintained in the region between the O3 and O1 DNA operators, that the DNA distortion is concentrated in the chain beyond the O1 operator, and that the protein tetramer can adapt the N-terminal domains to the DNA tension.
3D model of amphioxus steroid receptor complexed with estradiol
Baker, Michael E.; Chang, David J.
2009-08-28
The origins of signaling by vertebrate steroids are not fully understood. An important advance was the report that an estrogen-binding steroid receptor [SR] is present in amphioxus, a basal chordate with a body plan similar to that of vertebrates. To investigate the evolution of estrogen-binding to steroid receptors, we constructed a 3D model of amphioxus SR complexed with estradiol. This 3D model indicates that although the SR is activated by estradiol, some interactions between estradiol and human ERα are not conserved in the SR, which can explain the low affinity of estradiol for the SR. These differences between the SR and ERα in the steroid-binding domain are sufficient to suggest that another steroid is the physiological regulator of the SR. The 3D model predicts that mutation of Glu-346 to Gln will increase the affinity of testosterone for amphioxus SR and elucidate the evolution of steroid-binding to nuclear receptors.
A two-level complex network model and its application
NASA Astrophysics Data System (ADS)
Yang, Jianmei; Wang, Wenjie; Chen, Guanrong
2009-06-01
This paper investigates the competitive relationship and rivalry of industrial markets, using Chinese household electrical appliance firms as a platform for the study. Common complex network models are one-level networks in a layered classification, whereas this paper formulates and evaluates a new two-level network model, in which the first level is the whole unweighted, undirected network, useful for macro-analyzing the industrial market structure, while the second level is a local weighted, directed network capable of micro-analyzing the inter-firm rivalry in the market. It is believed that the relationship is determined by objective factors whereas the action is rather subjective; the idea of this paper is that the objective relationship and the subjective action subject to it are considered simultaneously, but at different levels of the model, an approach that may be applicable to many real situations.
A computational model for cancer growth by using complex networks
NASA Astrophysics Data System (ADS)
Galvão, Viviane; Miranda, José G. V.
2008-09-01
In this work we propose a computational model to investigate the proliferation of cancerous cells by using complex networks. In our model the network represents the structure of available space in the cancer propagation. The computational scheme considers a cancerous cell randomly included in the complex network. As the system evolves, the cells can assume three states: proliferative, non-proliferative, and necrotic. Our results were compared with experimental data obtained from three human lung carcinoma cell lines. The computational simulations show that the cancerous cells have a Gompertzian growth. Also, our model simulates the formation of necrosis, increase of density, and diffusion of resources to regions of lower nutrient concentration. We find that cancer growth is very similar in random and small-world networks. On the other hand, the topological structure of the small-world network is more affected. The scale-free network has the largest rates of cancer growth due to hub formation. Finally, our results indicate that for different average degrees the rate of cancer growth is related to the available space in the network.
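The Gompertzian growth curve that the simulations reproduce can be written down directly. This minimal sketch simply evaluates the curve; the initial size, carrying capacity, and deceleration rate are chosen arbitrarily for illustration.

```python
import math

def gompertz_volume(t, v0=1.0, k_max=1e6, alpha=0.05):
    """Gompertzian growth: rapid early growth that decelerates and
    saturates at the carrying capacity k_max. v0 is the initial tumour
    size (cells) and alpha sets the deceleration rate. All parameter
    values here are illustrative, not fitted to the carcinoma data."""
    return k_max * math.exp(math.log(v0 / k_max) * math.exp(-alpha * t))

# The curve rises monotonically from v0 toward k_max.
sizes = [gompertz_volume(t) for t in (0, 50, 200, 1000)]
```

In the network model, the analogous saturation emerges from the finite available space rather than from a closed-form law.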
Multiagent model and mean field theory of complex auction dynamics
NASA Astrophysics Data System (ADS)
Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng
2015-09-01
Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA) systems, which is a recently emerged class of online auction game systems. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of winner’s attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
Preconditioning the bidomain model with almost linear complexity
NASA Astrophysics Data System (ADS)
Pierre, Charles
2012-01-01
The bidomain model is widely used in electro-cardiology to simulate spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction diffusion equations coupled with an ODE system. Its discretisation displays an ill-conditioned system matrix to be inverted at each time step: simulations based on the bidomain model therefore are associated with high computational costs. In this paper we propose a preconditioning for the bidomain model either for an isolated heart or in an extended framework including a coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system together with a heuristic approximation (referred to as the monodomain approximation) are the key ingredients for the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, and a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e. of type n log^α(n) for some constant α, which is the optimal complexity for such problems.
Troposphere-lower-stratosphere connection in an intermediate complexity model.
NASA Astrophysics Data System (ADS)
Ruggieri, Paolo; King, Martin; Kucharski, Fred; Buizza, Roberto; Visconti, Guido
2016-04-01
The dynamical coupling between the troposphere and the lower stratosphere has been investigated using a low-top, intermediate complexity model provided by the Abdus Salam International Centre for Theoretical Physics (SPEEDY). The key question that we wanted to address is whether a simple model like SPEEDY can be used to understand troposphere-stratosphere interactions, e.g. forced by changes of sea-ice concentration in polar arctic regions. Three sets of experiments have been performed. Firstly, a potential vorticity perspective has been applied to understand the wave-like forcing of the troposphere on the stratosphere and to provide quantitative information on the sub-seasonal variability of the coupling. Then, the zonally asymmetric, near-surface response to a lower-stratospheric forcing has been analysed in a set of forced experiments with an artificial heating imposed in the extra-tropical lower stratosphere. Finally, the lower-stratosphere response sensitivity to tropospheric initial conditions has been examined. Results show that SPEEDY captures the physics of the troposphere-stratosphere connection but also reveal a lack of stratospheric variability. Results also suggest that intermediate-complexity models such as SPEEDY could be used to investigate the effects that surface forcing (e.g. due to sea-ice concentration changes) has on the troposphere and the lower stratosphere.
NMR-derived model for a peptide-antibody complex
Zilber, B.; Scherf, T.; Anglister, J.; Levitt, M.
1990-10-01
The TE34 monoclonal antibody against cholera toxin peptide 3 (CTP3; VEVPGSQHIDSQKKA) was sequenced and investigated by two-dimensional transferred NOE difference spectroscopy and molecular modeling. The VH sequence of TE34, which does not bind cholera toxin, shares remarkable homology to that of TE32 and TE33, which are both anti-CTP3 antibodies that bind the toxin. However, due to a shortened heavy chain CDR3, TE34 assumes a radically different combining site structure. The assignment of the combining site interactions to specific peptide residues was completed by use of AcIDSQRKA, a truncated peptide analogue in which lysine-13 was substituted by arginine, specific deuteration of individual polypeptide chains of the antibody, and a computer model for the Fv fragment of TE34. NMR-derived distance restraints were then applied to the calculated model of the Fv to generate a three-dimensional structure of the TE34/CTP3 complex. The combining site was found to be a very hydrophobic cavity composed of seven aromatic residues. Charged residues are found in the periphery of the combining site. The peptide residues HIDSQKKA form a β-turn inside the combining site. The contact area between the peptide and the TE34 antibody is 388 Å², about half of the contact area observed in protein-antibody complexes.
Bloch-Redfield equations for modeling light-harvesting complexes.
Jeske, Jan; Ing, David J; Plenio, Martin B; Huelga, Susana F; Cole, Jared H
2015-02-14
We challenge the misconception that Bloch-Redfield equations are a less powerful tool than phenomenological Lindblad equations for modeling exciton transport in photosynthetic complexes. This view predominantly originates from an indiscriminate use of the secular approximation. We provide a detailed description of how to model both coherent oscillations and several types of noise, giving explicit examples. All issues with non-positivity are overcome by a consistent straightforward physical noise model. Herein also lies the strength of the Bloch-Redfield approach because it facilitates the analysis of noise-effects by linking them back to physical parameters of the noise environment. This includes temporal and spatial correlations and the strength and type of interaction between the noise and the system of interest. Finally, we analyze a prototypical dimer system as well as a 7-site Fenna-Matthews-Olson complex in regards to spatial correlation length of the noise, noise strength, temperature, and their connection to the transfer time and transfer probability.
An ice sheet model of reduced complexity for paleoclimate studies
NASA Astrophysics Data System (ADS)
Neff, Basil; Born, Andreas; Stocker, Thomas F.
2016-04-01
IceBern2D is a vertically integrated ice sheet model to investigate the ice distribution on long timescales under different climatic conditions. It is forced by simulated fields of surface temperature and precipitation of the Last Glacial Maximum and present-day climate from a comprehensive climate model. This constant forcing is adjusted to changes in ice elevation. Due to its reduced complexity and computational efficiency, the model is well suited for extensive sensitivity studies and ensemble simulations on extensive temporal and spatial scales. It shows good quantitative agreement with standardized benchmarks on an artificial domain (EISMINT). Present-day and Last Glacial Maximum ice distributions in the Northern Hemisphere are also simulated with good agreement. Glacial ice volume in Eurasia is underestimated due to the lack of ice shelves in our model. The efficiency of the model is utilized by running an ensemble of 400 simulations with perturbed model parameters and two different estimates of the climate at the Last Glacial Maximum. The sensitivity to the imposed climate boundary conditions and the positive degree-day factor β, i.e., the surface mass balance, outweighs the influence of parameters that disturb the flow of ice. This justifies the use of simplified dynamics as a means to achieve computational efficiency for simulations that cover several glacial cycles. Hysteresis simulations over 5 million years illustrate the stability of the simulated ice sheets to variations in surface air temperature.
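A positive degree-day scheme of the kind controlled by the factor β can be sketched in a few lines. The accumulation rule (precipitation counted on days at or below 0 °C) and the β value below are simplified assumptions for illustration, not the model's actual parameterization.

```python
def pdd_surface_mass_balance(daily_temps_c, daily_precip_mwe, beta=0.008):
    """Annual surface mass balance (m water equivalent) from a positive
    degree-day scheme: accumulation is precipitation falling on days at
    or below 0 degC; melt is beta times the sum of positive daily
    temperatures (the degree-day sum). beta is an illustrative value;
    the study varies it as a key ensemble parameter."""
    pdd = sum(t for t in daily_temps_c if t > 0)
    accumulation = sum(p for t, p in zip(daily_temps_c, daily_precip_mwe)
                       if t <= 0)
    return accumulation - beta * pdd

# A synthetic year: 200 cold days, 165 melt days, uniform precipitation.
temps = [-5.0] * 200 + [4.0] * 165
precip = [0.002] * 365
smb = pdd_surface_mass_balance(temps, precip)  # negative: net ablation
```

Because the whole surface mass balance hinges on β and the forcing fields, perturbing them dominates the ensemble spread, consistent with the sensitivity result reported above.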
When do evolutionary food web models generate complex networks?
Allhoff, Korinna T; Drossel, Barbara
2013-10-07
Evolutionary food web models are used to build food webs by the repeated addition of new species. Population dynamics leads to the extinction or establishment of a newly added species, and possibly to the extinction of other species. The food web structure that emerges after some time is a highly nontrivial result of the evolutionary and dynamical rules. We investigate the evolutionary food web model introduced by Loeuille and Loreau (2005), which characterizes species by their body mass as the only evolving trait. Our goal is to find the reasons behind the model's remarkable robustness and its capability to generate diverse and stable networks. In contrast to other evolutionary food web models, this model requires neither adaptive foraging nor allometric scaling of metabolic rates with body mass in order to produce complex networks that do not eventually collapse to trivial structures. Our study shows that this is essentially due to the fact that the difference in niche value between predator and prey as well as the feeding range are constrained so that they remain within narrow limits under evolution. Furthermore, competition between similar species is sufficiently strong, so that a trophic level can accommodate several species. We discuss the implications of these findings and argue that the conditions that stabilize other evolutionary food web models have similar effects because they also prevent the occurrence of extreme specialists or extreme generalists, which in general have a higher fitness than species with a moderate niche width.
Modeling the propagation of mobile phone virus under complex network.
Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei; Yao, Yu
2014-01-01
A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper the propagation of mobile phone viruses is modeled to understand how particular factors affect propagation and to design effective containment strategies to suppress such viruses. Two different propagation models of mobile phone viruses on complex networks are proposed. One describes the propagation of the user-tricking virus, and the other describes the propagation of the vulnerability-exploiting virus. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted. Through this analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively.
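A minimal discrete-time susceptible-infected spread on a contact network, in the spirit of (but much simpler than) the paper's epidemic models, might look as follows; the network, infection probability, and update rule are all illustrative assumptions.

```python
import random

def simulate_si_on_network(adj, seed_node, p_infect=0.9, steps=50, rng=None):
    """Discrete-time SI spread on a contact network: each step, every
    infected phone infects each susceptible neighbour independently
    with probability p_infect (a stand-in for, e.g., user-tricking
    transmission; parameters are illustrative). adj maps node ->
    list of neighbours. Returns the final infected set."""
    rng = rng or random.Random(0)
    infected = {seed_node}
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < p_infect:
                    new.add(v)
        if not new:          # epidemic has stalled
            break
        infected |= new
    return infected

# Small ring network of 10 phones: infection spreads outward from node 0.
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
final = simulate_si_on_network(ring, 0)
```

The paper's models additionally include recovery/immunization classes and degree-dependent contact rates, which is what makes an infection-free equilibrium and its stability condition meaningful.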
Surface complexation model of uranyl sorption on Georgia kaolinite
Payne, T.E.; Davis, J.A.; Lumpkin, G.R.; Chisari, R.; Waite, T.D.
2004-01-01
The adsorption of uranyl on standard Georgia kaolinites (KGa-1 and KGa-1B) was studied as a function of pH (3-10), total U (1 and 10 μmol/l), and mass loading of clay (4 and 40 g/l). The uptake of uranyl in air-equilibrated systems increased with pH and reached a maximum in the near-neutral pH range. At higher pH values, the sorption decreased due to the presence of aqueous uranyl carbonate complexes. One kaolinite sample was examined after the uranyl uptake experiments by transmission electron microscopy (TEM), using energy dispersive X-ray spectroscopy (EDS) to determine the U content. It was found that uranium was preferentially adsorbed by Ti-rich impurity phases (predominantly anatase), which are present in the kaolinite samples. Uranyl sorption on the Georgia kaolinites was simulated with U sorption reactions on both titanol and aluminol sites, using a simple non-electrostatic surface complexation model (SCM). The relative amounts of U-binding >TiOH and >AlOH sites were estimated from the TEM/EDS results. A ternary uranyl carbonate complex on the titanol site improved the fit to the experimental data in the higher pH range. The final model contained only three optimised log K values, and was able to simulate adsorption data across a wide range of experimental conditions. The >TiOH (anatase) sites appear to play an important role in retaining U at low uranyl concentrations. As kaolinite often contains trace TiO2, its presence may need to be taken into account when modelling the results of sorption experiments with radionuclides or trace metals on kaolinite.
Model Complexity in Diffusion Modeling: Benefits of Making the Model More Parsimonious
Lerche, Veronika; Voss, Andreas
2016-01-01
The diffusion model (Ratcliff, 1978) takes into account the reaction time distributions of both correct and erroneous responses from binary decision tasks. This high degree of information usage allows the estimation of different parameters mapping cognitive components such as speed of information accumulation or decision bias. For three of the four main parameters (drift rate, starting point, and non-decision time) trial-to-trial variability is allowed. We investigated the influence of these variability parameters both drawing on simulation studies and on data from an empirical test-retest study using different optimization criteria and different trial numbers. Our results suggest that less complex models (fixing intertrial variabilities of the drift rate and the starting point at zero) can improve the estimation of the psychologically most interesting parameters (drift rate, threshold separation, starting point, and non-decision time). PMID:27679585
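The simplified diffusion model the authors recommend (intertrial variabilities fixed at zero) can be simulated per trial by an Euler-Maruyama random walk between two absorbing boundaries; the parameter values below are arbitrary illustrations, not fitted estimates.

```python
import random

def simulate_ddm_trial(v=0.25, a=1.0, z=0.5, t0=0.3, dt=0.001, rng=None):
    """One trial of a basic diffusion model via Euler-Maruyama: evidence
    starts at z*a, drifts at rate v with unit diffusion, and a response
    is given when it crosses 0 (error) or a (correct). Returns
    (choice, reaction_time); t0 is the non-decision time. This is the
    simplified model without trial-to-trial variability."""
    rng = rng or random.Random(0)
    x = z * a          # relative starting point
    t = 0.0
    sigma = 1.0        # diffusion coefficient (scaling convention)
    while 0.0 < x < a:
        x += v * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    choice = 1 if x >= a else 0
    return choice, t0 + t

rng = random.Random(42)
trials = [simulate_ddm_trial(rng=rng) for _ in range(200)]
upper_rate = sum(c for c, _ in trials) / len(trials)  # > 0.5 for v > 0
```

Fitting then amounts to choosing v, a, z, and t0 so that the simulated choice proportions and RT distributions match the observed ones, which is exactly where fewer free parameters help stability.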
Modeling and Visualizing Flow of Chemical Agents Across Complex Terrain
NASA Technical Reports Server (NTRS)
Kao, David; Kramer, Marc; Chaderjian, Neal
2005-01-01
Release of chemical agents across complex terrain presents a real threat to homeland security. Modeling and visualization tools are being developed that capture fluid flow-terrain interaction as well as point dispersal along downstream flow paths. These analytic tools, when coupled with UAV atmospheric observations, provide predictive capabilities to allow for rapid emergency response as well as for developing a comprehensive preemptive counter-threat evacuation plan. The visualization tools involve high-end computing and massively parallel processing combined with texture mapping. We demonstrate our approach across a mountainous portion of Northern California under two contrasting meteorological conditions. Animations depicting flow over this geographical location provide immediate assistance in decision support and crisis management.
The Eemian climate simulated by two models of different complexities
NASA Astrophysics Data System (ADS)
Nikolova, Irina; Yin, Qiuzhen; Berger, Andre; Singh, Umesh; Karami, Pasha
2013-04-01
The Eemian period, also known as MIS-5, experienced a climate warmer than today's, reduced ice sheets and significant sea-level rise. These features make the Eemian well suited for evaluating climate models forced with astronomical and greenhouse gas forcings different from today's. In this work, we present the Eemian climate simulated by two climate models of different complexities, LOVECLIM (LLN Earth system model of intermediate complexity) and CCSM3 (NCAR atmosphere-ocean general circulation model). Feedbacks from sea ice, vegetation, monsoon and ENSO phenomena are discussed to explain the regional similarities/dissimilarities in both models with respect to the pre-industrial (PI) climate. Significant warming (cooling) over almost all the continents during boreal summer (winter) leads to a largely increased (reduced) seasonal contrast in the northern (southern) hemisphere, mainly due to the much higher (lower) insolation received by the whole Earth in boreal summer (winter). The Arctic is warmer than at PI through the whole year, resulting from its much higher summer insolation and its remnant effect in the following fall-winter through the interactions between atmosphere, ocean and sea ice. Regional discrepancies exist in the sea-ice formation zones between the two models. Excessive sea-ice formation in CCSM3 results in intense regional cooling. In both models the intensified African monsoon and the vegetation feedback are responsible for the cooling during summer in North Africa and on the Arabian Peninsula. Over India the precipitation maximum is found further west, while in Africa the precipitation maximum migrates further north. Trees and grassland expand north in the Sahel/Sahara, trees being more abundant in the results from LOVECLIM than from CCSM3. A mix of forest and grassland occupies the continents and expands deep into the high northern latitudes, in line with proxy records. Desert areas reduce significantly in the Northern Hemisphere, but increase in North
NASA Astrophysics Data System (ADS)
Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang
2015-12-01
Groundwater modelers face the challenge of how to assign representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort into data collection for models with a finer resolution of heterogeneities, there is little guidance on how much data is required to justify a certain level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with increasing amount of available data by constructing a model confusion matrix. This matrix indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.
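The trade-off behind Bayesian model selection can be illustrated with the BIC, a large-sample surrogate for the (negative log) model evidence: a more complex parameterization is only justified if its likelihood gain outweighs the complexity penalty. The log-likelihood and parameter-count values below are invented for illustration, not results from the sandbox study.

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion (lower is better): more parameters
    must buy enough extra fit to offset the log(n)-scaled penalty."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fits of four parameterizations to n drawdown observations.
# The numbers are invented so that the mid-complexity model wins.
n = 50
candidates = {
    "homogeneous":    bic(-120.0, 1, n),   # too rigid: poor fit
    "zonation":       bic(-95.0, 4, n),    # good fit, few parameters
    "pilot points":   bic(-92.0, 25, n),   # small gain, large penalty
    "geostatistical": bic(-90.0, 60, n),   # penalty overwhelms the gain
}
best = min(candidates, key=candidates.get)  # -> "zonation"
```

The study itself uses full Bayesian model evidence rather than the BIC approximation, but the qualitative lesson is the same: with limited data, a geologically sensible zonation can out-rank a geostatistical random field.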
a Range Based Method for Complex Facade Modeling
NASA Astrophysics Data System (ADS)
Adami, A.; Fregonese, L.; Taffurelli, L.
2011-09-01
3D modelling of Architectural Heritage does not follow a single well-defined path; it proceeds through different algorithms and digital forms according to the shape complexity of the object, the main goal of the representation and the starting data. Even if the process starts from the same data, such as a point cloud acquired by laser scanner, there are different ways to realize a digital model. In particular we can choose between two different approaches: the mesh and the solid model. In the first case the complexity of architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the opposite case the 3D digital model can be realized by the use of simple geometrical shapes, sweeping algorithms and Boolean operations. Obviously these two models are not the same, and each is characterized by peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm or the quasi-automatic modelling by known shapes) and the final results (a more detailed and complex mesh versus an approximate and simpler solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we suggest a semiautomatic process to build 3D digital models of the facades of complex architecture to be used, for example, in city models or in other large-scale representations. This way of modelling also guarantees small files to be published on the web or transmitted. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets which are surveyed by topography, and then filtered in order to obtain a well-controlled and homogeneous point cloud of
Complex Wall Boundary Conditions for Modeling Combustion in Catalytic Channels
NASA Astrophysics Data System (ADS)
Zhu, Huayang; Jackson, Gregory
2000-11-01
Monolith catalytic reactors for exothermic oxidation are being used in automobile exhaust clean-up and ultra-low emissions combustion systems. The reactors present a unique coupling between mass, heat, and momentum transport in a channel flow configuration. The use of porous catalytic coatings along the channel wall presents a complex boundary condition when modeled with the two-dimensional channel flow. This current work presents a 2-D transient model for predicting the performance of catalytic combustion systems for methane oxidation on Pd catalysts. The model solves the 2-D compressible transport equations for momentum, species, and energy, coupled with a porous washcoat model for the wall boundary conditions. A time-splitting algorithm is used to separate the stiff chemical reactions from the convective/diffusive equations for the channel flow. A detailed surface chemistry mechanism is incorporated for the catalytic wall model and is used to predict transient ignition and steady-state conversion of CH4-air flows in the catalytic reactor.
3-D physical modeling of a complex salt canopy
Wiley, R.W.; Sekharan, K.K.
1996-12-31
Recent drilling has confirmed both significant reservoir potential and the presence of commercial hydrocarbons below salt structures in the Gulf of Mexico. Obtaining definitive seismic images with standard processing schemes beneath these salt structures is very difficult if not impossible. Because of the complicated seismic behavior of these structures, full volume 3-D prestack depth migration is required. Unfortunately, carrying out the multitude of calculations needed to create a proper image requires the largest and fastest supercomputers and rather complex numerical algorithms. Furthermore, developing and testing the imaging algorithms is quite involved and requires appropriate test data sets. To better understand the problems and issues of subsalt imaging, Marathon Oil Company and Louisiana Land and Exploration Company contracted with the University of Houston's Allied Geophysical Laboratories (AGL) to construct a salt canopy physical model. The model is patterned after the SEG/EAEG Salt Model and is made from synthetic materials. It is a full three-dimensional model with an irregularly shaped, lateral salt structure embedded in five distinct sedimentary layers. The model was used to acquire a multi-offset 3-D marine-style survey. These data are being used to address problems of subsalt imaging. In addition to standard processing techniques, the authors investigate algorithms for multiple removal and prestack depth migration.
A subsurface model of the beaver meadow complex
NASA Astrophysics Data System (ADS)
Nash, C.; Grant, G.; Flinchum, B. A.; Lancaster, J.; Holbrook, W. S.; Davis, L. G.; Lewis, S.
2015-12-01
Wet meadows are a vital component of arid and semi-arid environments. These valley spanning, seasonally inundated wetlands provide critical habitat and refugia for wildlife, and may potentially mediate catchment-scale hydrology in otherwise "water challenged" landscapes. In the last 150 years, these meadows have begun incising rapidly, causing the wetlands to drain and much of the ecological benefit to be lost. The mechanisms driving this incision are poorly understood, with proposed means ranging from cattle grazing to climate change, to the removal of beaver. There is considerable interest in identifying cost-effective strategies to restore the hydrologic and ecological conditions of these meadows at a meaningful scale, but effective process based restoration first requires a thorough understanding of the constructional history of these ubiquitous features. There is emerging evidence to suggest that the North American beaver may have had a considerable role in shaping this landscape through the building of dams. This "beaver meadow complex hypothesis" posits that as beaver dams filled with fine-grained sediments, they became large wet meadows on which new dams, and new complexes, were formed, thereby aggrading valley bottoms. A pioneering study done in Yellowstone indicated that 32-50% of the alluvial sediment was deposited in ponded environments. The observed aggradation rates were highly heterogeneous, suggesting spatial variability in the depositional process - all consistent with the beaver meadow complex hypothesis (Polvi and Wohl, 2012). To expand on this initial work, we have probed deeper into these meadow complexes using a combination of geophysical techniques, coring methods and numerical modeling to create a 3-dimensional representation of the subsurface environments. This imaging has given us a unique view into the patterns and processes responsible for the landforms, and may shed further light on the role of beaver in shaping these landscapes.
Modeling Pedestrian's Conformity Violation Behavior: A Complex Network Based Approach
Zhou, Zhuping; Hu, Qizhou; Wang, Wei
2014-01-01
Pedestrian injuries and fatalities present a problem all over the world. Pedestrian conformity violation behaviors, which lead to many pedestrian crashes, are common phenomena at signalized intersections in China. The concepts and metrics of complex networks are applied to analyze the structural characteristics and evolution rules of the pedestrian network of conformity violation crossings. First, a network of pedestrians crossing the street is established, and the network's degree distributions are analyzed. Then, using the basic idea of the SI model, a spreading model of pedestrian illegal crossing behavior is proposed. Finally, through simulation analysis, pedestrians' illegal crossing behavior trends are obtained for different network structures and different spreading rates. Some conclusions are drawn: as the waiting time increases, more pedestrians will join in the violation crossing once the first pedestrian crosses on red. And pedestrians' conformity violation behavior will increase as the spreading rate increases. PMID:25530755
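The SI-style spreading step used in the paper can be sketched as follows. This toy is illustrative only: the ring topology, infection probability, and seed node are invented assumptions, not the authors' calibrated pedestrian network.

```python
import random

def si_spread(adj, seed, beta, steps, rng=None):
    """Discrete-time SI spreading: each step, every 'infected' (violating)
    node independently converts each susceptible neighbour with prob. beta."""
    rng = rng or random.Random(0)
    infected = {seed}
    history = [len(infected)]
    for _ in range(steps):
        new = {j for i in infected for j in adj[i]
               if j not in infected and rng.random() < beta}
        infected |= new
        history.append(len(infected))
    return history

# Toy network: ten waiting pedestrians in a ring; node 0 crosses on red first
adj = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
print(si_spread(adj, seed=0, beta=0.5, steps=20))
```

Because the SI model has no recovery, the infected count is monotone, which matches the paper's finding that violations only accumulate once the first crossing occurs.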
Industrial processing of complex fluids: Formulation and modeling
Scovel, J.C.; Bleasdale, S.; Forest, G.M.; Bechtel, S.
1997-08-01
The production of many important commercial materials involves the evolution of a complex fluid through a cooling phase into a hardened product. Textile fibers, high-strength fibers (KEVLAR, VECTRAN), plastics, chopped-fiber compounds, and fiber optic cable are such materials. Industry desires to replace experiments with on-line, real-time models of these processes. Solutions to these problems are not just a matter of technology transfer, but require a fundamental description and simulation of the processes. Goals of the project are to develop models that can be used to optimize macroscopic properties of the solid product, to identify sources of undesirable defects, and to seek boundary-temperature and flow-and-material controls to optimize desired properties.
Uncertainty quantification for quantum chemical models of complex reaction networks.
Proppe, Jonny; Husch, Tamara; Simm, Gregor N; Reiher, Markus
2016-12-22
For the quantitative understanding of complex chemical reaction mechanisms, it is, in general, necessary to accurately determine the corresponding free energy surface and to solve the resulting continuous-time reaction rate equations for a continuous state space. For a general (complex) reaction network, it is computationally hard to fulfill these two requirements. However, it is possible to approximately address these challenges in a physically consistent way. On the one hand, it may be sufficient to consider approximate free energies if a reliable uncertainty measure can be provided. On the other hand, a highly resolved time evolution may not be necessary to still determine quantitative fluxes in a reaction network if one is interested in specific time scales. In this paper, we present discrete-time kinetic simulations in discrete state space taking free energy uncertainties into account. The method builds upon thermo-chemical data obtained from electronic structure calculations in a condensed-phase model. Our kinetic approach supports the analysis of general reaction networks spanning multiple time scales, which is here demonstrated for the example of the formose reaction. An important application of our approach is the detection of regions in a reaction network which require further investigation, given the uncertainties introduced by both approximate electronic structure methods and kinetic models. Such cases can then be studied in greater detail with more sophisticated first-principles calculations and kinetic simulations.
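A minimal sketch of the idea of propagating free-energy uncertainty through kinetics (illustrative only, not the authors' method or data): Eyring rates for a toy A -> B -> C network, with the two activation free energies sampled from assumed Gaussian uncertainties of +/-2 kJ/mol.

```python
import math, random

R, T = 8.314, 298.15        # gas constant J/(mol K), temperature K
KBT_H = 6.21e12             # k_B*T/h at ~298 K, in 1/s (Eyring prefactor)

def eyring_rate(dG_act):
    """First-order rate constant from an activation free energy in J/mol."""
    return KBT_H * math.exp(-dG_act / (R * T))

def integrate(c0, barriers, dt, steps):
    """Explicit discrete-time kinetics for the toy network A -> B -> C."""
    a, b, c = c0
    k1, k2 = eyring_rate(barriers[0]), eyring_rate(barriers[1])
    for _ in range(steps):
        a, b, c = a - k1 * a * dt, b + (k1 * a - k2 * b) * dt, c + k2 * b * dt
    return a, b, c

# Propagate the assumed barrier uncertainty by naive sampling
rng = random.Random(1)
yields = [integrate((1.0, 0.0, 0.0),
                    (rng.gauss(80e3, 2e3), rng.gauss(85e3, 2e3)),
                    dt=1.0, steps=1000)[2]
          for _ in range(200)]
print(min(yields), max(yields))   # spread in product yield induced by the dG uncertainty
```

The spread between the extreme yields is the kind of signal the authors use to flag network regions that need more accurate electronic-structure data.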
Fish locomotion: insights from both simple and complex mechanical models
NASA Astrophysics Data System (ADS)
Lauder, George
2015-11-01
Fishes are well-known for their ability to swim and maneuver effectively in the water, and recent years have seen great progress in understanding the hydrodynamics of aquatic locomotion. But studying freely-swimming fishes is challenging due to difficulties in controlling fish behavior. Mechanical models of aquatic locomotion have many advantages over studying live animals, including the ability to manipulate and control individual structural or kinematic factors, easier measurement of forces and torques, and the ability to abstract complex animal designs into simpler components. Such simplifications, while not without their drawbacks, facilitate interpretation of how individual traits alter swimming performance and the discovery of underlying physical principles. In this presentation I will discuss the use of a variety of mechanical models for fish locomotion, ranging from simple flexing panels to complex biomimetic designs incorporating flexible, actively moved, fin rays on multiple fins. Mechanical devices have provided great insight into the dynamics of aquatic propulsion and, integrated with studies of locomotion in freely-swimming fishes, provide new insights into how fishes move through the water.
Mutual information model for link prediction in heterogeneous complex networks
Shakibian, Hadi; Moghadam Charkari, Nasrollah
2017-01-01
Recently, a number of meta-path based similarity indices like PathSim, HeteSim, and random walk have been proposed for link prediction in heterogeneous complex networks. However, these indices suffer from two major drawbacks. Firstly, they depend primarily on the connectivity degrees of node pairs without considering the further information provided by the given meta-path. Secondly, most of them require a single, usually symmetric, meta-path in advance, so employing a set of different meta-paths is not straightforward. To tackle these problems, we propose a mutual information model for link prediction in heterogeneous complex networks. The proposed model, called the Meta-path based Mutual Information Index (MMI), introduces meta-path based link entropy to estimate the link likelihood and can operate over a set of available meta-paths. This estimation measures the amount of information carried by the paths instead of measuring the amount of connectivity between the node pairs. The experimental results on a bibliography network show that MMI obtains high prediction accuracy compared with other popular similarity indices. PMID:28344326
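The information-theoretic intuition behind MMI (rare meta-path connections carry more information than raw connectivity counts) can be illustrated with a toy sketch. The self-information score below is a simplification, not the published MMI formula, and the Author-Paper edge list is invented.

```python
import math
from collections import defaultdict

def metapath_counts(edges):
    """Count ordered instances of the 2-hop Author-Paper-Author meta-path
    in a bipartite edge list of (author, paper) pairs."""
    by_paper = defaultdict(set)
    for a, p in edges:
        by_paper[p].add(a)
    counts = defaultdict(int)
    for authors in by_paper.values():
        for x in authors:
            for y in authors:
                if x != y:
                    counts[(x, y)] += 1
    return counts

def path_information(counts, x, y):
    """Self-information -log2(p) of the (x, y) meta-path frequency:
    rarer connections score higher, the intuition behind entropy-based indices."""
    total = sum(counts.values())
    c = counts.get((x, y), 0)
    return float("inf") if c == 0 else -math.log2(c / total)

edges = [("a1", "p1"), ("a2", "p1"), ("a2", "p2"), ("a3", "p2"), ("a1", "p2")]
counts = metapath_counts(edges)
print(path_information(counts, "a1", "a2"))
```

A degree-based index would score a1-a2 purely by how many co-authorships they have; the information view also accounts for how common such connections are across the whole network.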
Surface Complexation Modelling in Metal-Mineral-Bacteria Systems
NASA Astrophysics Data System (ADS)
Johnson, K. J.; Fein, J. B.
2002-12-01
The reactive surfaces of bacteria and minerals can determine the fate, transport, and bioavailability of aqueous heavy metal cations. Geochemical models are instrumental in accurately accounting for the partitioning of the metals between mineral surfaces and bacteria cell walls. Previous research has shown that surface complexation modelling (SCM) is accurate in two-component systems (metal:mineral and metal:bacteria); however, the ability of SCMs to account for metal distribution in mixed metal-mineral-bacteria systems has not been tested. In this study, we measure aqueous Cd distributions in water-bacteria-mineral systems, and compare these observations with predicted distributions based on a surface complexation modelling approach. We measured Cd adsorption in 2- and 3-component batch adsorption experiments. In the 2-component experiments, we measured the extent of adsorption of 10 ppm aqueous Cd onto either a bacterial or hydrous ferric oxide sorbent. The metal:bacteria experiments contained 1 g/L (wet wt.) of B. subtilis, and were conducted as a function of pH; the metal:mineral experiments were conducted as a function of both pH and HFO content. Two types of 3-component Cd adsorption experiments were also conducted in which both mineral powder and bacteria were present as sorbents: 1) one in which the HFO was physically but not chemically isolated from the system using sealed dialysis tubing, and 2) others where the HFO, Cd and B. subtilis were all in physical contact. The dialysis tubing approach enabled the direct determination of the concentration of Cd on each sorbing surface, after separation and acidification of each sorbent. The experiments indicate that both bacteria and mineral surfaces can dominate adsorption in the system, depending on pH and bacteria:mineral ratio. The stability constants, determined using the data from the 2-component systems, along with those for other surface and aqueous species in the systems, were used with FITEQL to
A modeling process to understand complex system architectures
NASA Astrophysics Data System (ADS)
Robinson, Santiago Balestrini
2009-12-01
In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign level analysis tools, often characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) that the capability can be defined as a cycle of functions, and (2) that it is possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two
Coevolving complex networks in the model of social interactions
NASA Astrophysics Data System (ADS)
Raducha, Tomasz; Gubiec, Tomasz
2017-04-01
We analyze Axelrod's model of social interactions on coevolving complex networks. We introduce four extensions with different mechanisms of edge rewiring. The models are intended to capture two kinds of interactions: preferential attachment, which can be observed in scientists' or actors' collaborations, and local rewiring, which can be observed in friendship formation in everyday relations. Numerical simulations show that the proposed dynamics can lead to a power-law distribution of node degrees and a high value of the clustering coefficient, while still retaining the small-world effect in three models. All models are characterized by two phase transitions of a different nature. In the case of local rewiring we obtain an order-disorder discontinuous phase transition even in the thermodynamic limit, while in the case of long-distance switching the discontinuity disappears in the thermodynamic limit, leaving one continuous phase transition. In addition, we discover a new and universal characteristic of the second transition point: an abrupt increase of the clustering coefficient, due to the formation of many small complete subgraphs inside the network.
Lupus Nephritis: Animal Modeling of a Complex Disease Syndrome Pathology
McGaha, Tracy L; Madaio, Michael P.
2014-01-01
Nephritis as a result of autoimmunity is a common morbidity associated with Systemic Lupus Erythematosus (SLE). There is substantial clinical and industry interest in medicinal intervention in the SLE nephritic process; however, clinical trials to specifically treat lupus nephritis have not resulted in complete and sustained remission in all patients. Multiple mouse models have been used to investigate the pathologic interactions between autoimmune reactivity and SLE pathology. While several models bear a remarkable similarity to SLE-driven nephritis, each has limitations that can make choosing the appropriate model for a particular aspect of SLE pathology challenging. This is not surprising given the variable and diverse nature of the human disease. In many respects, features among murine strains mimic some (but never all) of the autoimmune and pathologic features of lupus patients. Although this diversity often limits universal conclusions relevant to pathogenesis, the models provide insights into the complex processes that result in phenotypic manifestations of nephritis. Thus nephritis represents a microcosm of systemic disease, with variable lesions and clinical features. In this review, we discuss some of the most commonly used models of lupus nephritis (LN) and immune-mediated glomerular damage, examining their relative strengths and weaknesses, which may provide insight into the human condition. PMID:25722732
Wind Power Curve Modeling in Simple and Complex Terrain
Bulaevskaya, V.; Wharton, S.; Irons, Z.; Qualley, G.
2015-02-09
Our previous work on wind power curve modeling using statistical models focused on a location with a moderately complex terrain in the Altamont Pass region in northern California (CA). The work described here is the follow-up to that work, but at a location with a simple terrain in northern Oklahoma (OK). The goal of the present analysis was to determine the gain in predictive ability afforded by adding information beyond the hub-height wind speed, such as wind speeds at other heights, as well as other atmospheric variables, to the power prediction model at this new location and compare the results to those obtained at the CA site in the previous study. While we reach some of the same conclusions at both sites, many results reported for the CA site do not hold at the OK site. In particular, using the entire vertical profile of wind speeds improves the accuracy of wind power prediction relative to using the hub-height wind speed alone at both sites. However, in contrast to the CA site, the rotor equivalent wind speed (REWS) performs almost as well as the entire profile at the OK site. Another difference is that at the CA site, adding wind veer as a predictor significantly improved the power prediction accuracy. The same was true for that site when air density was added to the model separately instead of using the standard air density adjustment. At the OK site, these additional variables result in no significant benefit for the prediction accuracy.
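The rotor equivalent wind speed (REWS) mentioned above is, in the usual IEC-style definition, the cube root of the area-weighted mean of cubed speeds over the rotor disk. A sketch with invented hub height, rotor radius, and measurement heights:

```python
import math

def rotor_segment_areas(hub_height, radius, heights):
    """Disk area of each horizontal rotor slice, with slice boundaries
    halfway between successive measurement heights."""
    bounds = ([hub_height - radius]
              + [(h1 + h2) / 2.0 for h1, h2 in zip(heights, heights[1:])]
              + [hub_height + radius])
    def area_below(z):                    # rotor-disk area below height z
        y = max(-radius, min(radius, z - hub_height))
        return (radius**2 * (math.asin(y / radius) + math.pi / 2.0)
                + y * math.sqrt(radius**2 - y**2))
    return [area_below(b2) - area_below(b1) for b1, b2 in zip(bounds, bounds[1:])]

def rews(heights, speeds, hub_height, radius):
    """Rotor-equivalent wind speed: cube root of the area-weighted mean
    of the cubed speeds over the rotor disk."""
    areas = rotor_segment_areas(hub_height, radius, heights)
    weighted = sum(a * v**3 for a, v in zip(areas, speeds))
    return (weighted / sum(areas)) ** (1.0 / 3.0)

# Hypothetical 80 m hub, 40 m rotor radius, three measurement heights
print(rews([50.0, 80.0, 110.0], [7.0, 8.0, 9.0], hub_height=80.0, radius=40.0))
```

The cubing matters because power scales with the cube of wind speed, so shear across the rotor shifts REWS away from the hub-height speed.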
Complex Geometry Creation and Turbulent Conjugate Heat Transfer Modeling
Bodey, Isaac T; Arimilli, Rao V; Freels, James D
2011-01-01
The multiphysics capabilities of COMSOL provide the necessary tools to simulate the turbulent thermal-fluid aspects of the High Flux Isotope Reactor (HFIR). Version 4.1, and later, of COMSOL provides three different turbulence models: the standard k-ε closure model, the low Reynolds number (LRN) k-ε model, and the Spalart-Allmaras model. The LRN model meets the nominal HFIR thermal-hydraulic requirements for 2D and 3D simulations. COMSOL also has the capability to create complex geometries. The circular involute fuel plates used in the HFIR require the use of algebraic equations to generate an accurate geometrical representation in the simulation environment. The best-estimate simulation results show that the maximum fuel plate clad surface temperatures are lower than those predicted by the legacy thermal safety code used at HFIR by approximately 17 K. The best-estimate temperature distribution determined by COMSOL was then used to determine the necessary increase in the magnitude of the power density profile (PDP) to produce a similar clad surface temperature as compared to the legacy thermal safety code. It was determined and verified that a 19% power increase was sufficient to bring the two temperature profiles into relatively good agreement.
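The circular involute mentioned above has the standard parametric form x = r(cos t + t sin t), y = r(sin t - t cos t); a short sketch of generating such a curve (the base-circle radius is an arbitrary placeholder, not an HFIR dimension):

```python
import math

def involute(base_radius, t):
    """Point on a circular involute at unwinding angle t (radians):
    x = r(cos t + t sin t), y = r(sin t - t cos t)."""
    x = base_radius * (math.cos(t) + t * math.sin(t))
    y = base_radius * (math.sin(t) - t * math.cos(t))
    return x, y

# Distance from the centre grows as r*sqrt(1 + t^2), a useful sanity check
r = 0.13                                 # hypothetical base-circle radius, m
outline = [involute(r, 0.1 * i) for i in range(31)]
print(outline[0], outline[-1])
```

Sampling the curve at many t values gives the point set from which a CAD or simulation geometry can be built.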
Modeling and minimizing CAPRI round 30 symmetrical protein complexes from CASP-11 structural models.
El Houasli, Marwa; Maigret, Bernard; Devignes, Marie-Dominique; Ghoorah, Anisah W; Grudinin, Sergei; Ritchie, David W
2017-03-01
Many of the modeling targets in the blind CASP-11/CAPRI-30 experiment were protein homo-dimers and homo-tetramers. Here, we perform a retrospective docking-based analysis of the perfectly symmetrical CAPRI Round 30 targets whose crystal structures have been published. Starting from the CASP "stage-2" fold prediction models, we show that using our recently developed "SAM" polar Fourier symmetry docking algorithm combined with NAMD energy minimization often gives acceptable or better 3D models of the target complexes. We also use SAM to analyze the overall quality of all CASP structural models for the selected targets from a docking-based perspective. We demonstrate that docking only CASP "center" structures for the selected targets provides a fruitful and economical docking strategy. Furthermore, our results show that many of the CASP models are dockable in the sense that they can lead to acceptable or better models of symmetrical complexes. Even though SAM is very fast, using docking and NAMD energy minimization to pull out acceptable docking models from a large ensemble of docked CASP models is computationally expensive. Nonetheless, thanks to our SAM docking algorithm, we expect that applying our docking protocol on a modern computer cluster will give us the ability to routinely model 3D structures of symmetrical protein complexes from CASP-quality models. Proteins 2017; 85:463-469. © 2016 Wiley Periodicals, Inc.
Water Balance Modelling - Does The Required Model Complexity Change With Scale?
NASA Astrophysics Data System (ADS)
Blöschl, G.; Merz, R.
An important issue in modelling the water balance of catchments is the suitable level of model complexity. Anecdotal evidence suggests that the model complexity required to model the water balance accurately decreases with catchment scale, but so far very few studies have quantified these possible effects. In this paper we examine the model performance as a function of catchment scale for a given model complexity, which allows us to infer whether the required model complexity changes with scale. We also examine whether the calibrated parameter values change with scale or are scale invariant. In a case study we analysed 700 catchments in Austria with catchment sizes ranging from 10 to 100 000 km2. 30 years of daily data (runoff, precipitation, air temperature, air humidity) were analysed. A spatially lumped, conceptual, HBV-style soil moisture accounting scheme was used which involved fifteen model parameters including snow processes. Five parameters were preset and ten parameters were calibrated on observed daily streamflow. The calibration period was about 10 years and the verification period was about 20 years. Model performance (in terms of Nash-Sutcliffe efficiency) was examined both for the calibration and the verification periods. The mean efficiency over all catchments only decreased slightly when moving from the calibration to the verification (from R2 = 0.65 to 0.60). The results suggest that the model efficiencies (both for the calibration and the verification) do not change with catchment scale for scales smaller than 10 000 km2, but beyond this scale there is a slight decrease in model performance. This means that for these very large scales, a spatial subdivision of the lumped model is needed to allow for spatial differences in rainfall. The results also suggest that the model parameters are not scale dependent. We conclude that the complexity required for water balance models of catchments does not change with scale for catchment sizes
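The Nash-Sutcliffe efficiency used to score the model can be computed directly; the observed/simulated series below are invented for illustration:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance-about-the-mean.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_mean = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_mean

obs = [3.1, 4.5, 2.8, 5.0, 3.9]   # e.g. observed daily streamflow, m3/s
sim = [3.0, 4.7, 3.0, 4.6, 4.0]   # simulated values
print(round(nash_sutcliffe(obs, sim), 3))
```

Values can go negative when the model is worse than simply predicting the observed mean, which is why efficiencies of 0.60-0.65 over 700 catchments are a meaningful result.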
Wind Tunnel Modeling Of Wind Flow Over Complex Terrain
NASA Astrophysics Data System (ADS)
Banks, D.; Cochran, B.
2010-12-01
This presentation will describe the findings of an atmospheric boundary layer (ABL) wind tunnel study conducted as part of the Bolund Experiment. This experiment was sponsored by Risø DTU (National Laboratory for Sustainable Energy, Technical University of Denmark) during the fall of 2009 to enable a blind comparison of various air flow models in an attempt to validate their performance in predicting airflow over complex terrain. Bolund hill sits 12 m above the water level at the end of a narrow isthmus. The island features a steep escarpment on one side, over which the airflow can be expected to separate. The island was equipped with several anemometer towers, and the approach flow over the water was well characterized. This study was one of only two physical model studies included in the blind model comparison, the other being a water plume study. The remainder were computational fluid dynamics (CFD) simulations, including both RANS and LES. Physical modeling of air flow over topographical features has been used since the middle of the 20th century, and the methods required are well understood and well documented. Several books have been written describing how to properly perform ABL wind tunnel studies, including ASCE manual of engineering practice 67. Boundary layer wind tunnel tests are the only modelling method deemed acceptable in ASCE 7-10, the most recent edition of the American Society of Civil Engineers standard that provides wind loads for buildings and other structures for building codes across the US. Since the 1970s, most tall structures undergo testing in a boundary layer wind tunnel to accurately determine the wind induced loading. When compared to CFD, the US EPA considers a properly executed wind tunnel study to be equivalent to a CFD model with infinitesimal grid resolution and near infinite memory. One key reason for this widespread acceptance is that properly executed ABL wind tunnel studies will accurately simulate flow separation
Modeling Cu2+-Aβ complexes from computational approaches
NASA Astrophysics Data System (ADS)
Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona
2015-09-01
Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox active Cu2+ metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu2+-Aβ coordination and to build plausible Cu2+-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.
A resistive force model for complex intrusion in granular media
NASA Astrophysics Data System (ADS)
Zhang, Tingnan; Li, Chen; Goldman, Daniel
2012-11-01
Intrusion forces in granular media (GM) are best understood for simple shapes (like disks and rods) undergoing vertical penetration and horizontal drag. Inspired by a resistive force theory for sand-swimming, we develop a new two-dimensional resistive force model for intruders of arbitrary shape and intrusion path into GM in the vertical plane. We divide an intruder of complex geometry into small segments and approximate segmental forces by measuring forces on small flat plates in experiments. Both lift and drag forces on the plates are proportional to penetration depth, and depend sensitively on the angle of attack and the direction of motion. Summation of segmental forces over the intruder predicts the net forces on a c-leg, a flat leg, and a reversed c-leg rotated into GM about a fixed axle. The stress profiles are similar for GM of different particle sizes, densities, coefficients of friction, and volume fractions. We propose a universal scaling law applicable to all tested GM. By combining the new force model with a multi-body simulator, we can also predict the locomotion dynamics of a small legged robot on GM. Our force laws can provide a strict test of hydrodynamic-like approaches to model dense granular flows. Also affiliated to: School of Physics, Georgia Institute of Technology.
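The summation-over-segments idea can be sketched generically. The linear-in-depth stress and the angular modulation below are stand-in assumptions chosen for illustration, not the empirically measured plate stresses of the study:

```python
import math

def segment_force(depth, area, attack, motion, k=2.0e5):
    """Resistive force (drag, lift) in N on one plate segment. Stress is
    taken proportional to depth; the angular factors are simple stand-ins
    for the measured dependence on attack angle and motion direction."""
    if depth <= 0:                       # segments above the surface feel nothing
        return 0.0, 0.0
    base = k * depth * area              # linear-in-depth granular stress * area
    drag = base * abs(math.sin(motion)) * (1.0 + 0.5 * math.cos(attack))
    lift = base * 0.3 * math.sin(2.0 * attack)
    return drag, lift

def net_force(segments, k=2.0e5):
    """Sum segmental drag and lift over a discretised intruder."""
    forces = [segment_force(z, a, b, g, k) for z, a, b, g in segments]
    return sum(f[0] for f in forces), sum(f[1] for f in forces)

# A flat plate split into four segments at increasing depth, moving vertically
plate = [(0.01 * i, 2.5e-4, math.radians(30), math.radians(90)) for i in range(4)]
print(net_force(plate))
```

Discretising a c-shaped or flat leg into such segments and summing is what lets the model predict net forces for arbitrary intruder geometries and rotation paths.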
Simulation and Processing Seismic Data in Complex Geological Models
NASA Astrophysics Data System (ADS)
Forestieri da Gama Rodrigues, S.; Moreira Lupinacci, W.; Martins de Assis, C. A.
2014-12-01
Seismic simulations in complex geological models are useful for exploring the limitations of seismic data. In this project, different geological models were designed to analyze some difficulties encountered in the interpretation of seismic data. Another goal is to make these data available to LENEP/UENF students for testing new tools to assist in seismic data processing. The geological models were created considering some characteristics found in oil exploration. We simulated geological media with volcanic intrusions, salt domes, faults, pinch-outs and layers far from the surface (Kanao, 2012). We used the software Tesseral Pro to simulate the seismic acquisitions. The acquisition geometries simulated were of the common offset, end-on and split-spread types. (Figure 1) Data acquired with constant offset require fewer processing routines. The processing flow used with tools available in the Seismic Unix package (for more details, see Pennington et al., 2005) was geometric spreading correction, deconvolution, attenuation correction and post-stack depth migration. In processing the data acquired with end-on and split-spread geometries, we included velocity analysis and NMO correction routines. Although we analyzed synthetic data and carefully applied each processing routine, we can observe some limitations of seismic reflection in imaging thin layers, layers at great depth, layers with low impedance contrast and faults.
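The NMO correction step in the processing flow rests on the flat-layer hyperbolic moveout equation t(x)^2 = t0^2 + x^2/v^2; a minimal sketch with invented acquisition parameters:

```python
import math

def nmo_traveltime(t0, offset, v):
    """Hyperbolic reflection traveltime for a flat layer:
    t(x) = sqrt(t0^2 + (x/v)^2), with t0 the zero-offset time."""
    return math.sqrt(t0**2 + (offset / v) ** 2)

def nmo_correction(t, offset, v):
    """Shift a recorded time back to its zero-offset equivalent t0
    (clamped to zero when the velocity is too low for the pick)."""
    t0_sq = t**2 - (offset / v) ** 2
    return math.sqrt(t0_sq) if t0_sq > 0.0 else 0.0

t0, v, x = 0.8, 2500.0, 1500.0           # s, m/s, m (illustrative values)
t = nmo_traveltime(t0, x, v)
print(t, nmo_correction(t, x, v))        # the correction recovers t0
```

Velocity analysis amounts to finding the v that flattens these hyperbolas across offsets, which is why it precedes NMO in the flow above.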
Deposition parameterizations for the Industrial Source Complex (ISC3) model
Wesely, Marvin L.; Doskey, Paul V.; Shannon, J. D.
2002-06-01
Improved algorithms have been developed to simulate the dry and wet deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex version 3 (ISC3) model system. The dry deposition velocities (downward flux divided by concentration at a specified height) of the gaseous HAPs are modeled with algorithms adapted from existing dry deposition modules. The dry deposition velocities are described in a conventional resistance scheme, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. The role of soil moisture variations in affecting the uptake of gases through vegetative plant leaf stomata is assessed with the relative available soil moisture, which is estimated with a rudimentary budget of soil moisture content. Some of the procedures and equations are simplified to be commensurate with the type and extent of information on atmospheric and surface conditions available to the ISC3 model system user. For example, standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed that provides a means of evaluating the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves.
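The resistance scheme described above combines resistances in series to give a deposition velocity, with surface uptake pathways acting in parallel. The resistance values below are illustrative placeholders, not ISC3 defaults:

```python
def deposition_velocity(r_a, r_b, r_c):
    """Dry deposition velocity (m/s) from the serial resistance analogy:
    aerodynamic (r_a), quasi-laminar boundary layer (r_b), and bulk
    surface (r_c) resistances, all in s/m."""
    return 1.0 / (r_a + r_b + r_c)

def surface_resistance(r_stom, r_cuticle, r_ground):
    """Bulk surface resistance from parallel uptake pathways (stomata,
    cuticle, ground), the structure used in resistance-type schemes."""
    return 1.0 / (1.0 / r_stom + 1.0 / r_cuticle + 1.0 / r_ground)

# Placeholder resistances for a vegetated surface in daytime
r_c = surface_resistance(r_stom=120.0, r_cuticle=2000.0, r_ground=500.0)
print(deposition_velocity(r_a=30.0, r_b=10.0, r_c=r_c))  # m/s
```

Soil moisture, irradiance, and temperature enter such schemes by modulating the individual pathway resistances, e.g. raising stomatal resistance under water stress.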
Neurocomputational Model of EEG Complexity during Mind Wandering
Ibáñez-Molina, Antonio J.; Iglesias-Parro, Sergio
2016-01-01
Mind wandering (MW) can be understood as a transient state in which attention drifts from an external task to internal self-generated thoughts. MW has been associated with activation of the Default Mode Network (DMN). In addition, it has been shown that the activity of the DMN is anti-correlated with activation in brain networks related to the processing of external events (e.g., the Salience Network, SN). In this study, we present a mean-field model based on weakly coupled Kuramoto oscillators. We simulated the oscillatory activity of the entire brain and explored the role of the interaction between nodes of the DMN and SN in MW states. External stimulation was added to the network model in two opposite conditions: stimuli could be presented when oscillators in the SN showed more internal coherence (synchrony) than those in the DMN, or, on the contrary, when the coherence in the SN was lower than in the DMN. The resulting phases of the oscillators were analyzed and used to simulate EEG signals. Our results showed that the structural complexity of both simulated and real EEG signals was higher when the model was stimulated during periods in which the DMN was more coherent than the SN. Overall, our results provide a plausible mechanistic explanation of MW as a state in which high coherence in the DMN partially suppresses the capacity of the system to process external stimuli. PMID:26973505
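The coherence measure at the heart of this model can be sketched in a few lines. The network size, coupling strength, and frequency spread below are illustrative assumptions, not the paper's fitted values; only one subnetwork is simulated, with all-to-all coupling:

```python
import numpy as np

def kuramoto_step(theta, omega, K, A, dt):
    """One Euler step of the Kuramoto model: dtheta_i/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i)."""
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return theta + dt * (omega + K * coupling)

def coherence(theta):
    """Kuramoto order parameter r = |<e^{i theta}>|, 0 (incoherent) to 1 (fully synchronized)."""
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 20                                # oscillators in the subnetwork (assumed size)
A = np.ones((n, n)) - np.eye(n)       # all-to-all coupling within the subnetwork
omega = rng.normal(0.0, 0.5, n)       # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)  # random initial phases

r0 = coherence(theta)
for _ in range(5000):
    theta = kuramoto_step(theta, omega, K=2.0 / n, A=A, dt=0.01)
print(r0, coherence(theta))  # coupling above the critical strength raises coherence
```

In the paper's setting, this order parameter would be computed separately for the DMN and SN node sets to decide when stimulation is delivered.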
Integrated modeling tool for performance engineering of complex computer systems
NASA Technical Reports Server (NTRS)
Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar
1989-01-01
This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.
Beyond pure parasystole: promises and problems in modeling complex arrhythmias.
Courtemanche, M; Glass, L; Rosengarten, M D; Goldberger, A L
1989-08-01
The dynamics of pure parasystole, a cardiac arrhythmia in which two competing pacemakers fire independently, have recently been fully characterized. This model is now extended in an attempt to account for the more complex dynamics occurring with modulated parasystole, in which there exists nonlinear interaction between the sinus node and the ectopic ventricular focus. Theoretical analysis of modulated parasystole reveals three types of dynamics: entrainment, quasiperiodicity, and chaos. Rhythms associated with quasiperiodicity obey a set of rules derived from pure parasystole. This model is applied to the interpretation of continuous electrocardiographic data sets from three patients with complicated patterns of ventricular ectopic activity. We describe several new statistical properties of these records, related to the number of intervening sinus beats between ectopic events, that are essential in characterizing the dynamics and testing mathematical models. Detailed comparison between data and theory in these cases show substantial areas of agreement as well as potentially important discrepancies. These findings have implications for understanding the dynamics of the heartbeat in normal and pathological conditions.
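The pure-parasystole case that this work extends can be simulated directly: two pacemakers fire independently, and an ectopic beat is expressed only outside the refractory period following a sinus beat. The periods and refractory time below are illustrative choices; the characterization cited in the abstract predicts at most three distinct values for the number of intervening sinus beats (NIB):

```python
import math

def parasystole_nib(t_sinus=1.0, t_ectopic=1.55, refractory=0.41, n_ectopic=200):
    """Pure parasystole: sinus and ectopic pacemakers fire independently; an
    ectopic beat is expressed only if it falls outside the refractory period
    that follows each sinus beat. Returns the numbers of intervening sinus
    beats (NIB) between consecutive expressed ectopic beats."""
    expressed = [k * t_ectopic for k in range(n_ectopic)
                 if (k * t_ectopic) % t_sinus >= refractory]
    return [math.floor(b / t_sinus) - math.floor(a / t_sinus)
            for a, b in zip(expressed, expressed[1:])]

nib = parasystole_nib()
# Theory for pure parasystole: at most three distinct NIB values
# for fixed pacemaker periods and refractory time.
print(sorted(set(nib)))
```

Modulated parasystole breaks this simple rule because the ectopic focus's cycle length is perturbed by each sinus beat, which is where the entrainment, quasiperiodic, and chaotic regimes described above arise.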
Evolution of complexity in a resource-based model
NASA Astrophysics Data System (ADS)
Fernández, Lenin; Campos, Paulo R. A.
2017-02-01
The evolution of organismal complexity is studied through a resource-based model. In the model, cells are characterized by their metabolic rates, which, together with the availability of resource, determine the rate at which they divide. The population is structured in groups. Groups are also autonomous entities regarding reproduction and propagation, and so they correspond to a higher level of biological organization. The model assumes reproductive altruism, as there exists a fitness transfer from the cell level to the group level. Reproductive altruism comes about by imposing a higher energetic cost on cells belonging to larger groups. On the other hand, larger groups are less prone to extinction. The strength of this benefit arising from group augmentation can be tuned by the synergistic parameter γ. Through extensive computer simulations we make a thorough exploration of the parameter space to find the domain in which the formation of larger groups is allowed. We show that the formation of small groups can be obtained for a low level of synergy. Larger group sizes can only be attained when synergistic interactions surpass a given strength. Although the total resource influx rate plays a key role in determining the number of groups coexisting at equilibrium, its role in driving group size is minor. On the other hand, how the resource is seized by the groups matters.
Grafted biomembranes containing membrane proteins--the case of the leucine transporter.
Jagalski, Vivien; Barker, Robert D; Thygesen, Mikkel B; Gotfryd, Kamil; Krüger, Mie B; Shi, Lei; Maric, Selma; Bovet, Nicolas; Moulin, Martine; Haertlein, Michael; Pomorski, Thomas Günther; Loland, Claus J; Cárdenas, Marité
2015-10-21
Here, we bind the sodium-dependent amino acid transporter LeuT to nitrilotriacetic acid/polyethylene glycol-functionalized gold sensors in detergent and perform a detergent-lipid exchange with phosphatidylcholine. We characterize the structure of LeuT in the adsorbed film by magnetic contrast neutron reflection, using the model predicted by molecular dynamics simulations.
Methods of Information Geometry to model complex shapes
NASA Astrophysics Data System (ADS)
De Sanctis, A.; Gattone, S. A.
2016-09-01
In this paper, a new statistical method to model patterns emerging in complex systems is proposed. A framework for shape analysis of 2-dimensional landmark data is introduced, in which each landmark is represented by a bivariate Gaussian distribution. From Information Geometry we know that the Fisher-Rao metric endows the statistical manifold of parameters of a family of probability distributions with a Riemannian metric. This approach thus allows one to reconstruct the intermediate steps in the evolution between observed shapes by computing the geodesic, with respect to the Fisher-Rao metric, between the corresponding distributions. Furthermore, the geodesic path can be used for shape prediction. As an application, we study the evolution of the rat skull shape. A future application in Ophthalmology is introduced.
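For intuition, the Fisher-Rao distance has a closed form in the simpler univariate Gaussian case (the paper works with bivariate Gaussians per landmark): under the Fisher metric ds² = dμ²/σ² + 2dσ²/σ², the parameter half-plane is a scaled Poincaré half-plane, so geodesic distances reduce to hyperbolic ones. This is a standard identity, not code from the paper:

```python
import math

def fisher_rao_gaussian(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao geodesic distance between two univariate Gaussians.
    The map (mu, sigma) -> (mu / sqrt(2), sigma) sends the Fisher metric to
    2x the Poincare half-plane metric, so the distance is sqrt(2) times the
    hyperbolic distance between the mapped points."""
    x1, x2 = mu1 / math.sqrt(2), mu2 / math.sqrt(2)
    d2 = (x1 - x2) ** 2 + (sigma1 - sigma2) ** 2
    return math.sqrt(2) * math.acosh(1.0 + d2 / (2.0 * sigma1 * sigma2))

# Equal means: the distance reduces to sqrt(2) * |ln(sigma2 / sigma1)|.
print(fisher_rao_gaussian(0.0, 1.0, 0.0, math.e))
```

Interpolating shapes along the geodesic then amounts to following the corresponding hyperbolic geodesic arc between the two parameter points.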
Kajiwara, Tsuyoshi; Sasaki, Toru; Takeuchi, Yasuhiro
2015-02-01
We present a constructive method for Lyapunov functions for ordinary differential equation models of infectious diseases in vivo. We consider models derived from the Nowak-Bangham models. We construct Lyapunov functions for complex models from those of simpler models. In particular, we construct Lyapunov functions for models with an immune variable from those for models without one, and a Lyapunov function for a model with an absorption effect from that for a model without it. We clarify the construction for Lyapunov functions proposed previously, and present new results obtained with our method.
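For orientation, the simplest member of the Nowak-Bangham family and its standard Volterra-type Lyapunov function can be written out; this is the textbook starting point for such constructions, not the paper's new result, and the notation is illustrative:

```latex
% Basic in-host model: target cells x, infected cells y, free virus v.
\begin{align}
  \dot{x} &= \lambda - d\,x - \beta x v, \\
  \dot{y} &= \beta x v - a\,y, \\
  \dot{v} &= k\,y - u\,v.
\end{align}
% Volterra-type Lyapunov function at the infected equilibrium (x^*, y^*, v^*):
\begin{equation}
  V(x,y,v) = \Bigl(x - x^* - x^*\ln\frac{x}{x^*}\Bigr)
           + \Bigl(y - y^* - y^*\ln\frac{y}{y^*}\Bigr)
           + \frac{a}{k}\Bigl(v - v^* - v^*\ln\frac{v}{v^*}\Bigr),
\end{equation}
% for which \dot{V} \le 0 along trajectories.
```

The constructive method of the paper extends functions of this form to models with additional immune variables or absorption terms.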
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via direct acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied
Complex events in a fault model with interacting asperities
NASA Astrophysics Data System (ADS)
Dragoni, Michele; Tallarico, Andrea
2016-08-01
The dynamics of a fault with heterogeneous friction is studied by employing a discrete fault model with two asperities of different strengths. The average values of stress, friction and slip on each asperity are considered, and the state of the fault is described by the slip deficits of the asperities as functions of time. The fault has three different slipping modes, corresponding to the asperities slipping one at a time or simultaneously. Any seismic event produced by the fault is a sequence of n slipping modes. Depending on initial conditions, seismic events can be different sequences of slipping modes, implying different moment rates and seismic moments. Each event can be represented geometrically in the state space by an orbit that is the union of n damped Lissajous curves. We focus our interest on events that are sequences of two or more slipping modes: they show a complex stress interchange between the asperities and a complex temporal pattern of slip rate. The initial stress distribution producing these events is not uniform on the fault. We calculate the stress drop, the moment rate and the frequency spectrum of the events, showing how these quantities depend on initial conditions. These events have the greatest seismic moments that can be produced by fault slip. As an example, we model the moment rate of the 1992 Landers, California, earthquake, which can be described as the consecutive failure of two asperities, one of which has double the strength of the other, and evaluate the evolution of the stress distribution on the fault during the event.
Dynamical complexity in the perception-based network formation model
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Moon, Eunyoung
2016-12-01
Many link formation mechanisms for the evolution of social networks have been successful in reproducing various empirical findings. However, they have largely ignored the fact that individuals make decisions on whether to create links to other individuals based on the cost and benefit of linking, and the fact that individuals may use perception of the network in their decision making. In this paper, we study the evolution of social networks in terms of perception-based strategic link formation. Here each individual has her own perception of the actual network and uses it to decide whether to create a link to another individual. An individual with the least perception accuracy can benefit from updating her perception using that of the most accurate individual via a new link. This benefit is compared to the cost of linking in decision making. Once a new link is created, it affects the accuracies of other individuals' perceptions, leading to a further evolution of the actual network. As for initial actual networks, we consider both homogeneous and heterogeneous cases. The homogeneous initial actual network is modeled by Erdős-Rényi (ER) random networks, while we take a star network for the heterogeneous case. In all cases, individual perceptions of the actual network are modeled by ER random networks with controllable linking probability. The stable link density of the actual network is found to show discontinuous transitions or jumps according to the cost of linking. As the number of jumps is a consequence of the dynamical complexity, we discuss the effect of initial conditions on the number of jumps, finding that the dynamical complexity strongly depends on how much individuals initially overestimate or underestimate the link density of the actual network. For the heterogeneous case, the role of the highly connected individual as an information spreader is also discussed.
Chitosan and alginate types of bio-membrane in fuel cell application: An overview
NASA Astrophysics Data System (ADS)
Shaari, N.; Kamarudin, S. K.
2015-09-01
The major problems of polymer electrolyte membrane fuel cell technology that need to be highlighted are fuel crossover (e.g., methanol or hydrogen leaking across fuel cell membranes), CO poisoning, low durability, and high cost. Chitosan- and alginate-based biopolymer membranes have recently been used to solve these problems with promising results. Current research in biopolymer membrane materials and systems has focused on the following: 1) the development of novel and efficient biopolymer materials; and 2) increasing the processing capacity of membrane operations. Consequently, research on chitosan- and alginate-based biopolymers seeks to enhance fuel cell performance by improving proton conductivity and membrane durability and by reducing fuel crossover and electro-osmotic drag. There are four groups of chitosan-based membranes (categorized according to their reaction and preparation): self-cross-linked and salt-complexed chitosans, chitosan-based polymer blends, chitosan/inorganic filler composites, and chitosan/polymer composites. Only three alginate-based membranes have been synthesized for fuel cell application. This work aims to review the state of the art in the development of chitosan- and alginate-based biopolymer membranes for fuel cell applications.
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail
2015-04-01
The research deals with the adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high-dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high-dimensional data [1,3]. In the present research, Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevance; and 2) adaptive GRNN applied to evaluate all N possible feature-subset models [in the case of wind fields, N = (2^13 - 1) = 8191] and rank them according to the cross-validation error. In both cases training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictability of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, was studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and model complex high-dimensional data efficiently, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
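The GRNN described above is, at its core, Nadaraya-Watson kernel regression with one bandwidth per feature; a large bandwidth effectively switches a feature off, which is what makes the anisotropic variant usable for feature selection. A minimal sketch with synthetic data (dimensions, bandwidths, and the leave-one-out scoring are illustrative assumptions, not the study's configuration):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, bandwidths):
    """GRNN = Nadaraya-Watson kernel regression: y(x) = sum_i w_i y_i / sum_i w_i
    with Gaussian weights. An anisotropic bandwidth vector (one sigma per
    feature) lets an irrelevant feature be damped by a large sigma."""
    d = (X_query[:, None, :] - X_train[None, :, :]) / bandwidths
    w = np.exp(-0.5 * (d ** 2).sum(axis=2))        # shape (n_query, n_train)
    return (w @ y_train) / w.sum(axis=1)

def loo_error(X, y, bandwidths):
    """Leave-one-out cross-validation error, as used to rank feature subsets."""
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        pred = grnn_predict(X[mask], y[mask], X[i:i + 1], bandwidths)[0]
        errs.append((pred - y[i]) ** 2)
    return float(np.mean(errs))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)   # only feature 0 matters
# Damping the irrelevant feature should cross-validate better than treating
# both features as equally relevant.
print(loo_error(X, y, np.array([0.1, 10.0])), loo_error(X, y, np.array([0.1, 0.1])))
```

Ranking all 2^13 - 1 wind-field models then amounts to repeating this leave-one-out scoring over every feature subset.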
NASA Astrophysics Data System (ADS)
Sikder, Md. Kabir Uddin; Stone, Kyle A.; Kumar, P. B. Sunil; Laradji, Mohamed
2014-08-01
We investigate the combined effects of transmembrane proteins and the subjacent cytoskeleton on the dynamics of phase separation in multicomponent lipid bilayers using computer simulations of a particle-based implicit solvent model for lipid membranes with soft-core interactions. We find that microphase separation can be achieved by the protein confinement by the cytoskeleton. Our results have relevance to the finite size of lipid rafts in the plasma membrane of mammalian cells.
Thermophysical Model of S-complex NEAs: 1627 Ivar
NASA Astrophysics Data System (ADS)
Crowell, Jenna L.; Howell, Ellen S.; Magri, Christopher; Fernandez, Yan R.; Marshall, Sean E.; Warner, Brian D.; Vervack, Ronald J.
2015-11-01
We present updates to the thermophysical model of asteroid 1627 Ivar. Ivar is an Amor-class near-Earth asteroid (NEA) with a taxonomic type of Sqw [1] and a rotation period of 4.795162 ± 5.4 × 10^-6 hours [2]. In 2013, our group observed Ivar in radar, in CCD lightcurves, and in the near-IR reflected and thermal regimes (0.8 - 4.1 µm) using the Arecibo Observatory’s 2380 MHz radar, the Palmer Divide Station’s 0.35 m telescope, and the SpeX instrument at the NASA IRTF, respectively. Using these radar and lightcurve data, we generated a detailed shape model of Ivar using the software SHAPE [3,4]. Our shape model reveals more surface detail compared to earlier models [5], and we found Ivar to be an elongated asteroid with maximum extents along the three body-fixed coordinates of 12 × 11.76 × 6 km. For our thermophysical modeling, we have used SHERMAN [6,7] with input parameters such as the asteroid’s IR emissivity, optical scattering law and thermal inertia, in order to complete thermal computations based on our shape model and the known spin state. We then create synthetic near-IR spectra that can be compared to our observed spectra, which cover a wide range of Ivar’s rotational longitudes and viewing geometries. As has been noted [6,8], the use of an accurate shape model is often crucial for correctly interpreting multi-epoch thermal emission observations. We will present what SHERMAN has let us determine about the reflective, thermal, and surface properties of Ivar that best reproduce our spectra. From our derived best-fit thermal parameters, we will learn more about the regolith, surface properties, and heterogeneity of Ivar and how those properties compare to those of other S-complex asteroids. References: [1] DeMeo et al. 2009, Icarus 202, 160-180 [2] Crowell, J. et al. 2015, LPSC 46 [3] Magri C. et al. 2007, Icarus 186, 152-177 [4] Crowell, J. et al. 2014, AAS/DPS 46 [5] Kaasalainen, M. et al. 2004, Icarus 167, 178-196 [6] Crowell, J. et
Compartmental models for apical efflux by P-glycoprotein. Part 1. Evaluation of model complexity
Nagar, Swati; Tucker, Jalia; Weiskircher, Erica A.; Bhoopathy, Siddhartha; Hidalgo, Ismael J.; Korzekwa, Ken
2013-01-01
Purpose: With the goal of quantifying P-gp transport kinetics, Part 1 of these manuscripts evaluates different compartmental models and Part 2 applies these models to kinetic data.
Methods: Models were developed to simulate the effect of apical efflux transporters on intracellular concentrations of six drugs. The effect of experimental variability on model predictions was evaluated. Several models were evaluated, and characteristics including membrane configuration, lipid content, and apical surface area (asa) were varied.
Results: Passive permeabilities from MDCK-MDR1 cells in the presence of cyclosporine gave lower model errors than from MDCK control cells. Consistent with the results in Part 2, model configuration had little impact on calculated model errors. The 5-compartment model was the simplest model that reproduced experimental lag times. Lipid content and asa had minimal effect on model errors, predicted lag times, and intracellular concentrations. Including endogenous basolateral uptake activity can decrease model errors. Models with and without explicit membrane barriers differed markedly in their predicted intracellular concentrations for basolateral drug exposure. Single point data resulted in clearances similar to time course data.
Conclusions: Compartmental models are useful to evaluate the impact of efflux transporters on intracellular concentrations. Whereas a 3-compartment model may be sufficient to predict the impact of transporters that efflux drugs from the cell, a 5-compartment model with explicit membranes may be required to predict intracellular concentrations when efflux occurs from the membrane. More complex models including additional compartments may be unnecessary. PMID:24019023
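The 3-compartment case can be sketched as a small system of ODEs: apical, cellular, and basolateral amounts exchange passively, with an extra apically directed efflux clearance acting on the cell. The rate constants below are illustrative, not the paper's fitted values, and simple Euler integration stands in for a proper ODE solver:

```python
def simulate_3comp(ka_c, kc_a, kc_b, kb_c, k_efflux,
                   amounts0=(10.0, 0.0, 0.0), dt=0.001, t_end=50.0):
    """Minimal 3-compartment sketch (apical A / cell C / basolateral B,
    amounts, first-order rate constants): passive exchange across both
    membranes plus an apical efflux clearance from the cell. Mass is
    conserved because effluxed drug returns to the apical compartment."""
    A, C, B = amounts0
    for _ in range(int(t_end / dt)):
        dA = -ka_c * A + (kc_a + k_efflux) * C
        dC = ka_c * A + kb_c * B - (kc_a + kc_b + k_efflux) * C
        dB = kc_b * C - kb_c * B
        A, C, B = A + dt * dA, C + dt * dC, B + dt * dB
    return A, C, B

# With active apical efflux, the steady-state intracellular amount drops.
_, C_no_pgp, _ = simulate_3comp(1.0, 1.0, 1.0, 1.0, 0.0)
_, C_pgp, _ = simulate_3comp(1.0, 1.0, 1.0, 1.0, 5.0)
print(C_no_pgp, C_pgp)
```

The 5-compartment model of the paper adds explicit apical and basolateral membrane compartments, which is what allows efflux to act on drug in the membrane rather than in the cytosol.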
Rumor spreading model with noise interference in complex social networks
NASA Astrophysics Data System (ADS)
Zhu, Liang; Wang, Youguo
2017-03-01
In this paper, a modified susceptible-infected-removed (SIR) model is proposed to explore rumor diffusion on complex social networks. We take the variation of connectivity into consideration and model this variation as noise. Following related literature on virus spreading on networks, the noise is described as standard Brownian motion, and stochastic differential equations (SDEs) are derived to characterize the dynamics of rumor diffusion on both homogeneous and heterogeneous networks. Theoretical analysis on homogeneous networks then investigates the solution of the SDE model and the steady state of rumor diffusion. Simulations on both Barabási-Albert (BA) and Watts-Strogatz (WS) networks show that the addition of noise accelerates rumor diffusion and expands the diffusion size; moreover, the spreading speed on the BA network is much faster than on the WS network under the same noise intensity. In addition, there exists a rumor diffusion threshold, in the statistical average sense, on homogeneous networks that is absent on heterogeneous networks. Finally, we find an overall positive correlation between the peak number of infected individuals and the noise intensity, and a negative correlation between the rumor lifecycle and the noise intensity.
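A mean-field version of such an SDE can be integrated with the Euler-Maruyama scheme. The drift and diffusion terms below are a simplified stand-in (noise perturbing the spreading rate on a homogeneous network of average degree k), not the paper's exact equations, and all parameter values are illustrative:

```python
import numpy as np

def simulate_rumor(lam=0.3, delta=0.1, sigma=0.0, k=6, i0=0.01,
                   dt=0.01, t_end=60.0, seed=7):
    """Euler-Maruyama sketch of a mean-field SIR-type rumor model on a
    homogeneous network with average degree k. The spreading rate lam is
    perturbed by Gaussian white noise of intensity sigma, standing in for
    connectivity variation. Returns the final rumor size (fraction removed)."""
    rng = np.random.default_rng(seed)
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(t_end / dt)):
        dW = rng.normal(0.0, np.sqrt(dt))
        spread = k * s * i * (lam * dt + sigma * dW)   # noisy spreading term
        stifle = delta * i * dt                        # removal/stifling term
        spread = min(max(spread, 0.0), s)              # keep fractions in [0, 1]
        stifle = min(stifle, i + spread)
        s, i, r = s - spread, i + spread - stifle, r + stifle
    return r

# Spreading well above the stifling rate informs most of the network;
# spreading below the threshold k*lam < delta dies out.
print(round(simulate_rumor(lam=0.3), 3), round(simulate_rumor(lam=0.01), 3))
```

The network simulations in the paper replace this mean-field update with per-node dynamics on BA and WS topologies.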
A minimal model for congestion phenomena on complex networks
NASA Astrophysics Data System (ADS)
DeMartino, Daniele; Dall'Asta, Luca; Bianconi, Ginestra; Marsili, Matteo
2009-08-01
We study a minimal model of traffic flows in complex networks, simple enough to allow analytical results, but with a very rich phenomenology, presenting continuous, discontinuous, and hybrid phase transitions between a free-flow phase and a congested phase, critical points, and different scaling behaviors with the system size. It consists of random walkers on a queuing network with one-range repulsion, where particles can be destroyed only if they can move. We focus on the dependence on the topology as well as on the level of traffic control. We obtain transition curves and phase diagrams analytically for the ensemble of uncorrelated networks and numerically for single instances. We find that traffic control improves global performance, enlarging the free-flow region in parameter space, only in heterogeneous networks. Traffic control introduces non-linear effects and, beyond a critical strength, may trigger the appearance of a congested phase in a discontinuous manner. The model also reproduces the crossover in the scaling of traffic fluctuations empirically observed in the Internet and, moreover, a conserved version can reproduce qualitatively some stylized facts of traffic in transportation networks.
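The free-flow/congested distinction can be illustrated with a toy queueing simulation in the spirit of the model: walkers hop between queues, hops are refused with a probability that grows with the target queue (a one-range repulsion), and a walker can leave the system only when it moves. The graph construction, refusal rule, and exit probability below are illustrative assumptions, not the paper's exact definitions:

```python
import random

def run_traffic(n=200, p_create=0.05, p_exit=0.5, steps=3000, seed=3):
    """Toy queueing-network sketch: each node holds a queue of walkers; at
    every step each node may receive a newly created walker and tries to send
    one walker to a random neighbor. The hop succeeds with probability
    1/(1 + target queue length) (repulsion from busy queues), and a walker
    that does move exits the system with probability p_exit.
    Returns the mean queue length at the end of the run."""
    rng = random.Random(seed)
    # Ring plus one random shortcut per node, a stand-in for a random network.
    nbrs = {v: [(v - 1) % n, (v + 1) % n, rng.randrange(n)] for v in range(n)}
    queue = [0] * n
    for _ in range(steps):
        for v in rng.sample(range(n), n):      # random update order
            if rng.random() < p_create:
                queue[v] += 1                  # walker injection
            if queue[v] == 0:
                continue
            w = rng.choice(nbrs[v])
            if rng.random() < 1.0 / (1 + queue[w]):
                queue[v] -= 1
                if rng.random() < p_exit:
                    continue                   # walker leaves the system
                queue[w] += 1
    return sum(queue) / n

# Low injection keeps queues short (free flow); high injection outpaces the
# throughput that the repulsion rule allows, and queues build up (congestion).
print(run_traffic(p_create=0.01), run_traffic(p_create=0.2))
```

The analytical results quoted in the abstract are obtained for ensembles of uncorrelated networks rather than a single fixed instance like this one.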
Electromagnetic modelling of Ground Penetrating Radar responses to complex targets
NASA Astrophysics Data System (ADS)
Pajewski, Lara; Giannopoulos, Antonis
2014-05-01
This work deals with the electromagnetic modelling of composite structures for Ground Penetrating Radar (GPR) applications. It was developed within the Short-Term Scientific Mission ECOST-STSM-TU1208-211013-035660, funded by COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". The Authors define a set of test concrete structures, hereinafter called cells. The size of each cell is 60 x 100 x 18 cm and the content varies with growing complexity, from a simple cell with few rebars of different diameters embedded in concrete at increasing depths, to a final cell with a quite complicated pattern, including a layer of tendons between two overlying meshes of rebars. Other cells, of intermediate complexity, contain pvc ducts (air filled or hosting rebars), steel objects commonly used in civil engineering (as a pipe, an angle bar, a box section and an u-channel), as well as void and honeycombing defects. One of the cells has a steel mesh embedded in it, overlying two rebars placed diagonally across the comers of the structure. Two cells include a couple of rebars bent into a right angle and placed on top of each other, with a square/round circle lying at the base of the concrete slab. Inspiration for some of these cells is taken from the very interesting experimental work presented in Ref. [1]. For each cell, a subset of models with growing complexity is defined, starting from a simple representation of the cell and ending with a more realistic one. In particular, the model's complexity increases from the geometrical point of view, as well as in terms of how the constitutive parameters of involved media and GPR antennas are described. Some cells can be simulated in both two and three dimensions; the concrete slab can be approximated as a finite-thickness layer having infinite extension on the transverse plane, thus neglecting how edges affect radargrams, or else its finite size can be fully taken into account. The permittivity of concrete can be
Analysis of a Mouse Skin Model of Tuberous Sclerosis Complex
Guo, Yanan; Dreier, John R.; Cao, Juxiang; Du, Heng; Granter, Scott R.; Kwiatkowski, David J.
2016-01-01
Tuberous Sclerosis Complex (TSC) is an autosomal dominant tumor suppressor gene syndrome in which patients develop several types of tumors, including facial angiofibroma, subungual fibroma, Shagreen patch, angiomyolipomas, and lymphangioleiomyomatosis. It is due to inactivating mutations in TSC1 or TSC2. We sought to generate a mouse model of one or more of these tumor types by targeting deletion of the Tsc1 gene to fibroblasts using the Fsp-Cre allele. Mutant Tsc1cc Fsp-Cre+ mice survived a median of nearly a year and developed tumors in multiple sites, but did not develop angiomyolipoma or lymphangioleiomyomatosis. They did develop a prominent skin phenotype, with marked thickening of the dermis and accumulation of mast cells, that was minimally responsive to systemic rapamycin therapy and quite different from the pathology seen in human TSC skin lesions. Recombination and loss of Tsc1 were demonstrated in skin fibroblasts in vivo and in cultured skin fibroblasts. Loss of Tsc1 in fibroblasts in mice does not lead to a model of angiomyolipoma or lymphangioleiomyomatosis. PMID:27907099
Fitting meta-analytic structural equation models with complex datasets.
Wilson, Sandra Jo; Polanin, Joshua R; Lipsey, Mark W
2016-06-01
A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation coefficients that exhibit substantial heterogeneity, some of which obscures the relationships between the constructs of interest or undermines the comparability of the correlations across the cells. One component of this approach is a three-level random effects model capable of synthesizing a pooled correlation matrix with dependent correlation coefficients. Another component is a meta-regression that can be used to generate covariate-adjusted correlation coefficients that reduce the influence of selected unevenly distributed moderator variables. A non-technical presentation of these techniques is given, along with an illustration of the procedures with a meta-analytic dataset. Copyright © 2016 John Wiley & Sons, Ltd.
The complexity of model checking for belief revision and update
Liberatore, P.; Schaerf, M.
1996-12-31
One of the main challenges in the formal modeling of common-sense reasoning is the ability to cope with the dynamic nature of the world. Among the approaches put forward to address this problem are belief revision and update. Given a knowledge base T, representing our knowledge of the "state of affairs" of the world of interest, it is possible that we are led to trust another piece of information P, possibly inconsistent with the old one T. The aim of revision and update operators is to characterize the revised knowledge base T' that incorporates the new formula P into the old one T while preserving consistency and, at the same time, avoiding the loss of too much information in this process. In this paper we study the computational complexity of one of the main computational problems of belief revision and update: deciding if an interpretation M is a model of the revised knowledge base.
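As a concrete instance of the model-checking problem studied here, consider Dalal's revision operator (one of several operators such papers analyze): M is a model of T ∘ P iff M satisfies P and lies at minimal Hamming distance from the models of T among all models of P. The brute-force check below enumerates all assignments, which itself hints at why these problems sit high in the polynomial hierarchy; the encoding of formulas as Python predicates is an illustrative choice:

```python
from itertools import product

def models(formula, n_vars):
    """All truth assignments (0/1 tuples) satisfying `formula`,
    given as a predicate on an assignment."""
    return [m for m in product((0, 1), repeat=n_vars) if formula(m)]

def is_model_of_revision(M, T, P, n_vars):
    """Model checking for Dalal's revision: M models T * P iff M satisfies P
    and is at minimal Hamming distance from the models of T among P's models."""
    def dist_to_T(m):
        return min(sum(a != b for a, b in zip(m, mt)) for mt in models(T, n_vars))
    mods_P = models(P, n_vars)
    if tuple(M) not in mods_P:
        return False
    return dist_to_T(tuple(M)) == min(dist_to_T(m) for m in mods_P)

# T = a AND b; new information P = NOT a. Minimal change keeps b.
T = lambda m: bool(m[0] and m[1])
P = lambda m: not m[0]
print(is_model_of_revision((0, 1), T, P, 2))   # keeps b: minimal change
print(is_model_of_revision((0, 0), T, P, 2))   # also flips b: not minimal
```

Other operators (e.g. Winslett's update) replace the global minimization over models of T with a pointwise one, which is exactly the kind of variation whose complexity the paper compares.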
Reliable modeling of the electronic spectra of realistic uranium complexes
NASA Astrophysics Data System (ADS)
Tecmer, Paweł; Govind, Niranjan; Kowalski, Karol; de Jong, Wibe A.; Visscher, Lucas
2013-07-01
We present an EOMCCSD (equation of motion coupled cluster with singles and doubles) study of excited states of the small [UO2]2+ and [UO2]+ model systems as well as the larger U(VI)O2(saldien) complex. In addition, the triples contribution within the EOMCCSDT and CR-EOMCCSD(T) (completely renormalized EOMCCSD with non-iterative triples) approaches for the [UO2]2+ and [UO2]+ systems, as well as the active-space variant of the CR-EOMCCSD(T) method, CR-EOMCCSd(t), for the U(VI)O2(saldien) molecule, are investigated. The coupled cluster data were employed as a benchmark to choose the most appropriate exchange-correlation functional for subsequent time-dependent density functional theory (TD-DFT) studies of the transition energies for closed-shell species. Furthermore, the influence of the saldien ligands on the electronic structure and excitation energies of the [UO2]+ molecule is discussed. The electronic excitations as well as their oscillator dipole strengths modeled with the TD-DFT approach using the CAM-B3LYP exchange-correlation functional for [U(V)O2(saldien)]- with explicit inclusion of two dimethyl sulfoxide molecules are in good agreement with the experimental data of Takao et al. [Inorg. Chem. 49, 2349 (2010), 10.1021/ic902225f].
Lohner, K; Latal, A; Lehrer, R I; Ganz, T
1997-02-11
alpha-Defensins are antimicrobial peptides with 29-35 amino acid residues and cysteine-stabilized amphiphilic, triple-stranded beta-sheet structures. We used high-precision differential scanning microcalorimetry to investigate the effects of a human neutrophil alpha-defensin, HNP-2, on the phase behavior of model membranes mimicking bacterial and erythrocyte cell membranes. In the presence of this positively charged peptide, the phase behavior of liposomes containing negatively charged phosphatidylglycerol was markedly altered even at a high lipid-to-peptide molar ratio of 500:1. Addition of HNP-2 to liposomes mimicking bacterial membranes (mixtures of dipalmitoylphosphatidylglycerol and -ethanolamine) resulted in phase separation owing to some domains being peptide-poor and others peptide-rich. The latter are characterized by an increase of the main transition temperature, most likely arising from electric shielding of the phospholipid headgroups by the peptide. On the other hand, HNP-2 did not affect the phase behavior of membranes mimicking erythrocyte membranes (equimolar mixtures of dipalmitoylphosphatidylcholine and sphingomyelin) or of the pure single components. This is in contrast to melittin, which significantly affected the phase behavior of choline phospholipids in accordance with its nonspecific lytic activity. These results support the hypothesis of preferential interaction of defensins with negatively charged membrane cell surfaces, a common feature of bacterial cell membranes, and demonstrate that HNP-2 discriminates between model membrane systems mimicking prokaryotic and eukaryotic cell membranes.
Monzel, C; Schmidt, D; Kleusch, C; Kirchenbüchler, D; Seifert, U; Smith, A-S; Sengupta, K; Merkel, R
2015-10-06
Stochastic displacements or fluctuations of biological membranes are increasingly recognized as an important aspect of many physiological processes, but hitherto their precise quantification in living cells was limited due to a lack of tools to accurately record them. Here we introduce a novel technique, dynamic optical displacement spectroscopy (DODS), to measure stochastic displacements of membranes with unprecedented combined spatiotemporal resolution of 20 nm and 10 μs. The technique was validated by measuring bending fluctuations of model membranes. DODS was then used to explore the fluctuations in human red blood cells, which showed an ATP-induced enhancement of non-Gaussian behaviour. Plasma membrane fluctuations of human macrophages were quantified to this accuracy for the first time. Stimulation with a cytokine enhanced non-Gaussian contributions to these fluctuations. Simplicity of implementation and high accuracy make DODS a promising tool for comprehensive understanding of stochastic membrane processes.
Neves, Ana Rute; Nunes, Cláudia; Reis, Salette
2016-01-01
Resveratrol is a polyphenol compound with great value in cancer therapy, cardiovascular protection, and neurodegenerative disorders. The mechanism by which resveratrol exerts such pleiotropic effects is not yet clear, and there is a strong need to understand the influence of this compound on the regulation of lipid domain formation in membrane structures. The aim of the present study was to reveal potential molecular interactions between resveratrol and the lipid rafts found in cell membranes by means of Förster resonance energy transfer, DPH fluorescence quenching, and a Triton X-100 detergent resistance assay. Liposomes composed of egg phosphatidylcholine, cholesterol, and sphingomyelin were used as model membranes. The results revealed that resveratrol induces phase separation and formation of liquid-ordered domains in bilayer structures. The formation of such tightly packed lipid rafts is important for different signal transduction pathways, through the regulation of membrane-associating proteins, which may explain several pharmacological activities of this compound.
Thermophysical Model of S-complex NEAs: 1627 Ivar
NASA Astrophysics Data System (ADS)
Crowell, Jenna; Howell, Ellen S.; Magri, Christopher; Fernandez, Yanga R.; Marshall, Sean E.; Warner, Brian D.; Vervack, Ronald J., Jr.
2016-01-01
We present an updated thermophysical model of 1627 Ivar, an Amor class near Earth asteroid (NEA) with a taxonomic type of Sqw [1]. Ivar's large size and close approach to Earth in 2013 (minimum distance 0.32 AU) provided an opportunity to observe the asteroid over many different viewing angles for an extended period of time, which we have utilized to generate a shape and thermophysical model of Ivar, allowing us to discuss the implications that these results have on the regolith of this asteroid. Using the software SHAPE [2,3], we updated the nonconvex shape model of Ivar, which was constructed by Kaasalainen et al. [4] using photometry. We incorporated 2013 radar data and CCD lightcurves, using the Arecibo Observatory's 2380-MHz radar and the 0.35-m telescope at the Palmer Divide Station respectively, to create a shape model with higher surface detail. We found Ivar to be elongated with maximum extended lengths along principal axes of 12 x 5 x 6 km and a rotation period of 4.795162 ± 5.4 × 10^-6 hrs [5]. In addition to these radar data and lightcurves, we also observed Ivar in the near IR using the SpeX instrument at the NASA IRTF. These data cover a wide range of Ivar's rotational longitudes and viewing geometries. We have used SHERMAN [6,7] with input parameters such as the asteroid's IR emissivity, optical scattering law, and thermal inertia, in order to complete thermal computations based on our shape model and known spin state. Using this procedure, we find which reflective, thermal, and surface properties best reproduce the observed spectra. This allows us to characterize properties of the asteroid's regolith and study heterogeneity of the surface. We will compare these results with those of other S-complex asteroids to better understand this asteroid type and the uniqueness of 1627 Ivar.[1] DeMeo et al. 2009, Icarus 202, 160-180 [2] Magri, C. et al. 2011, Icarus 214, 210-227. [3] Crowell, J. et al. 2014, AAS/DPS 46 [4] Kaasalainen, M. et al. 2004, Icarus 167, 178
A New Approach to Modelling Student Retention through an Application of Complexity Thinking
ERIC Educational Resources Information Center
Forsman, Jonas; Linder, Cedric; Moll, Rachel; Fraser, Duncan; Andersson, Staffan
2014-01-01
Complexity thinking is relatively new to education research and has rarely been used to examine complex issues in physics and engineering education. Issues in higher education such as student retention have been approached from a multiplicity of perspectives and are recognized as complex. The complex system of student retention modelling in higher…
Bonten, Luc T C; Groenenberg, Jan E; Meesenburg, Henning; de Vries, Wim
2011-10-01
Various dynamic soil chemistry models have been developed to gain insight into impacts of atmospheric deposition of sulphur, nitrogen and other elements on soil and soil solution chemistry. Sorption parameters for anions and cations are generally calibrated for each site, which hampers extrapolation in space and time. On the other hand, recently developed surface complexation models (SCMs) have been successful in predicting ion sorption for static systems using generic parameter sets. This study reports the inclusion of an assemblage of these SCMs in the dynamic soil chemistry model SMARTml and applies this model to a spruce forest site in Solling Germany. Parameters for SCMs were taken from generic datasets and not calibrated. Nevertheless, modelling results for major elements matched observations well. Further, trace metals were included in the model, also using the existing framework of SCMs. The model predicted sorption for most trace elements well.
Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication
NASA Astrophysics Data System (ADS)
Thompson, Kimberly M.
Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present the insights of efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.
The centromere-kinetochore complex: a repeat subunit model
1991-01-01
The three-dimensional structure of the kinetochore and the DNA/protein composition of the centromere-kinetochore region was investigated using two novel techniques, caffeine-induced detachment of unreplicated kinetochores and stretching of kinetochores by hypotonic and/or shear forces generated in a cytocentrifuge. Kinetochore detachment was confirmed by EM and immunostaining with CREST autoantibodies. Electron microscopic analyses of serial sections demonstrated that detached kinetochores represented fragments derived from whole kinetochores. This was especially evident for the seven large kinetochores in the male Indian muntjac that gave rise to 80-100 fragments upon detachment. The kinetochore fragments, all of which interacted with spindle microtubules and progressed through the entire repertoire of mitotic movements, provide evidence for a subunit organization within the kinetochore. Further support for a repeat subunit model was obtained by stretching or uncoiling the metaphase centromere-kinetochore complex by hypotonic treatments. When immunostained with CREST autoantibodies and subsequently processed for in situ hybridization using synthetic centromere probes, stretched kinetochores displayed a linear array of fluorescent subunits arranged in a repetitive pattern along a centromeric DNA fiber. In addition to CREST antigens, each repetitive subunit was found to bind tubulin and contain cytoplasmic dynein, a microtubule motor localized in the zone of the corona. Collectively, the data suggest that the kinetochore, a plate-like structure seen by EM on many eukaryotic chromosomes is formed by the folding of a linear DNA fiber consisting of tandemly repeated subunits interspersed by DNA linkers. This model, unlike any previously proposed, can account for the structural and evolutional diversity of the kinetochore and its relationship to the centromere of eukaryotic chromosomes of many species. PMID:1828250
Complex network model of the Treatise on Cold Damage Disorders
NASA Astrophysics Data System (ADS)
Shao, Feng-jing; Sui, Yi; Zhou, Yong-hong; Sun, Ren-cheng
2016-10-01
Investigating the underlying principles of the Treatise on Cold Damage Disorders is meaningful and interesting. In this study, we investigated the symptoms, herbal formulae, herbal drugs, and their relationships in this treatise based on a multi-subnet composited complex network model (MCCN). Syndrome subnets were constructed for the symptoms and a formula subnet for herbal drugs. By subnet compounding using MCCN, a composited network was obtained that described the treatment relationships between syndromes and formulae. The results obtained by topological analysis suggested some prescription laws that could be validated in clinics. After subnet reduction using the MCCN, six channel (Tai-yang, Yang-ming, Shao-yang, Tai-yin, Shao-yin, and Jue-yin) subnets were obtained. By analyzing the strengths of the relationships among these six channel subnets, we found that the Tai-yang channel and Yang-ming channel were most strongly related to each other, and we found symptoms that implied pathogen movements and transformations among the six channels. This study could help therapists to obtain a deeper understanding of this ancient treatise.
How Good Are Statistical Models at Approximating Complex Fitness Landscapes?
du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian
2016-09-01
Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations.
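The single-plus-pairwise regression described above can be sketched in a few lines. The snippet below is an illustrative toy, a 5-site binary landscape with made-up coefficients rather than the paper's RNA landscape; because the toy's true fitness lies inside the pairwise model class, an ordinary least-squares fit reproduces the sampled fitnesses essentially exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 5                                    # binary sequence length (toy)

# Toy "true" landscape: additive single-site effects plus pairwise epistasis.
beta1 = rng.normal(size=L)
beta2 = np.triu(rng.normal(size=(L, L)), k=1) * 0.3

def fitness(g):
    """True fitness: sum of single-site terms and upper-triangular
    pairwise interaction terms."""
    return g @ beta1 + g @ beta2 @ g

def features(g):
    """Regression features: the L single-site indicators followed by
    all L*(L-1)/2 pairwise products."""
    pairs = [g[i] * g[j] for i in range(L) for j in range(i + 1, L)]
    return np.concatenate([g, pairs])

# Sparse random sample of genotypes, standing in for sparse fitness assays.
G = rng.integers(0, 2, size=(40, L))
y = np.array([fitness(g) for g in G])
X = np.array([features(g) for g in G])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The true landscape is inside the pairwise model class, so the
# regression reproduces the sampled fitnesses up to round-off.
train_err = np.max(np.abs(X @ coef - y))
```

On a real landscape with higher-order epistasis, `train_err` would not vanish, which is exactly the approximation gap the abstract examines.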
Modeling complex neuropsychiatric disorders with human induced pluripotent stem cells.
Tobe, Brian T D; Snyder, Evan Y; Nye, Jeffrey S
2011-10-01
Identifying the molecular and cellular basis of complex neuropsychiatric disorders (cNPDs) has been limited by the inaccessibility of central neurons, variability within broad diagnostic classifications, and the interplay of genetic and environmental factors. Recent work utilizing neuronally differentiated human induced pluripotent stem cells (hiPSCs) from Mendelian and polygenic cNPDs is beginning to illuminate neuritic, synaptic or cell body variations accompanied by specific gene or protein expression alterations largely mimicking known pathology. In some cases, phenotypes have only emerged after application of cellular stress or long duration of differentiation. Pathological and cellular expression features are fully or partially responsive to pharmacological treatment highlighting the potential utility of differentiated hiPSCs for discovery of personalized therapeutics and for identifying pathogenetically relevant targets in subgroups of patients within a broad syndromic classification. Because of the inherent variability in developing and differentiating hiPSC lines and the multiple comparisons implicit in 'omics' technologies, rigorous algorithms for assuring statistical significance and independent confirmation of results, will be required for robust modeling of cNPDs.
Slip complexity and frictional heterogeneities in dynamic fault models
NASA Astrophysics Data System (ADS)
Bizzarri, A.
2005-12-01
The numerical modeling of earthquake rupture requires the specification of the fault system geometry, the mechanical properties of the media surrounding the fault, the initial conditions and the constitutive law for fault friction. The latter accounts for the fault zone properties and allows for the description of processes of nucleation, propagation, healing and arrest of a spontaneous rupture. In this work we solve the fundamental elasto-dynamic equation for a planar fault, adopting different constitutive equations (slip-dependent and rate- and state-dependent friction laws). We show that the slip patterns may be complicated by different causes. The spatial heterogeneities of constitutive parameters are able to cause the healing of slip, like barrier-healing or slip pulses. Our numerical experiments show that the heterogeneities of the parameter L affect the dynamic rupture propagation and weakly modify the dynamic stress drop and the rupture velocity. The heterogeneity of a and b parameters affects the dynamic rupture propagation in a more complex way: a velocity strengthening area (a > b) can arrest a dynamic rupture, but can be driven to an instability if suddenly loaded by the dynamic rupture front. Our simulations provide a picture of the complex interactions between fault patches having different frictional properties. Moreover, the slip distribution on the fault plane is complicated considering the effects of the rake rotation during the propagation: depending on the position on the fault plane, the orientation of instantaneous total dynamic traction can change with time with respect to the imposed initial stress direction. These temporal rake rotations depend on the amplitude of the initial stress and on its distribution. They also depend on the curvature and direction of the rupture front with respect to the imposed initial stress direction: this explains why rake rotations are mostly located near the rupture front and within the cohesive zone, where the
Three-dimensional modeling of canopy flow in complex terrain
NASA Astrophysics Data System (ADS)
Xu, X.; Yi, C.; Montagnani, L.
2013-12-01
Flows within and just above a forest canopy over mountainous terrain are highly complex and substantially influence the biosphere-atmosphere exchange of mass and energy. Because of this strong spatial variation, canopy flow in complex terrain is poorly understood from point-based tower measurements. We employ a numerical model integrated with biogenic CO2 processes to examine the impacts of topography, canopy structure, and synoptic atmospheric motion on canopy flow and the associated CO2 transport in an alpine forest, with special focus on stable nocturnal conditions when biogenic CO2 emission is active. Our model predictions agree better with tower measurements when a background synoptic wind is present, which produces better large-scale mixing; without it, the local slope flow in the modeled domain is purely thermally driven because the surrounding mountain-valley circulation is neglected. Our results show that the large-scale synoptic wind is modified by the local slope-canopy flow within and just above the canopy. When the synoptic wind is down-slope (Figure 1a), a recirculation forms on the downwind slope, with cool air and a high accumulation of CO2 in front of tall and dense vegetation. When the synoptic wind is up-slope (Figure 1b), canopy flow on the upper part of the slope follows the direction of the synoptic wind, while canopy flow on the lower part blows down-slope. The up-slope wind causes better mixing in the canopy and leads to a smaller CO2 accumulation close to the slope surface. The local down-slope wind (Figure 1c) causes a rich, deep CO2 build-up in the downwind direction on the lower slope. Our numerical experiments demonstrate that a three-dimensional CFD approach is a useful tool for understanding the relationships between point tower measurements and the surrounding field distributions. Acknowledgement: This research was supported by NSF Grants ATM-0930015, CNS-0958379 & CNS-0855217, PSC-CUNY ENHC-42-64 & CUNY HPCC. Figure 1: CO2 distribution within and just above the canopy.
Meteorology and air quality modeling in complex terrain: a literature review
DeMarrais, G.A.; Clark, T.L.
1982-04-01
Modeling air quality in complex terrain has been, and remains, a difficult task, chiefly because of the difficulty of parameterizing the complex wind-flow regimes. In complex terrain, significant submesoscale forcings perturb the mesoscale wind field. This literature review summarizes over 250 studies of meteorology and air quality modeling in complex terrain for the benefit of those who wish to broaden their knowledge of the subject.
Using SysML to model complex systems for security.
Cano, Lester Arturo
2010-08-01
As security systems integrate more information technology, their design has tended to become more complex. Some of the most difficult issues in designing Complex Security Systems (CSS) are: capturing requirements; defining hardware interfaces; defining software interfaces; and integrating technologies such as radio systems, voice-over-IP systems, and situational awareness systems.
Scalable complexity-distortion model for fast motion estimation
NASA Astrophysics Data System (ADS)
Yi, Xiaoquan; Ling, Nam
2005-07-01
The recently established international video coding standard H.264/AVC and the upcoming standard on scalable video coding (SVC) provide part of the solution to the demands for high compression ratios and heterogeneity. However, these algorithms are prohibitively complex for real-time encoding, so reducing encoding complexity, preferably in a scalable manner, is an important challenge. Motion estimation and motion compensation techniques provide significant coding gain but are the most time-intensive parts of an encoder. Designing a flexible, rate-distortion-optimized, yet computationally efficient encoder for varied applications remains a substantial research challenge. In this paper, we present a scalable motion estimation framework for complexity-distortion trade-offs. We propose a new progressive initial search (PIS) method to generate an accurate initial search point, followed by a fast search method that greatly benefits from the tighter bounds the PIS provides. Such an approach offers not only significant speedup but also optimal distortion performance for a given complexity constraint. We analyze the relationship between computational complexity and distortion (C-D) through a probabilistic distance measure extending from complexity-distortion theory. A configurable complexity quantization parameter (Q) is introduced. Simulation results demonstrate that the proposed scalable complexity-distortion framework enables a video encoder to conveniently adjust its complexity while providing the best possible service.
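The idea of an accurate initial search point followed by cheap local refinement can be illustrated with a minimal block-matching sketch. This is a generic greedy neighbourhood search over a sum-of-absolute-differences (SAD) cost, not the paper's PIS algorithm; the frame data and the predicted start point are invented for the example.

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between `block` and the reference
    window whose top-left corner is (y, x)."""
    h, w = block.shape
    return int(np.abs(ref[y:y+h, x:x+w].astype(int) - block.astype(int)).sum())

def refine_search(block, ref, start, radius=1, max_steps=8):
    """Greedy local refinement around an initial search point: repeatedly
    move to the lowest-cost neighbour until no neighbour improves
    (or the step cap is reached)."""
    h, w = block.shape
    H, W = ref.shape
    y, x = start
    best = sad(block, ref, y, x)
    for _ in range(max_steps):
        cands = [(y + dy, x + dx)
                 for dy in (-radius, 0, radius)
                 for dx in (-radius, 0, radius)
                 if 0 <= y + dy <= H - h and 0 <= x + dx <= W - w]
        cost, ny, nx = min((sad(block, ref, cy, cx), cy, cx) for cy, cx in cands)
        if (ny, nx) == (y, x):
            break
        y, x, best = ny, nx, cost
    return (y, x), best

# Reference frame with a block cut from it at (10, 12); a predicted
# initial point near the truth lets the local search land on the match.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
block = ref[10:18, 12:20]
mv, best_cost = refine_search(block, ref, start=(9, 11))
```

The better the initial point, the fewer refinement steps (and SAD evaluations) are needed, which is the complexity-distortion lever the framework exposes.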
Entropic force between biomembranes
NASA Astrophysics Data System (ADS)
Li, Long; Song, Fan
2016-10-01
The undulation force, an entropic force, stems from thermally excited fluctuations and plays a key role in the essential interactions between neighboring surfaces of objects. Although the characteristics of the undulation force have been widely studied theoretically and experimentally, the distance dependence of the force, its most fundamental characteristic, remains poorly understood. In this paper we first obtain a novel expression for the undulation force by employing elasticity and statistical mechanics and show it to be in good agreement with existing experimental results. Second, we demonstrate that the two representative forms of the undulation force proposed by Helfrich and Freund are, respectively, the upper and lower bounds of the present expression when the separation between membranes is sufficiently small; this differs intrinsically from existing results, in which Helfrich's and Freund's forms were held to be suitable only for intermediate and small separations, respectively. The investigation shows that only at sufficiently small separations does Helfrich's result describe the undulation force at large wave numbers and Freund's result the force at small wave numbers. Finally, a critical acting distance of the undulation force, beyond which the entropic force rapidly decays toward zero, is presented.
Parameterizations of Dry Deposition for the Industrial Source Complex Model
NASA Astrophysics Data System (ADS)
Wesely, M. L.; Doskey, P. V.; Touma, J. S.
2002-05-01
Improved algorithms have been developed to simulate the dry deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex model system. The dry deposition velocities are described in conventional resistance schemes, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake of gases at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. Standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed to provide a means to evaluate the role of lipid solubility on uptake by the waxy outer cuticle of vegetative plant leaves. The dry deposition velocities of particulate HAPs are simulated with a resistance scheme in which deposition velocity is described for two size modes: a fine mode with particles less than about 2.5 microns in diameter and a coarse mode with larger particles but excluding very coarse particles larger than about 10 microns in diameter. For the fine mode, the deposition velocity is calculated with a parameterization based on observations of sulfate dry deposition. For the coarse mode, a representative settling velocity is assumed. Then the total deposition velocity is estimated as the sum of the two deposition velocities weighted according to the amount of mass expected in the two modes.
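The resistance scheme and the two-mode particulate weighting described above reduce to two short formulas. The sketch below assumes the conventional series-resistance form v_d = 1/(r_a + r_b + r_c) and a mass-fraction weighting of the two particle modes; the function names and all numerical values are illustrative, not taken from the model.

```python
def deposition_velocity_gas(r_a, r_b, r_c):
    """Series-resistance scheme: the dry deposition velocity is the
    inverse of the summed aerodynamic (r_a), quasi-laminar boundary
    layer (r_b) and surface (r_c) resistances, all in s/m."""
    return 1.0 / (r_a + r_b + r_c)

def deposition_velocity_particles(v_fine, v_settle_coarse, frac_fine):
    """Two-mode particulate scheme: total deposition velocity is the
    mass-weighted sum of a fine-mode deposition velocity and a
    coarse-mode settling velocity (m/s)."""
    return frac_fine * v_fine + (1.0 - frac_fine) * v_settle_coarse

# Illustrative numbers only (assumed, not the model's values):
v_gas = deposition_velocity_gas(r_a=50.0, r_b=30.0, r_c=120.0)
v_part = deposition_velocity_particles(v_fine=0.001, v_settle_coarse=0.02,
                                       frac_fine=0.7)
```

In the scheme described by the abstract, r_c is where land use type, season, and lipid solubility of the organic HAPs would enter.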
Probabilistic Multi-Factor Interaction Model for Complex Material Behavior
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2010-01-01
Complex material behavior is represented by a single equation of product form to account for interaction among the various factors. The factors are selected by the physics of the problem and the environment that the model is to represent. For example, different factors will be required to represent temperature, moisture, erosion, corrosion, etc. It is important that the equation represent the physics of the behavior in its entirety accurately. The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the external launch tanks. The multi-factor has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points - the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated. The problem lies in how to represent the divot weight with a single equation. A unique solution to this problem is a multi-factor equation of product form. Each factor is of the following form: (1 − xi/xf)^ei, where xi is the initial value, usually at ambient conditions, xf the final value, and ei the exponent that makes the represented curve unimodal while meeting the initial and final values. The exponents are either evaluated by test data or by technical judgment. A minor disadvantage may be the selection of exponents in the absence of any empirical data. This form has been used successfully in describing the foam ejected in simulated space environmental conditions. Seven factors were required
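The product form is easy to state in code. The sketch below assumes the standard MFIM reading of each term as (1 − x_i/x_f)^e_i; the factor values in the example are hypothetical, not the divot-weight data.

```python
def mfim(factors):
    """Multi-Factor Interaction Model: a single product-form equation,
    one term (1 - x/x_f)**e per factor, where x is the current value,
    x_f the final value, and e the exponent."""
    result = 1.0
    for x, x_f, e in factors:
        result *= (1.0 - x / x_f) ** e
    return result

# Hypothetical two-factor case: at the initial condition (x = 0) every
# term equals 1, and each term decays monotonically toward 0 as its
# factor approaches the final value x_f.
at_start = mfim([(0.0, 1.0, 0.5), (0.0, 300.0, 2.0)])
midway = mfim([(0.5, 1.0, 0.5), (150.0, 300.0, 2.0)])
```

The product form is what lets every factor interact with every other multiplicatively, while each exponent only has to honor the initial and final points.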
NASA Astrophysics Data System (ADS)
Mattern, Jann Paul; Edwards, Christopher A.
2017-01-01
Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
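The contrast between uninformed random search and a simple informed search over a multi-data-type cost function can be sketched as follows. The quadratic cost function and the parameter values below are toy stand-ins for an expensive 3-D model run, not the paper's models or estimation techniques.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for an expensive simulation: the cost is minimised at an
# (unknown to the searches) "true" parameter vector.
true_p = np.array([0.3, 1.2, 0.8])

def cost(p):
    """Weighted sum of squared misfits over two hypothetical data types,
    mimicking a cost function built from multiple observation types."""
    misfit_a = np.sum((p - true_p) ** 2)          # e.g. surface data
    misfit_b = np.sum((p[:2] - true_p[:2]) ** 2)  # e.g. profile data
    return misfit_a + 0.5 * misfit_b

def random_search(n_evals, bounds=(0.0, 2.0)):
    """Uninformed baseline: best of n_evals uniform draws."""
    best_p, best_c = None, np.inf
    for _ in range(n_evals):
        p = rng.uniform(*bounds, size=3)
        c = cost(p)
        if c < best_c:
            best_p, best_c = p, c
    return best_p, best_c

def local_search(n_evals, step=0.25, bounds=(0.0, 2.0)):
    """Simple informed search: perturb the current best estimate, keep
    improvements, and shrink the step size over time."""
    p = rng.uniform(*bounds, size=3)
    c = cost(p)
    for _ in range(n_evals - 1):
        cand = np.clip(p + rng.normal(scale=step, size=3), *bounds)
        cc = cost(cand)
        if cc < c:
            p, c = cand, cc
        step *= 0.97
    return p, c

_, c_rand = random_search(100)
_, c_local = local_search(100)
```

With a budget of roughly 100 simulations, as in the abstract, the structure of the cost surface (and correlations between parameters) determines how much an informed search gains over the random baseline.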
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
... emission model is deemed valid. (b) To augment the complex emission model described at § 80.45, the... refueling VOC and toxics emissions) shall not be augmented by vehicle testing. (4) The Agency reserves the... petitions to augment the complex model defined at § 80.45 with a new parameter, the effect of the...
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2011 CFR
2011-07-01
... emission model is deemed valid. (b) To augment the complex emission model described at § 80.45, the... refueling VOC and toxics emissions) shall not be augmented by vehicle testing. (4) The Agency reserves the... petitions to augment the complex model defined at § 80.45 with a new parameter, the effect of the...
Application of surface complexation models to anion adsorption by natural materials
Technology Transfer Automated Retrieval System (TEKTRAN)
Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...
Hadaeghi, Fatemeh; Hashemi Golpayegani, Mohammad Reza; Murray, Greg
2015-07-07
Bipolar disorder is characterized by repeated erratic episodes of mania and depression, which can be understood as pathological complex-system behavior involving cognitive, affective and psychomotor disturbance. In order to illuminate dynamical aspects of the longitudinal course of the illness, we propose here a novel complex model based on the notion of competition between recurrent maps, which mathematically represent the dynamics of activation in excitatory (glutamatergic) and inhibitory (GABAergic) pathways. We assume that manic and depressive states can be considered stable sub-attractors of a dynamical system through which the mood trajectory moves. The model provides a theoretical framework which can account for a number of complex phenomena of bipolar disorder, including intermittent transitions between the two poles of the disorder, rapid and ultra-rapid cycling of episodes, and the manicogenic effects of antidepressants.
Pavinatto, Adriana; Souza, Adriano L; Delezuk, Jorge A M; Pavinatto, Felippe J; Campana-Filho, Sérgio P; Oliveira, Osvaldo N
2014-02-01
One of the major challenges in establishing the mechanisms responsible for the chitosan action in biomedical applications lies in the determination of the molecular-level interactions with the cell membrane. In this study, we probed hydrophobic interactions and H-bonding in experiments with O,O'-diacetylchitosan (DACT) and O,O'-dipropionylchitosan (DPPCT) incorporated into monolayers of distinct phospholipids, the zwitterionic dipalmitoyl phosphatidyl choline (DPPC), and the negatively charged dipalmitoyl phosphatidyl glycerol (DPPG) and dimyristoyl phosphatidic acid (DMPA). The importance of hydrophobic interactions was confirmed with the larger effects observed for DACT and DPPCT than for parent chitosan (Chi), particularly for the more hydrophobic DPPCT. Such larger effects were noted in surface pressure isotherms and elasticity of the monolayers. Since H-bonding is hampered for the chitosan derivatives, which have part of their hydroxyl groups shielded by O-acylation, these effects indicate that H-bonding does not play an important role in the chitosan-membrane interactions. Using polarization-modulated infrared reflection absorption (PM-IRRAS) spectroscopy, we found that the chitosan derivatives were incorporated into the hydrophobic chain of the phospholipids, even at high surface pressures comparable to those in a real cell membrane. Taken together, these results indicate that the chitosan derivatives containing hydrophobic moieties would probably be more efficient than parent chitosan as antimicrobial agents, where interaction with the cell membrane is crucial.
Penczek, Stanislaw; Pretula, Julia; Kaluzynski, Krzysztof
2005-01-01
Syntheses of poly(alkylene phosphates), with repeating units having two or three methylene groups and phosphoryl groups and mimicking backbones of biomacromolecules, are reviewed. Two major methods elaborated in this laboratory, namely, ring-opening polymerization and transesterification, are described. The resulting polymers were used as carriers of cations (Ca2+ and Mg2+) in membrane processes and in controlling the crystallization of CaCO3, in a process related to biomineralization.
A note on the Dirichlet problem for model complex partial differential equations
NASA Astrophysics Data System (ADS)
Ashyralyev, Allaberen; Karaca, Bahriye
2016-08-01
Complex model partial differential equations of arbitrary order are considered. The uniqueness of the Dirichlet problem is studied. It is proved that the Dirichlet problem for higher order of complex partial differential equations with one complex variable has infinitely many solutions.
Cousin, F; Gummel, J; Combet, S; Boué, F
2011-09-14
We review, based on structural information, the mechanisms involved when two nano-objects of opposite electrical charge are brought into contact, in the case of one negatively charged polyion and a compact charged one. The central case is mixtures of PSS, a strong flexible polyanion (the salt of a strong acid, with high linear charge density), and Lysozyme, a globular protein with a global positive charge. A wide, accurate and consistent set of information in different situations is available on the structure at local scales (5-1000 Å), due to the possibility of matching, the reproducibility of the system, its well-defined electrostatic features, and the well-defined structures obtained. We have related these structures to observations of the phase behavior at macroscopic scale, and to the expected mechanisms of coacervation. On the one hand, PSS/Lysozyme mixtures accurately exhibit much of what is expected in PEL/protein complexation and phase separation, as reviewed by de Kruif: under certain conditions well-defined complexes are formed before any phase separation, and they are close to neutral; even in excess of one species, complexes are only modestly charged (surface charges in PEL excess). Neutral cores attract each other to form larger objects responsible for large turbidity. They should lead the system to phase separation; this is observed in the more dilute samples, while in more concentrated ones the lack of separation in turbid samples is explained by locking effects between fractal aggregates. On the other hand, although some of the features just listed are also required for coacervation, this phase transition is not really obtained. The phase separation has all the macroscopic aspects of a fluid (undifferentiated liquid/gas phase) - solid transition, not of a fluid-fluid (liquid-liquid) one, which would correspond to real coacervation. The origin of this can be found in the interaction potential between the primary complexes formed (globules
NASA Astrophysics Data System (ADS)
Holzmann, Hubert; Massmann, Carolina
2015-04-01
Many types of hydrological models have been developed during the past decades. Most of them use a fixed design to describe the variable hydrological processes, assumed to be representative across the whole range of spatial and temporal scales. This assumption is questionable, as it is evident that runoff formation is driven by dominant processes which can vary among basins. Furthermore, model application and the interpretation of results are limited by the data available to identify the particular sub-processes, since most models are calibrated and validated only with discharge data. It can therefore be hypothesized that simpler model designs, focusing only on the dominant processes, can achieve comparable results with the benefit of fewer parameters. In the current contribution a modular model concept is introduced, which allows hydrological sub-processes to be included or neglected depending on catchment characteristics and data availability. Key elements of the process modules refer to (1) storage effects (interception, soil), (2) transfer processes (routing), (3) threshold processes (percolation, saturation overland flow) and (4) split processes (rainfall excess). Based on hydro-meteorological observations in an experimental catchment in the Slovak region of the Carpathian mountains, a comparison of several model realizations with different degrees of complexity is discussed. A special focus is given to model parameter sensitivity, estimated by a Markov chain Monte Carlo approach; the identification of dominant processes by means of Sobol's method is also introduced. It could be shown that a flexible model design, even a simple concept, can reach performance comparable and equivalent to the standard model type (HBV-type). The main benefit of the modular concept is the individual adaptation of the model structure with respect to data and process availability, and the option for a parsimonious model design.
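The modular idea in this abstract amounts to composing storage, threshold, and transfer process modules and switching individual modules on or off. The toy sketch below illustrates that design; the module equations, parameter values, and names are invented for illustration and are not the authors' model.

```python
def interception(rain, capacity=2.0):
    """Storage-type module: the canopy intercepts rain up to a fixed capacity."""
    intercepted = min(rain, capacity)
    return rain - intercepted  # throughfall reaching the soil

def soil_storage(state, infiltration, threshold=50.0, k=0.1):
    """Combined storage + threshold module: percolation starts only once the
    soil store exceeds a threshold; direct runoff is a linear-reservoir term."""
    state += infiltration
    percolation = max(0.0, state - threshold)
    runoff = k * state
    state -= percolation + runoff
    return state, runoff + percolation

def run_chain(rainfall, modules_on):
    """Run only the modules switched on, mimicking the modular design."""
    state, flows = 0.0, []
    for rain in rainfall:
        water = interception(rain) if modules_on.get("interception") else rain
        state, q = soil_storage(state, water)
        flows.append(q)
    return flows
```

Comparing `run_chain(series, {"interception": True})` against `run_chain(series, {})` is the kind of structural experiment the modular concept enables.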
Quental, Carlos; Folgado, João; Ambrósio, Jorge; Monteiro, Jacinto
2015-01-01
The inverse dynamics technique applied to musculoskeletal models, and supported by optimisation techniques, is used extensively to estimate muscle and joint reaction forces. However, the solutions of the redundant muscle force sharing problem are sensitive to the detail and modelling assumptions of the models used. This study presents four alternative biomechanical models of the upper limb with different levels of discretisation of muscles by bundles and muscle paths, and their consequences on the estimation of the muscle and joint reaction forces. The muscle force sharing problem is solved for the motions of abduction and anterior flexion, acquired using video imaging, through the minimisation of an objective function describing muscle metabolic energy consumption. While looking for the optimal solution, not only the equations of motion are satisfied but also the stability of the glenohumeral and scapulothoracic joints is preserved. The results show that a lower level of muscle discretisation provides worse estimations regarding the muscle forces. Moreover, the poor discretisation of muscles relevant to the joint in analysis limits the applicability of the biomechanical model. In this study, the biomechanical model of the upper limb describing the infraspinatus by a single bundle could not solve the complete motion of anterior flexion. Despite the small differences in the magnitude of the forces predicted by the biomechanical models with more complex muscular systems, in general, there are no significant variations in the muscular activity of equivalent muscles.
Modelling radiation fluxes in simple and complex environments: basics of the RayMan model.
Matzarakis, Andreas; Rutz, Frank; Mayer, Helmut
2010-03-01
Short- and long-wave radiation flux densities absorbed by people have a significant influence on their energy balance. The heat effect of the absorbed radiation flux densities is parameterised by the mean radiant temperature. This paper presents the physical basis of the RayMan model, which simulates the short- and long-wave radiation flux densities from the three-dimensional surroundings in simple and complex environments. RayMan has the character of a freely available radiation and human-bioclimate model. The aim of the RayMan model is to calculate radiation flux densities, sunshine duration, shadow spaces and thermo-physiologically relevant assessment indices using only a limited number of meteorological and other input data. A comparison between measured and simulated values for global radiation and mean radiant temperature shows that the simulated data closely resemble measured data.
Modelling radiation fluxes in simple and complex environments: basics of the RayMan model
NASA Astrophysics Data System (ADS)
Matzarakis, Andreas; Rutz, Frank; Mayer, Helmut
2010-03-01
Short- and long-wave radiation flux densities absorbed by people have a significant influence on their energy balance. The heat effect of the absorbed radiation flux densities is parameterised by the mean radiant temperature. This paper presents the physical basis of the RayMan model, which simulates the short- and long-wave radiation flux densities from the three-dimensional surroundings in simple and complex environments. RayMan has the character of a freely available radiation and human-bioclimate model. The aim of the RayMan model is to calculate radiation flux densities, sunshine duration, shadow spaces and thermo-physiologically relevant assessment indices using only a limited number of meteorological and other input data. A comparison between measured and simulated values for global radiation and mean radiant temperature shows that the simulated data closely resemble measured data.
An energetic comparison of different models for the oxygen evolving complex of photosystem II.
Siegbahn, Per E M
2009-12-30
The computed total energy from a cluster model DFT calculation is used to discriminate between different suggested models for the oxygen evolving complex of photosystem II. The comparison between different structures rules out several suggestions. Only one suggested structure remains.
Visualizing and modelling complex rockfall slopes using game-engine hosted models
NASA Astrophysics Data System (ADS)
Ondercin, Matthew; Hutchinson, D. Jean; Harrap, Rob
2015-04-01
Innovations in computing over the past few decades have resulted in entirely new ways to collect 3D geological data and visualize it. For example, new tools and techniques relying on high performance computing capabilities have become widely available, allowing us to model rockfalls with more attention to the complexity of the rock slope geometry and rockfall path, with significantly higher quality base data, and with more analytical options. Model results are used to design mitigation solutions, considering the potential paths of rockfall events and the energy they impart on impacted structures. Such models are currently implemented as general-purpose GIS tools and in specialized programs. These tools are used to inspect geometrical and geomechanical data, model rockfalls, and communicate results to researchers and the larger community. The research reported here explores the notion that 3D game engines provide a high speed, widely accessible platform on which to build rockfall modelling workflows and to provide a new and accessible outreach method. Taking advantage of the built-in physics capability of 3D game codes, and their ability to handle large terrains, these models are rapidly deployed and generate realistic visualizations of rockfall trajectories. Their utility in this area is as yet unproven, but preliminary research shows that they are capable of producing results comparable to existing approaches. Furthermore, modelling of case histories shows that the output matches the behaviour observed in the field. The key advantage of game-engine hosted models is their accessibility to the general public and to people with little to no knowledge of rockfall hazards. With much of the younger generation being very familiar with 3D environments such as Minecraft, the idea of a game-like simulation is intuitive and thus offers new ways to communicate with the general public. We present results from using the Unity game engine to develop 3D voxel worlds
Molecular Models of Ruthenium(II) Organometallic Complexes
ERIC Educational Resources Information Center
Coleman, William F.
2007-01-01
This article presents the featured molecules for the month of March, which appear in the paper by Ozerov, Fafard, and Hoffman, and which are related to the study of the reactions of a number of "piano stool" complexes of ruthenium(II). The synthesis of compound 2a offers students an alternative to the preparation of ferrocene if they are only…
Can Models Capture the Complexity of the Systems Engineering Process?
NASA Astrophysics Data System (ADS)
Boppana, Krishna; Chow, Sam; de Weck, Olivier L.; Lafon, Christian; Lekkakos, Spyridon D.; Lyneis, James; Rinaldi, Matthew; Wang, Zhiyong; Wheeler, Paul; Zborovskiy, Marat; Wojcik, Leonard A.
Many large-scale, complex systems engineering (SE) programs have been problematic; a few examples are listed below (Bar-Yam, 2003 and Cullen, 2004), and many others have been late, well over budget, or have failed: Hilton/Marriott/American Airlines system for hotel reservations and flights; 1988-1992; 125 million; "scrapped"
Modelling Second Language Performance: Integrating Complexity, Accuracy, Fluency, and Lexis
ERIC Educational Resources Information Center
Skehan, Peter
2009-01-01
Complexity, accuracy, and fluency have proved useful measures of second language performance. The present article will re-examine these measures themselves, arguing that fluency needs to be rethought if it is to be measured effectively, and that the three general measures need to be supplemented by measures of lexical use. Building upon this…
Carotenoid binding to proteins: Modeling pigment transport to lipid membranes.
Reszczynska, Emilia; Welc, Renata; Grudzinski, Wojciech; Trebacz, Kazimierz; Gruszecki, Wieslaw I
2015-10-15
Carotenoid pigments play numerous important physiological functions in the human organism. Particularly special is the role of lutein and zeaxanthin in the retina of the eye, and in particular in its central part, the macula lutea. In the retina, carotenoids can be directly present in the lipid phase of the membranes or remain bound to protein-pigment complexes. In this work we address the problem of binding of carotenoids to proteins and the possible role of such structures in pigment transport to lipid membranes. The interaction of three carotenoids (beta-carotene, lutein and zeaxanthin) with two proteins, bovine serum albumin and glutathione S-transferase (GST), was investigated with molecular spectroscopy techniques: UV-Vis absorption, circular dichroism and Fourier transform infrared (FTIR) spectroscopy. The interaction of the pigment-protein complexes with model lipid bilayers formed of egg yolk phosphatidylcholine was investigated with FTIR, Raman imaging of liposomes, and an electrophysiological technique in planar lipid bilayer models. The results show that in all the protein and pigment cases studied, carotenoids bind to the protein and the complexes formed can interact with membranes. This means that protein-carotenoid complexes are capable of playing a physiological role in pigment transport to biomembranes.
First results from the International Urban Energy Balance Model Comparison: Model Complexity
NASA Astrophysics Data System (ADS)
Blackett, M.; Grimmond, S.; Best, M.
2009-04-01
A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets but, to date, no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and will assist in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes), which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run
NASA Astrophysics Data System (ADS)
Courtney, Owen T.; Bianconi, Ginestra
2016-06-01
Simplicial complexes are generalized network structures able to encode interactions occurring between more than two nodes. Simplicial complexes describe a large variety of complex interacting systems ranging from brain networks to social and collaboration networks. Here we characterize the structure of simplicial complexes using their generalized degrees that capture fundamental properties of one, two, three, or more linked nodes. Moreover, we introduce the configuration model and the canonical ensemble of simplicial complexes, enforcing, respectively, the sequence of generalized degrees of the nodes and the sequence of the expected generalized degrees of the nodes. We evaluate the entropy of these ensembles, finding the asymptotic expression for the number of simplicial complexes in the configuration model. We provide the algorithms for the construction of simplicial complexes belonging to the configuration model and the canonical ensemble of simplicial complexes. We give an expression for the structural cutoff of simplicial complexes that for simplicial complexes of dimension d =1 reduces to the structural cutoff of simple networks. Finally, we provide a numerical analysis of the natural correlations emerging in the configuration model of simplicial complexes without structural cutoff.
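The generalized degree discussed in this abstract counts, for each δ-dimensional face, how many of the complex's top simplices contain it. A minimal sketch follows; representing the complex as a list of its facets is an assumption made for illustration.

```python
from itertools import combinations

def generalized_degrees(simplices, delta):
    """Map each delta-dimensional face (a (delta+1)-tuple of nodes) to the
    number of listed simplices containing it: its generalized degree."""
    degrees = {}
    for simplex in simplices:
        for face in combinations(sorted(simplex), delta + 1):
            degrees[face] = degrees.get(face, 0) + 1
    return degrees
```

For two triangles sharing an edge, that shared edge has generalized degree 2 while every other edge has degree 1, which is the kind of degree sequence the configuration-model ensemble would enforce.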
The Creation of Surrogate Models for Fast Estimation of Complex Model Outcomes.
Pruett, W Andrew; Hester, Robert L
2016-01-01
A surrogate model is a black box model that reproduces the output of another more complex model at a single time point. This is to be distinguished from the method of surrogate data, used in time series. The purpose of a surrogate is to reduce the time necessary for a computation at the cost of rigor and generality. We describe a method of constructing surrogates in the form of support vector machine (SVM) regressions for the purpose of exploring the parameter space of physiological models. Our focus is on the methodology of surrogate creation and accuracy assessment in comparison to the original model. This is done in the context of a simulation of hemorrhage in one model, "Small", and renal denervation in another, HumMod. In both cases, the surrogate predicts the drop in mean arterial pressure following the intervention. We asked three questions concerning surrogate models: (1) how many training examples are necessary to obtain an accurate surrogate, (2) is surrogate accuracy homogeneous, and (3) how much can computation time be reduced when using a surrogate. We found the minimum training set size that would guarantee maximal accuracy was widely variable, but could be algorithmically generated. The average error for the pressure response to the protocols was -0.05 ± 2.47 mmHg in Small and -0.3 ± 3.94 mmHg in HumMod. In the Small model, error grew with the actual pressure drop, and in HumMod, larger pressure drops were overestimated by the surrogates. Surrogate use resulted in a six-order-of-magnitude decrease in computation time. These results suggest surrogate modeling is a valuable tool for generating predictions of an integrative model's behavior on densely sampled subsets of its parameter space.
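The paper builds its surrogates with SVM regression; as a dependency-free stand-in, the sketch below fits a one-parameter linear least-squares surrogate to precomputed (parameter, outcome) pairs. The functional form and the training data are assumptions for illustration, not the authors' method.

```python
def fit_surrogate(xs, ys):
    """Ordinary least squares for a single input parameter: y ≈ a*x + b.

    xs : parameter values at which the expensive model was run
    ys : corresponding model outputs (e.g. drop in mean arterial pressure)
    Returns a cheap predictor standing in for the full simulation.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return lambda x: a * x + b
```

Once fitted, evaluating the surrogate is essentially free, which is how the huge reduction in computation time relative to rerunning the full model arises.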
Complex Modelling Scheme Of An Additive Manufacturing Centre
NASA Astrophysics Data System (ADS)
Popescu, Liliana Georgeta
2015-09-01
This paper presents a modelling scheme supporting the development of a model of an additive manufacturing research centre and its processes. The modelling is performed using IDEF0, with the resulting process model representing the basic processes required to develop such a centre in any university. While the activities presented in this study are those recommended in general, changes may occur in the specific situations existing in a research centre.
NASA Astrophysics Data System (ADS)
de Boer, H. J.; Dekker, S. C.; Wassen, M. J.
2009-04-01
Earth System Models of Intermediate Complexity (EMICs) are popular tools for palaeo climate simulations. Recent studies applied these models in comparison to terrestrial proxy records and aimed to reconstruct changes in seasonal climate forced by altered ocean circulation patterns. To strengthen this powerful methodology, we argue that the magnitude of the simulated atmospheric changes should be considered in relation to the internal variability of both the climate system and the intermediate complexity model. To attribute a shift in modelled climate to reality, this 'signal' should be detectable above the 'noise' related to the internal variability of the climate system and the internal variability of the model. Both noise and climate signals vary over the globe and change with the seasons. We therefore argue that spatially explicit fields of noise should be considered in relation to the strengths of the simulated signals at a seasonal timescale. We approximated total noise on terrestrial temperature and precipitation from a 29-member simulation with the EMIC PUMA-2 and global temperature and precipitation datasets. To illustrate this approach, we calculate signal-to-noise ratios (SNRs) in terrestrial temperature and precipitation for simulations of an El Niño warm event, a phase change in the Atlantic Meridional Oscillation (AMO) and a Heinrich cooling event. The results of the El Niño and AMO simulations indicate that the chance to accurately detect a climate signal increases with increasing SNRs. Considering the regions and seasons with the highest SNRs, the simulated El Niño anomalies show good agreement with observations (r² = 0.8 and 0.6 for temperature and precipitation at SNRs > 4). The AMO signals rarely surpass the noise levels and remain mostly undetected. The simulation of a Heinrich event predicts the highest SNRs for temperature (up to 10) over Arabia and Russia during Boreal winter and spring. Highest SNRs for precipitation (up to 12) are predicted over
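The detection criterion described here compares a simulated anomaly field against an estimate of internal variability, grid cell by grid cell. A minimal sketch follows; the flat-list field representation and the threshold value of 2 are assumptions, not the paper's choices.

```python
def detectable(signal, noise_std, threshold=2.0):
    """Per grid cell, report whether |signal| / noise exceeds a threshold.

    signal    : simulated anomaly at each grid cell (e.g. temperature change)
    noise_std : internal-variability standard deviation at each grid cell
    """
    mask = []
    for s, n in zip(signal, noise_std):
        snr = abs(s) / n if n > 0 else float("inf")
        mask.append(snr > threshold)
    return mask
```

Only cells flagged `True` would be compared against proxy records, which is the paper's rationale for restricting model-proxy comparison to high-SNR regions and seasons.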
NASA Technical Reports Server (NTRS)
Befrui, Bizhan A.
1995-01-01
This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.
NASA Astrophysics Data System (ADS)
Giannakis, Dimitrios; Majda, Andrew J.; Horenko, Illia
2012-10-01
Many problems in complex dynamical systems involve metastable regimes despite nearly Gaussian statistics with underlying dynamics that is very different from the more familiar flows of molecular dynamics. There is significant theoretical and applied interest in developing systematic coarse-grained descriptions of the dynamics, as well as assessing their skill for both short- and long-range prediction. Clustering algorithms, combined with finite-state processes for the regime transitions, are a natural way to build such models objectively from data generated by either the true model or an imperfect model. The main theme of this paper is the development of new practical criteria to assess the predictability of regimes and the predictive skill of such coarse-grained approximations through empirical information theory in stationary and periodically-forced environments. These criteria are tested on instructive idealized stochastic models utilizing K-means clustering in conjunction with running-average smoothing of the training and initial data for forecasts. A perspective on these clustering algorithms is explored here with independent interest, where improvement in the information content of finite-state partitions of phase space is a natural outcome of low-pass filtering through running averages. In applications with time-periodic equilibrium statistics, recently developed finite-element, bounded-variation algorithms for nonstationary autoregressive models are shown to substantially improve predictive skill beyond standard autoregressive models.
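The abstract highlights running-average smoothing as a low-pass filter applied to training and initial data before clustering. A minimal trailing-window version is sketched below; the window convention (trailing rather than centered) is an assumption for illustration.

```python
def running_average(series, window):
    """Low-pass filter a time series with a trailing running mean.

    Each output value averages the current sample and up to (window - 1)
    preceding samples, shortening the window at the start of the series.
    """
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        chunk = series[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

The smoothed series, rather than the raw one, would then be fed to K-means so that the finite-state partition reflects slow regime behaviour instead of fast fluctuations.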
Modelling radiation fluxes in simple and complex environments--application of the RayMan model.
Matzarakis, Andreas; Rutz, Frank; Mayer, Helmut
2007-03-01
The most important meteorological parameter affecting the human energy balance during sunny weather conditions is the mean radiant temperature T(mrt). It considers the uniform temperature of a surrounding surface giving off blackbody radiation, which results in the same energy gain of a human body given the prevailing radiation fluxes. This energy gain usually varies considerably in open space conditions. In this paper, the model 'RayMan', used for the calculation of short- and long-wave radiation fluxes on the human body, is presented. The model, which takes complex urban structures into account, is suitable for several applications in urban areas such as urban planning and street design. The final output of the model is, however, the calculated T(mrt), which is required in the human energy balance model, and thus also for the assessment of the urban bioclimate, with the use of thermal indices such as predicted mean vote (PMV), physiologically equivalent temperature (PET) and standard effective temperature (SET*). The model has been developed based on the German VDI-Guidelines 3789, Part II (environmental meteorology, interactions between atmosphere and surfaces; calculation of short- and long-wave radiation) and VDI-3787 (environmental meteorology, methods for the human-biometeorological evaluation of climate and air quality for urban and regional planning. Part I: climate). The validation of the results of the RayMan model agrees with similar results obtained from experimental studies.
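The mean radiant temperature defined above inverts the Stefan-Boltzmann law on the total absorbed radiation flux density. The sketch below shows only that final conversion step, under assumed values; RayMan's full formulation (directional weighting of the individual short- and long-wave fluxes per VDI 3789) is more involved, and the emissivity value here is an assumption.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def mean_radiant_temperature(absorbed_flux, emissivity=0.97):
    """Convert the total absorbed radiation flux density (W m^-2) into the
    mean radiant temperature (deg C) of an equivalent blackbody enclosure."""
    t_kelvin = (absorbed_flux / (emissivity * SIGMA)) ** 0.25
    return t_kelvin - 273.15
```

In a full implementation, `absorbed_flux` would be the weighted sum of short- and long-wave flux densities from all directions of the three-dimensional surroundings.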
Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.
2013-01-01
Isothermal Titration Calorimetry, ITC, is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combination of equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n-independent binding sites. More complex models for example, three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding need to be developed on a case by case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder
NASA Astrophysics Data System (ADS)
Lee, Szu-Wei; Kuo, C.-C. Jay
2007-09-01
One way to save power in the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes will have an impact on the number of quantized transformed coefficients (QTCs) and motion vectors (MVs) and, consequently, the complexity of entropy decoding, an encoder equipped with a complexity model can estimate the complexity of entropy decoding and choose the coding mode that yields the best tradeoff between rate, distortion and decoding complexity. The complexity model consists of two parts: one for source data (i.e. QTCs) and the other for header data (i.e. the macro-block (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of the QTCs and associated MVs, which is verified experimentally. The proposed model provides good estimates for a variety of bit streams. Practical applications of this complexity model are also discussed.
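The two-part structure of the complexity model (a source-data term over QTCs plus a header-data term over the MB type and MVs) can be illustrated as a linear cost per macroblock. The coefficient values below are invented placeholders, not measured constants from the paper.

```python
def mb_decoding_complexity(n_qtc, n_mv, c_qtc=1.0, c_mv=0.8, c_mbtype=2.0):
    """Estimated CABAC decoding cost of one macroblock as a function of its
    quantized transformed coefficients (QTCs) and motion vectors (MVs).
    Coefficients are hypothetical per-symbol costs."""
    source_cost = c_qtc * n_qtc            # entropy-decoding the residual data
    header_cost = c_mbtype + c_mv * n_mv   # MB type and motion information
    return source_cost + header_cost

def frame_decoding_complexity(macroblocks):
    """Sum the per-MB estimates; an encoder could use this total when choosing
    coding modes to trade rate/distortion against decoder power."""
    return sum(mb_decoding_complexity(q, m) for q, m in macroblocks)
```

In practice the coefficients would be calibrated by profiling a real decoder, since the per-symbol cost of CABAC decoding depends on the target hardware.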
The Creation of Surrogate Models for Fast Estimation of Complex Model Outcomes
Pruett, W. Andrew; Hester, Robert L.
2016-01-01
A surrogate model is a black-box model that reproduces the output of another, more complex model at a single time point. This is to be distinguished from the method of surrogate data used in time series analysis. The purpose of a surrogate is to reduce the time necessary for a computation at the cost of rigor and generality. We describe a method of constructing surrogates in the form of support vector machine (SVM) regressions for the purpose of exploring the parameter space of physiological models. Our focus is on the methodology of surrogate creation and on accuracy assessment in comparison with the original model. This is done in the context of a simulation of hemorrhage in one model, "Small", and renal denervation in another, HumMod. In both cases, the surrogate predicts the drop in mean arterial pressure following the intervention. We asked three questions concerning surrogate models: (1) how many training examples are necessary to obtain an accurate surrogate, (2) is surrogate accuracy homogeneous, and (3) how much can computation time be reduced when using a surrogate. We found that the minimum training set size that would guarantee maximal accuracy was widely variable, but could be algorithmically generated. The average error for the pressure response to the protocols was -0.05 ± 2.47 mmHg in Small and -0.3 ± 3.94 mmHg in HumMod. In the Small model, error grew with the actual pressure drop, and in HumMod, larger pressure drops were overestimated by the surrogates. Surrogate use resulted in a six-order-of-magnitude decrease in computation time. These results suggest surrogate modeling is a valuable tool for generating predictions of an integrative model's behavior on densely sampled subsets of its parameter space. PMID:27258010
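Since Small and HumMod are not runnable here, the sketch below substitutes a toy one-parameter "pressure drop" function for the physiological model; it illustrates the general recipe (train an SVM regression on sampled model runs, then predict cheaply), assuming scikit-learn's SVR with arbitrary hyperparameters:

```python
import numpy as np
from sklearn.svm import SVR

def slow_model(x):
    """Stand-in for an expensive physiological simulation (invented)."""
    return -30.0 * np.tanh(x) + 2.0 * x  # "pressure drop" vs. one parameter

rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, (200, 1))      # sampled parameter values
y_train = slow_model(X_train.ravel())       # expensive model evaluations

# the surrogate: an SVM regression trained on the sampled runs
surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.1).fit(X_train, y_train)

X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
err = surrogate.predict(X_test) - slow_model(X_test.ravel())
print(round(float(np.abs(err).mean()), 3))
```

In the paper's workflow the training examples are full simulation runs, so the speedup comes from replacing each run with a single kernel evaluation.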
A hybridization model for the plasmon response of complex nanostructures
NASA Astrophysics Data System (ADS)
Prodan, Emil; Radloff, Corey; Halas, Naomi; Nordlander, Peter
2004-03-01
We discuss a simple and intuitive method, an electromagnetic analog of molecular orbital theory, for describing the plasmon response of complex nanostructures of arbitrary shape (Science 302 (2003) 419-422). The method expresses the plasmon response of complex or composite nanoparticles as the result of the interaction or "hybridization" of elementary plasmons supported by nanostructures of elementary geometries. As an example, the approach is applied to the important cases of metallic nanoshells and concentric multishell structures ("nanomatryushkas"). For the nanoshell, the plasmons can be described as resulting from the interaction between the cavity plasmon localized on the inner surface of the shell and a solid-sphere plasmon localized on the outer surface of the shell. For the multishell structure, the plasmons can be viewed as resulting from the hybridization of the individual nanoshell plasmons on the different metallic shells. Work supported by ARO, TATP and the Robert A. Welch Foundation.
Stability and complexity in model meta-ecosystems.
Gravel, Dominique; Massol, François; Leibold, Mathew A
2016-08-24
The diversity of life and its organization in networks of interacting species has been a long-standing theoretical puzzle for ecologists. Ever since May's provocative paper challenging whether 'large complex systems [are] stable' various hypotheses have been proposed to explain when stability should be the rule, not the exception. Spatial dynamics may be stabilizing and thus explain high community diversity, yet existing theory on spatial stabilization is limited, preventing comparisons of the role of dispersal relative to species interactions. Here we incorporate dispersal of organisms and material into stability-complexity theory. We find that stability criteria from classic theory are relaxed in direct proportion to the number of ecologically distinct patches in the meta-ecosystem. Further, we find the stabilizing effect of dispersal is maximal at intermediate intensity. Our results highlight how biodiversity can be vulnerable to factors, such as landscape fragmentation and habitat loss, that isolate local communities.
40 CFR 80.45 - Complex emissions model.
Code of Federal Regulations, 2012 CFR
2012-07-01
... in terms of weight percent oxygen; ETH = Ethanol content of the target fuel in terms of weight percent... Fuel parameter ranges (Phase I low end / high end, Phase II low end / high end): SUL 10.0 / 450.0, 10.0 / 450.0; OLE 3.77 / 19.0, 3.77 / 19... form of alcohols which are more complex or have higher molecular weights than ethanol shall...
40 CFR 80.45 - Complex emissions model.
Code of Federal Regulations, 2014 CFR
2014-07-01
... in terms of weight percent oxygen; ETH = Ethanol content of the target fuel in terms of weight percent... Fuel parameter ranges (Phase I low end / high end, Phase II low end / high end): SUL 10.0 / 450.0, 10.0 / 450.0; OLE 3.77 / 19.0, 3.77 / 19... form of alcohols which are more complex or have higher molecular weights than ethanol shall...
NREL's System Advisor Model Simplifies Complex Energy Analysis (Fact Sheet)
Not Available
2011-10-01
The energy market is diversifying. In addition to traditional power sources, decision makers can choose among solar, wind, and geothermal technologies as well. Each of these technologies has complex performance characteristics and economics that vary with location and other project specifics, making it difficult to analyze the viability of such projects. But that analysis is easier now, thanks to the National Renewable Energy Laboratory (NREL).
A multi-element cosmological model with a complex space-time topology
NASA Astrophysics Data System (ADS)
Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.
2015-02-01
Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Nordström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.
Design of Low Complexity Model Reference Adaptive Controllers
NASA Technical Reports Server (NTRS)
Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan
2012-01-01
Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, which only makes the testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers share the same basic design but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
A radio-frequency sheath model for complex waveforms
Turner, M. M.; Chabert, P.
2014-04-21
Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.
Network models. Comment on "Control profiles of complex networks".
Campbell, Colin; Shea, Katriona; Albert, Réka
2014-10-31
Ruths and Ruths (Reports, 21 March 2014, p. 1373) find that existing synthetic random network models fail to generate control profiles that match those found in real networks. Here, we show that a straightforward extension of the Barabási-Albert model allows the control profile to be "tuned" across the control profile space, permitting more meaningful control profile analyses of real networks.
A Complex Network Approach to Distributional Semantic Models.
Utsumi, Akira
2015-01-01
A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties of the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models.
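The proposed growth mechanism (preferential plus random attachment) can be caricatured in a few lines; the sketch below is not the authors' implementation, and its size, attachment count, and mixing probability are arbitrary placeholders:

```python
import random

def grow_network(n, m=2, p_pref=0.7, seed=0):
    """Grow a graph: each new node links to m existing nodes, chosen
    preferentially (prob. p_pref) or uniformly at random (1 - p_pref)."""
    rng = random.Random(seed)
    edges = [(0, 1)]              # seed graph: a single edge
    stubs = [0, 1]                # each node repeated once per unit of degree
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            if rng.random() < p_pref:
                targets.add(rng.choice(stubs))   # preferential: degree-weighted
            else:
                targets.add(rng.randrange(new))  # random: uniform over nodes
        for t in targets:
            edges.append((new, t))
            stubs += [new, t]
    return edges

edges = grow_network(2000)
deg = {}
for u, v in edges:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
print(max(deg.values()))  # hubs emerge: heavy-tailed degree distribution
```

With p_pref = 1 this reduces to Barabási-Albert growth; lowering p_pref flattens the tail, which is one way to "tune" the degree distribution toward the truncated power law the paper reports.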
NASA Astrophysics Data System (ADS)
Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.
2014-09-01
Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.
Reduced complexity structural modeling for automated airframe synthesis
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1987-01-01
A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional-cantilever and joined wing configurations.
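The fully stressed design rule underlying the rapid weight estimate is simple to state: scale each member's area by the ratio of its stress to the allowable stress. The sketch below (a statically determinate toy case with made-up member forces, not the wing synthesis program) shows the iteration:

```python
def fully_stressed(areas, forces, sigma_allow=100.0, tol=1e-9, max_iter=50):
    """Fully stressed design: resize each member by A_new = A_old * (sigma / sigma_allow).
    For a statically determinate structure the member forces are fixed,
    so the iteration converges to A = F / sigma_allow."""
    for _ in range(max_iter):
        stresses = [F / A for F, A in zip(forces, areas)]
        new = [A * s / sigma_allow for A, s in zip(areas, stresses)]
        if max(abs(a, ) if False else abs(a - b) for a, b in zip(new, areas)) < tol:
            return new
        areas = new
    return areas

areas = fully_stressed([1.0, 1.0, 1.0], forces=[500.0, 250.0, 100.0])
print(areas)  # → [5.0, 2.5, 1.0]
```

For a determinate structure the forces are independent of sizing and one pass converges; redundant structures (like the built-up wing box) require re-analysis of member forces on each cycle, which is where the cheap equivalent-beam model pays off.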
Adaptive tracking for complex systems using reduced-order models
NASA Technical Reports Server (NTRS)
Carignan, Craig R.
1990-01-01
Reduced-order models are considered in the context of parameter adaptive controllers for tracking workspace trajectories. A dual-arm manipulation task is used to illustrate the methodology and provide simulation results. A parameter adaptive controller is designed to track a payload trajectory using a four-parameter model instead of the full-order, nine-parameter model. Several simulations with different payload-to-arm mass ratios are used to illustrate the capabilities of the reduced-order model in tracking the desired trajectory.
NASA Astrophysics Data System (ADS)
Huang, X.; Bandilla, K.; Celia, M. A.; Bachu, S.
2013-12-01
Geological carbon sequestration can significantly contribute to climate-change mitigation only if it is deployed at a very large scale. This means that injection scenarios must occur, and be analyzed, at the basin scale. Various mathematical models of different complexity may be used to assess the fate of injected CO2 and/or resident brine. These models span the range from multi-dimensional, multi-phase numerical simulators to simple single-phase analytical solutions. In this study, we consider a range of models, all based on vertically-integrated governing equations, to predict the basin-scale pressure response to specific injection scenarios. The Canadian section of the Basal Aquifer is used as a test site to compare the different modeling approaches. The model domain covers an area of approximately 811,000 km2, and the total injection rate is 63 Mt/yr, corresponding to 9 locations where large point sources have been identified. Predicted areas of critical pressure exceedance are used as a comparison metric among the different modeling approaches. Comparison of the results shows that single-phase numerical models may be good enough to predict the pressure response over a large aquifer; however, a simple superposition of semi-analytical or analytical solutions is not sufficiently accurate because spatial variability of formation properties plays an important role in the problem, and these variations are not captured properly with simple superposition. We consider two different injection scenarios: injection at the source locations and injection at locations with more suitable aquifer properties. Results indicate that in formations with significant spatial variability of properties, strong variations in injectivity among the different source locations can be expected, leading to the need to transport the captured CO2 to suitable injection locations, thereby necessitating development of a pipeline network. We also consider the sensitivity of porosity and
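The "simple superposition of analytical solutions" benchmarked above can be sketched with the classic Theis solution for single-phase radial flow; the well coordinates, rates, transmissivity, and storativity below are invented placeholders, not Basal Aquifer values:

```python
import numpy as np
from scipy.special import exp1

def theis_dp(r, t, Q, T=1e-3, S=1e-4):
    """Theis pressure buildup (m of head) at radius r (m), time t (s),
    for injection rate Q (m^3/s); T = transmissivity, S = storativity."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# superpose three hypothetical injectors: (x, y, rate)
wells = [(0.0, 0.0, 0.05), (5e4, 0.0, 0.03), (0.0, 8e4, 0.04)]
t = 10 * 365.25 * 86400  # 10 years of injection

def buildup(x, y):
    """Linear single-phase flow: pressure changes from each well simply add."""
    dp = 0.0
    for xw, yw, Q in wells:
        r = max(float(np.hypot(x - xw, y - yw)), 0.1)  # avoid r = 0 at a well
        dp += theis_dp(r, t, Q)
    return dp

print(round(buildup(1e4, 1e4), 2))
```

The superposition step is exact only for a homogeneous linear problem, which is precisely why the authors find it insufficient when formation properties vary spatially.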
NASA Astrophysics Data System (ADS)
Brodsky, Yu. I.
2015-01-01
The work is devoted to the application of Bourbaki's structure theory to substantiate the synthesis of simulation models of complex multicomponent systems, where every component may be a complex system itself. Applying Bourbaki's structure theory offers a new approach to the design and computer implementation of simulation models of complex multicomponent systems: model synthesis and model-oriented programming. It differs from the traditional object-oriented approach. The central concept of this new approach, and at the same time the basic building block for the construction of more complex structures, is the model-component. A model-component is endowed with a more complicated structure than, for example, the object in object-oriented analysis. This structure provides the model-component with independent behavior: the ability to respond in a standard way to standard requests from its internal and external environment. At the same time, the computer implementation of a model-component's behavior is invariant under the integration of model-components into complexes. This allows one, first, to construct fractal models of any complexity and, second, to implement the computational process for such constructions uniformly, by a single universal program. In addition, the proposed paradigm allows one to exclude imperative programming and to generate computer code with a high degree of parallelism.
The effects of numerical-model complexity and observation type on estimated porosity values
NASA Astrophysics Data System (ADS)
Starn, J. Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.
2015-09-01
The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model coupled with an advective transport simulation of the Salt Lake Valley, Utah (USA), is adapted for advective transport, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a "complex" highly parameterized porosity field and a "simple" parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration also are discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by the complex and simple models are generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.
Modeling Conditional Probabilities in Complex Educational Assessments. CSE Technical Report.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Almond, Russell; Dibello, Lou; Jenkins, Frank; Steinberg, Linda; Yan, Duanli; Senturk, Deniz
An active area in psychometric research is coordinated task design and statistical analysis built around cognitive models. Compared with classical test theory and item response theory, there is often less information from observed data about the measurement-model parameters. On the other hand, there is more information from the grounding…
West Closure Complex Pump Intake Model, New Orleans, Louisiana
2013-02-01
Abstract: The WCC pump station intake was evaluated for intake performance using a 1:20-scale model of the approach channel, intake bays, and contracted section. (Recovered figure captions: Figure 15, Top view of contracted section and FSI in original design; Figure 16, Model trash rack.)
Is there hope for multi-site complexation modeling?
Bickmore, Barry R.; Rosso, Kevin M.; Mitchell, S. C.
2006-06-06
It has been shown here that the standard formulation of the MUSIC model does not deliver the molecular-scale insight into oxide surface reactions that it promises. The model does not properly divide long-range electrostatic and short-range contributions to acid-base reaction energies, and it does not treat solvation in a physically realistic manner. However, even if the current MUSIC model does not succeed in its ambitions, those ambitions are still reasonable. It was a pioneering attempt, in that Hiemstra and coworkers recognized that intrinsic equilibrium constants, from which the effects of long-range electrostatics have been removed, must be theoretically constrained prior to model fitting if there is to be any hope of obtaining molecular-scale insights from SCMs. We have also shown, on the other hand, that it may be premature to dismiss all valence-based models of acidity. Not only can some such models accurately predict intrinsic acidity constants, but they can also now be linked to the results of molecular dynamics simulations of solvated systems. Significant challenges remain for those interested in creating SCMs that are accurate at the molecular scale. Only after all model parameters can be predicted from theory, and the models validated against titration data, will we begin to have some confidence that we really are describing the chemical systems in question adequately.
Evolving complex dynamics in electronic models of genetic networks
NASA Astrophysics Data System (ADS)
Mason, Jonathan; Linsay, Paul S.; Collins, J. J.; Glass, Leon
2004-09-01
Ordinary differential equations are often used to model the dynamics and interactions in genetic networks. In one particularly simple class of models, the model genes control the production rates of products of other genes by a logical function, resulting in piecewise linear differential equations. In this article, we construct and analyze an electronic circuit that models this class of piecewise linear equations. This circuit combines CMOS logic and RC circuits to model the logical control of the increase and decay of protein concentrations in genetic networks. We use these electronic networks to study the evolution of limit cycle dynamics. By mutating the truth tables giving the logical functions for these networks, we evolve the networks to obtain limit cycle oscillations of desired period. We also investigate the fitness landscapes of our networks to determine the optimal mutation rate for evolution.
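The piecewise-linear ("Glass network") equations described above are straightforward to integrate directly; the sketch below simulates a three-gene repressive ring (my own choice of logic, not one of the paper's evolved circuits) with Euler steps and counts threshold crossings to confirm sustained oscillation:

```python
import numpy as np

def simulate_glass(n_steps=20000, dt=0.001, theta=0.5):
    """Three-gene repressive ring: dx_i/dt = [x_{i-1} < theta] - x_i,
    i.e. each gene's production is the logical NOT of its predecessor."""
    x = np.array([0.9, 0.1, 0.2])
    traj = []
    for _ in range(n_steps):
        X = (x > theta).astype(float)   # Boolean gene states
        target = 1.0 - np.roll(X, 1)    # gene i repressed by gene i-1
        x = x + dt * (target - x)       # piecewise-linear dynamics, Euler step
        traj.append(x.copy())
    return np.array(traj)

traj = simulate_glass()
# sustained oscillation: gene 0 keeps crossing threshold in the second half
states = (traj[10000:, 0] > 0.5).astype(int)
crossings = int(np.sum(np.diff(states) != 0))
print(crossings)
```

An odd ring of repressors is a standard oscillating logic in this model class; the paper's circuits realize the same piecewise-linear dynamics in CMOS/RC hardware and evolve the truth tables instead of fixing them.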
Modelling the Complex Conductivity of Charged Porous Media using The Grain Polarization Model
NASA Astrophysics Data System (ADS)
Leroy, P.; Revil, A.; Jougnot, D.; Li, S.
2015-12-01
The low-frequency complex conductivity response of charged porous media reflects a combination of three polarization processes occurring in different frequency ranges. The first is membrane polarization, the mechanism associated with the back-diffusion of salt ions through different pore spaces of the porous material (ion-selective zones and zones with no selectivity). This process generally occurs in the lowest frequency range, typically [mHz, Hz], because the polarization takes place over different pore spaces (the relaxation frequency is inversely proportional to the length scale of the polarization process). The second is the electrochemical polarization of the electrical double layer coating the surface of the grains. In the grain polarization model, the diffuse layer is assumed not to polarize because it forms a continuum in the porous medium, whereas the compact Stern layer is assumed to polarize because it is discontinuous over multiple grains. The electrochemical polarization of the Stern layer typically occurs in the frequency range [Hz, kHz]. The last process is the Maxwell-Wagner polarization mechanism, caused by the formation of field-induced free charge distributions near the interfaces between the phases of the medium. In this presentation, the grain polarization model based on the O'Konski, Schwarz, Schurr and Sen theories, developed further by Revil and co-workers, is presented. This spectral induced polarization model has been successfully applied to describe the complex conductivity responses of glass beads, sands, clays, clay-sand mixtures and other minerals. The limits of this model and future developments will also be presented.
Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.
2015-01-01
Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model can efficiently replicate the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method for replicating complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
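The Volterra functional power series at the heart of the IO model is easy to illustrate in discrete time; the kernels below are toy placeholders, not the fitted glutamatergic-synapse kernels, and serve only to show how the second-order kernel adds a supra-linear response to paired spikes:

```python
import numpy as np

def volterra_2nd(x, k0, k1, k2):
    """Discrete Volterra series truncated at 2nd order:
    y(n) = k0 + sum_t k1(t) x(n-t) + sum_{t1,t2} k2(t1,t2) x(n-t1) x(n-t2).
    k1: (M,) first-order kernel; k2: (M, M) second-order kernel."""
    M, N = len(k1), len(x)
    xpad = np.concatenate([np.zeros(M - 1), x])
    y = np.full(N, k0, dtype=float)
    for n in range(N):
        w = xpad[n:n + M][::-1]          # x(n), x(n-1), ..., x(n-M+1)
        y[n] += k1 @ w + w @ k2 @ w
    return y

# toy kernels: decaying 1st-order response, weak 2nd-order interaction
M = 8
k1 = np.exp(-np.arange(M) / 2.0)
k2 = 0.05 * np.outer(k1, k1)
spikes = np.zeros(50)
spikes[[5, 20, 22]] = 1.0                # input spike train: lone + paired spikes
y = volterra_2nd(spikes, 0.0, k1, k2)
print(float(y[5]), float(y[22]))
```

The response to the paired spikes (indices 20, 22) exceeds the sum-of-singles prediction because of the cross terms in k2, which is exactly the kind of history-dependent nonlinearity a simple exponential synapse cannot express.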
Non-consensus Opinion Models on Complex Networks
NASA Astrophysics Data System (ADS)
Li, Qian; Braunstein, Lidia A.; Wang, Huijuan; Shao, Jia; Stanley, H. Eugene; Havlin, Shlomo
2013-04-01
Social dynamic opinion models have been widely studied to understand how interactions among individuals cause opinions to evolve. Most opinion models that utilize spin interaction models usually produce a consensus steady state in which only one opinion exists. Because in reality different opinions usually coexist, we focus on non-consensus opinion models in which above a certain threshold two opinions coexist in a stable relationship. We revisit and extend the non-consensus opinion (NCO) model introduced by Shao et al. (Phys. Rev. Lett. 103:01870, 2009). The NCO model in random networks displays a second order phase transition that belongs to regular mean field percolation and is characterized by the appearance (above a certain threshold) of a large spanning cluster of the minority opinion. We generalize the NCO model by adding a weight factor W to each individual's original opinion when determining their future opinion (NCO W model). We find that as W increases the minority opinion holders tend to form stable clusters with a smaller initial minority fraction than in the NCO model. We also revisit another non-consensus opinion model based on the NCO model, the inflexible contrarian opinion (ICO) model (Li et al. in Phys. Rev. E 84:066101, 2011), which introduces inflexible contrarians to model the competition between two opinions in a steady state. Inflexible contrarians are individuals that never change their original opinion but may influence the opinions of others. To place the inflexible contrarians in the ICO model we use two different strategies, random placement and one in which high-degree nodes are targeted. The inflexible contrarians effectively decrease the size of the largest rival-opinion cluster in both strategies, but the effect is more pronounced under the targeted method. All of the above models have previously been explored in terms of a single network, but human communities are usually interconnected, not isolated. Because opinions propagate not
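The NCO update rule itself is compact: each node adopts the majority opinion of itself plus its neighbors. The sketch below runs it on an Erdős-Rényi-style random graph (size, mean degree, and initial minority fraction are arbitrary choices, not the paper's setup) and shows the minority surviving in the steady state:

```python
import random

def nco_step(opinions, adj):
    """Each node adopts the majority opinion of itself plus its neighbors;
    ties keep the current opinion (one convention; variants exist)."""
    new = []
    for i, op in enumerate(opinions):
        votes = op + sum(opinions[j] for j in adj[i])
        n = 1 + len(adj[i])
        new.append(1 if votes * 2 > n else 0 if votes * 2 < n else op)
    return new

def run_nco(n=500, k=4, f_plus=0.4, seed=3):
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for _ in range(n * k // 2):              # random graph, mean degree ~k
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    ops = [1 if rng.random() < f_plus else 0 for _ in range(n)]
    for _ in range(100):                     # iterate toward a stable state
        nxt = nco_step(ops, adj)
        if nxt == ops:
            break
        ops = nxt
    return sum(ops) / n

frac = run_nco()
print(round(frac, 3))  # both opinions coexist in the final state
```

Starting the minority above the threshold reported for random graphs leaves a stable minority cluster, the qualitative signature of the NCO model; the W and ICO variants modify only the vote-counting step.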
NASA Astrophysics Data System (ADS)
Chen, F.; Barlage, M. J.; Tewari, M.; Rasmussen, R.; Bao, Y.; Jin, J.; Lettenmaier, D. P.; Livneh, B.; Lin, C.; Miguez-Macho, G.; Niu, G.; Wen, L.; Yang, Z.
2011-12-01
The timing and amount of spring snowmelt runoff in mountainous regions are critical for water resources and management. Correctly capturing snow-atmosphere interactions (through albedo and surface energy partitioning) is also important for weather and climate models. This study developed a unique, integrated data set including one year (2007-2008) of snow water equivalent (SWE) observations from 112 SNOTEL sites in the Colorado Headwaters region, 2004-2008 observations (surface heat fluxes, radiation budgets, soil temperature and moisture) from two AmeriFlux sites (Niwot Ridge and GLEES), MODIS snow cover, and river discharge. These observations were used to evaluate the ability of six widely used land-surface/snow models (Noah, Noah-MP, VIC, CLM, SAST, and LEAF-2) to simulate the seasonal evolution of snowpacks in the central Rockies. The overarching goals of this community undertaking are to: 1) understand the key processes controlling the evolution of snowpack in this complex-terrain, forested region by analyzing field data and the various components of snow physics in these models, and 2) improve snowpack modeling in weather and climate models. This comprehensive data set allowed us to address issues that had not been possible in previous snow-model intercomparison investigations (e.g., SnowMIPs). For instance, the models displayed a large disparity in treating radiation and turbulence processes within vegetation canopies. Some models with an overly simplified tree-canopy treatment needed to raise snow albedo to help retain snow on the ground during the melting phase. However, comparing modeled radiation and heat fluxes to long-term observations revealed that too-high albedo reduced the solar energy absorbed by the forested surface by 75% and resulted in too little surface sensible heat and longwave radiation returned to the atmosphere, which could be a crucial deficiency for coupled weather and climate models. Large differences were found in the SWE simulated by the six LSMs
Modeling Uncertainty and Its Implications in Complex Interdependent Networks
2016-04-30
Anita Raja, Professor, The Cooper Union; Mohammad Rashedul Hasan, Assistant Professor of Practice, UNL; Robert Flowe, Office of Acquisition... Panel 13. Setting Requirements and Managing Risk in Complex, Networked Projects. Thursday... Army for Acquisition, Logistics and Technology. Acquisition in a World of Joint Capabilities: Methods for Understanding Cross-Organizational Network
Bayesian Mixed-Membership Models of Complex and Evolving Networks
2006-12-01
K is the number of mixture components, (μ_{1:K}, Σ_{1:K}) are the corresponding mean vectors and variance-covariance matrices, and Σ_k = σ_k² I ... introduce a variant of the K-means algorithm that minimizes a non-standard scoring function ... clusters. Recall that the K-means unsupervised clustering algorithm searches for K means m_{1:K} that minimize MSE = (1/N) Σ_{k=1}^{K} Σ_{n=1}^{N} I(
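The excerpt above sketches the K-means objective. A minimal numpy version (an illustrative sketch, not Airoldi's code) makes the scoring function and the standard Lloyd iteration concrete:

```python
import numpy as np

def kmeans_mse(X, centers, labels):
    """Mean squared error of a K-means assignment:
    MSE = (1/N) * sum_n ||x_n - m_{label(n)}||^2."""
    diffs = X - centers[labels]
    return float(np.mean(np.sum(diffs**2, axis=1)))

def kmeans(X, K, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each point to the nearest mean,
    then recompute each mean from its assigned points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(iters):
        # squared distances, shape (N, K); argmin gives the assignment
        d = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels
```
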
Mathematical model and software complex for computer simulation of field emission electron sources
Nikiforov, Konstantin
2015-03-10
The software complex, developed in MATLAB, allows modelling of the behaviour of diode and triode structures based on field emission electron sources with complex sub-micron geometry, computing their volt-ampere characteristics, and calculating the electric field distribution, for educational and research needs. The goal of this paper is to describe the physical-mathematical model, calculation methods, and algorithms the software complex is based on, to demonstrate the principles of its operation, and to show results of its work. To help users get to know the complex, a demo version with a graphical user interface is presented.
Quantum mechanics can reduce the complexity of classical models.
Gu, Mile; Wiesner, Karoline; Rieper, Elisabeth; Vedral, Vlatko
2012-03-27
Mathematical models are an essential component of quantitative science. They generate predictions about the future based on information available in the present. In the spirit of "simpler is better," should two models make identical predictions, the one that requires less input is preferred. Yet, for almost all stochastic processes, even the provably optimal classical models waste information. The amount of input information they demand exceeds the amount of predictive information they output. Here we show how to systematically construct quantum models that break this classical bound, and that the system of minimal entropy that simulates such processes must necessarily feature quantum dynamics. This indicates that many observed phenomena could be significantly simpler than classically possible should quantum effects be involved.
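The "input information" a model demands can be made concrete as the Shannon entropy of its internal state distribution. A toy calculation (illustrative only, not the paper's construction):

```python
import math

def shannon_entropy(p):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# A classical model's storage cost is the entropy of its internal states.
# Toy example: a two-state model whose states occur with equal frequency
# stores 1 bit, even when the predictive information it outputs is lower;
# the quantum constructions in the paper shrink this cost by encoding
# the states as non-orthogonal quantum states.
classical_cost = shannon_entropy([0.5, 0.5])
```
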
Ensemble Learning of QTL Models Improves Prediction of Complex Traits
Bian, Yang; Holland, James B.
2015-01-01
Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383
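The thinning-and-aggregating idea can be sketched schematically: fit a model on each "thinned" offset subset of markers, then average the predictions. This is a hypothetical simplification (least-squares submodels and offset thinning are our own choices, not the authors' algorithm):

```python
import numpy as np

def tagging_predict(X_train, y_train, X_test, thin=5):
    """Ensemble of thinned-marker models: each submodel sees only every
    `thin`-th marker (one submodel per offset), reducing collinearity
    among adjacent markers; predictions are averaged across submodels."""
    preds = []
    for offset in range(thin):
        cols = np.arange(offset, X_train.shape[1], thin)
        Xa = np.column_stack([np.ones(len(X_train)), X_train[:, cols]])
        Xb = np.column_stack([np.ones(len(X_test)), X_test[:, cols]])
        beta, *_ = np.linalg.lstsq(Xa, y_train, rcond=None)
        preds.append(Xb @ beta)
    return np.mean(preds, axis=0)
```
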
Towards a macroscopic modeling of the complexity in traffic flow.
Rosswog, Stephan; Wagner, Peter
2002-03-01
Based on the assumption of a safe velocity U(e)(rho) depending on the vehicle density rho, a macroscopic model for traffic flow is presented that extends the Kühne-Kerner-Konhäuser model by an interaction term containing the second derivative of U(e)(rho). We explore two qualitatively different forms of U(e): a conventional Fermi-type function and, motivated by recent experimental findings, a function that exhibits a plateau at intermediate densities, i.e., in this density regime the exact distance to the car ahead is only of minor importance. To solve the fluid-like equations a Lagrangian particle scheme is developed. The suggested model shows a much richer dynamical behavior than the usual fluid-like models. A large variety of encountered effects is known from traffic observations, many of which are usually assigned to the elusive state of "synchronized flow." Furthermore, the model displays alternating regimes of stability and instability at intermediate densities. It can explain data scatter in the fundamental diagram and complicated jam patterns. Within this model, a consistent interpretation of the emergence of very different traffic phenomena is offered: they are determined by the velocity relaxation time, i.e., the time needed to relax towards U(e)(rho). This relaxation time is a measure of the average acceleration capability and can be attributed to the composition (e.g., the percentage of trucks) of the traffic flow.
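The two equilibrium-velocity shapes discussed above can be sketched as follows; the free-flow speed, critical densities, and widths are illustrative numbers, not the paper's calibration:

```python
import math

V0 = 120.0  # free-flow speed (km/h); illustrative value only

def u_fermi(rho, rho_c=35.0, width=10.0):
    """Conventional Fermi-type safe velocity U_e(rho): a smooth
    monotone decrease from V0 toward 0 around the critical density."""
    return V0 / (1.0 + math.exp((rho - rho_c) / width))

def u_plateau(rho):
    """Variant with a plateau at intermediate densities, built as a
    superposition of two Fermi steps: between the step densities the
    exact headway matters little, as the cited experiments suggest."""
    return 0.5 * V0 / (1.0 + math.exp((rho - 20.0) / 5.0)) \
         + 0.5 * V0 / (1.0 + math.exp((rho - 80.0) / 5.0))
```
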
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2014-01-01
This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.
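The mechanics of such a Bayesian Belief Network can be illustrated with a minimal two-node fragment: a latent factor (pilot over-reliance on automation, O) influencing an active failure (automation-related error, E). All probabilities below are hypothetical placeholders, not the SME-elicited data from the model:

```python
P_O = 0.2                                  # prior P(O = true); hypothetical
P_E_given_O = {True: 0.30, False: 0.05}    # CPT P(E = true | O); hypothetical

def p_error():
    """Marginal P(E) by enumerating over the parent node O."""
    return P_O * P_E_given_O[True] + (1 - P_O) * P_E_given_O[False]

def p_overreliance_given_error():
    """Diagnostic query via Bayes' rule: P(O | E)."""
    return P_O * P_E_given_O[True] / p_error()
```

Inserting a mitigation technology would amount to editing the conditional probability table and re-running such queries to measure the change in risk.
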
Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.
Burrows, Wesley; Doherty, John
2015-01-01
The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run-times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to calibration and the exploration of posterior predictive uncertainty, of a complex model, that can overcome these problems in many modelling contexts. The methodology relies on conjunctive use of a simplified surrogate version of the complex model in combination with the complex model itself. The methodology employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used for calculation of the partial derivatives that collectively comprise the Jacobian matrix. Meanwhile, testing of parameter upgrades and the making of predictions is done by the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity.
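The division of labour described above (surrogate supplies derivatives, complex model vets upgrades) can be sketched as a damped Gauss-Newton loop. This is a toy stand-in under stated assumptions: the "complex model" here is a cheap exponential, and the surrogate role is played by finite differences of the same function, whereas in practice the Jacobian would come from a genuinely simplified model:

```python
import numpy as np

def complex_model(p, x):
    """Stand-in for the expensive 'complex' model (a toy exponential;
    the real thing would be a long-running groundwater simulation)."""
    return np.exp(-p[0] * x) + p[1]

def surrogate_jacobian(p, x, h=1e-6):
    """Central-difference Jacobian. In the paper's scheme these partial
    derivatives come from the cheap surrogate model."""
    p = np.asarray(p, float)
    J = np.empty((len(x), len(p)))
    for j in range(len(p)):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (complex_model(p + dp, x) - complex_model(p - dp, x)) / (2 * h)
    return J

def calibrate(p0, x, y_obs, iters=20):
    """Damped Gauss-Newton: the surrogate supplies J, while the complex
    model computes residuals and vets each proposed parameter upgrade."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = y_obs - complex_model(p, x)
        J = surrogate_jacobian(p, x)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        for _ in range(20):                      # crude step-halving damping
            if np.sum((y_obs - complex_model(p + step, x))**2) < np.sum(r**2):
                p = p + step
                break
            step = step / 2.0
    return p
```
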
Critical O (N ) models in the complex field plane
NASA Astrophysics Data System (ADS)
Litim, Daniel F.; Marchais, Edouard
2017-01-01
Local and global scaling solutions for O (N ) symmetric scalar field theories are studied in the complexified field plane with the help of the renormalization group. Using expansions of the effective action about small, large, and purely imaginary fields, we obtain and solve exact recursion relations for all couplings and determine the 3 d Wilson-Fisher fixed point analytically. For all O (N ) universality classes, we further establish that Wilson-Fisher fixed point solutions display singularities in the complex field plane, which dictate the radius of convergence for real-field expansions of the effective action. At infinite N , we find closed expressions for the convergence-limiting singularities and prove that local expansions of the effective action are powerful enough to uniquely determine the global Wilson-Fisher fixed point for any value of the fields. Implications of our findings for interacting fixed points in more complicated theories are indicated.
Modelling excitonic-energy transfer in light-harvesting complexes
NASA Astrophysics Data System (ADS)
Kramer, Tobias; Kreisbeck, Christoph
2014-01-01
The theoretical and experimental study of energy transfer in photosynthesis has revealed an interesting transport regime, which lies at the borderline between classical transport dynamics and quantum-mechanical interference effects. Dissipation is caused by the coupling of electronic degrees of freedom to vibrational modes and leads to a directional energy transfer from the antenna complex to the target reaction-center. The dissipative driving is robust and does not rely on fine-tuning of specific vibrational modes. For the parameter regime encountered in the biological systems new theoretical tools are required to directly compare theoretical results with experimental spectroscopy data. The calculations require to utilize massively parallel graphics processor units (GPUs) for efficient and exact computations.
Research Strategy for Modeling the Complexities of Turbine Heat Transfer
NASA Technical Reports Server (NTRS)
Simoneau, Robert J.
1996-01-01
The subject of this paper is a NASA research program, known as the Coolant Flow Management Program, which focuses on the interaction between the internal coolant channel and the external film cooling of a turbine blade and/or vane in an aircraft gas turbine engine. The turbine gas path is really a very complex flow field. The combination of strong pressure gradients, abrupt geometry changes and intersecting surfaces, viscous forces, rotation, and unsteady blade/vane interactions all combine to offer a formidable challenge. To this, in the high pressure turbine, we add the necessity of film cooling. The ultimate goal of the turbine designer is to maintain or increase the high level of turbine performance and at the same time reduce the amount of coolant flow needed to achieve this end. Simply stated, coolant flow is a penalty on the cycle and reduces engine thermal efficiency. Accordingly, understanding the flow field and heat transfer associated with the coolant flow is a priority goal. It is important to understand both the film cooling and the internal coolant flow, particularly their interaction. Thus, the motivation for the Coolant Flow Management Program. The paper will begin with a brief discussion of the management and research strategy, will then proceed to discuss the current attack from the internal coolant side, and will conclude by looking at the film cooling effort - at all times keeping sight of the primary goal the interaction between the two. One of the themes of this paper is that complex heat transfer problems of this nature cannot be attacked by single researchers or even groups of researchers, each working alone. It truly needs the combined efforts of a well-coordinated team to make an impact. It is important to note that this is a government/industry/university team effort.
Turing Systems: A General Model for Complex Patterns in Nature
NASA Astrophysics Data System (ADS)
Barrio, R. A.
More than half a century ago Alan Turing showed that a system of nonlinear reaction-diffusion equations could produce spatial patterns that are stationary and robust, a phenomenon known as "diffusion driven instability". This remarkable fact was largely ignored for twenty years. However, in the last decade, Turing systems have been a matter of intense and active research, because they are suitable to model a wide variety of phenomena found in Nature, ranging from Turing's original idea of describing morphogenesis from an egg, and applications to the colouring of skins of animals, to the physics of chemical reactors and catalyzers, the physiology of the heart, semiconductor devices, and even to geological formations. In this paper I review the main properties of the Turing instability using a generic reaction-diffusion model, and I give examples of recent applications of Turing models to different problems of pattern formation.
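A reaction-diffusion system of the kind reviewed here can be integrated in a few lines. The following 1-D Gray-Scott model is a common illustrative choice for producing Turing-like patterns; the parameters are ours, not the paper's generic model:

```python
import numpy as np

def gray_scott_1d(n=200, steps=2000, Du=0.16, Dv=0.08, F=0.04, k=0.06, dt=1.0):
    """Explicit-Euler integration of the 1-D Gray-Scott reaction-diffusion
    system with periodic boundaries (illustrative parameter values)."""
    u = np.ones(n)
    v = np.zeros(n)
    u[n//2 - 5 : n//2 + 5] = 0.50      # local perturbation seeds the pattern
    v[n//2 - 5 : n//2 + 5] = 0.25
    for _ in range(steps):
        lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u   # periodic Laplacian
        lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
        uvv = u * v * v
        u = u + dt * (Du * lap_u - uvv + F * (1 - u))
        v = v + dt * (Dv * lap_v + uvv - (F + k) * v)
    return u, v
```

The key Turing ingredient is visible in the parameters: the inhibitor-like species diffuses at a different rate than the activator-like one, so a spatially uniform state can be destabilized by diffusion alone.
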
Hummel, Eva; Hoffmann, Ingrid
2016-01-01
The aim of this article is to demonstrate the complexity of nutritional behavior and to increase understanding of this complex phenomenon. We developed a cause-effect model based on current literature, expert consultation, and instruments dealing with complexity. It presents factors from all dimensions of nutrition and their direct causal relationships with specification of direction, strength, and type. Including the interplay of all relationships, the model reveals cause-effect chains, feedback loops, multicausalities, and side effects. Analyses based on the model can further enhance understanding of nutritional behavior and help identify starting points for measures to modify food consumption.
NASA Technical Reports Server (NTRS)
Hops, J. M.; Sherif, J. S.
1994-01-01
A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of expected software maintenance cost, long before software is delivered to users or customers. It has been estimated that, on the average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point to aid in alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
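One generic way to build a relative complexity metric of the kind described (a sketch of the idea, not the article's parsimonious model) is to standardize each raw metric across modules and sum the z-scores, then rank modules by the combined score:

```python
import statistics

def relative_complexity(modules):
    """modules: {name: {metric: value}}. Returns module names ranked by
    the sum of per-metric z-scores, most maintenance-prone first."""
    metric_names = next(iter(modules.values())).keys()
    stats = {}
    for m in metric_names:
        vals = [modules[mod][m] for mod in modules]
        # fall back to 1.0 when a metric has zero spread
        stats[m] = (statistics.mean(vals), statistics.pstdev(vals) or 1.0)
    scores = {
        mod: sum((vals[m] - stats[m][0]) / stats[m][1] for m in metric_names)
        for mod, vals in modules.items()
    }
    return sorted(scores, key=scores.get, reverse=True)
```
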
Estimating the Optimal Spatial Complexity of a Water Quality Model Using Multi-Criteria Methods
NASA Astrophysics Data System (ADS)
Meixner, T.
2002-12-01
Discretizing the landscape into multiple smaller units appears to be a necessary step for improving the performance of water quality models. However, there is a need for adequate falsification methods to discern between discretization that improves model performance and discretization that merely adds to model complexity. Multi-criteria optimization methods promise a way to increase the power of model discrimination and a path to increasing our ability to differentiate between good and bad model discretization methods. This study focuses on the optimal level of spatial discretization of a water quality model, the Alpine Hydrochemical Model, for the Emerald Lake watershed in Sequoia National Park, California. The five models of the watershed differ in their degree of simplification of the real watershed. The simplest model is just a lumped model of the entire watershed. The most complex model takes the five main soil groups in the watershed and represents each with a modeling subunit, as well as having subunits for rock and talus areas in the watershed. Each of these models was calibrated using stream discharge and three chemical fluxes jointly as optimization criteria with a Pareto optimization routine, MOCOM-UA. After optimization the five models were compared for their performance using model criteria not used in calibration, the variability of model parameter estimates, and comparison to the mean of observations as a predictor of stream chemical composition. Based on these comparisons, the results indicate that the model with only two terrestrial subunits had the optimal level of model complexity. This result shows that increasing model complexity, even using detailed site-specific data, is not always rewarded with improved model performance. Additionally, this result indicates that the most important geographic element for modeling water quality in alpine watersheds is accurately delineating the boundary between areas of rock and areas containing either
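The core operation behind multi-criteria schemes such as MOCOM-UA is the Pareto filter: keep only parameter sets not dominated on every criterion. A minimal sketch (all criteria minimized; not the actual MOCOM-UA code):

```python
def pareto_front(points):
    """Return the nondominated points, all criteria to be minimized.
    A point q dominates p if q <= p on every criterion and q < p on at
    least one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(qj <= pj for qj, pj in zip(q, p)) and
            any(qj < pj for qj, pj in zip(q, p))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front
```

Comparing the size and spread of the front for each candidate discretization is one concrete way such methods discriminate between model structures.
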
A novel approach for identifying causal models of complex diseases from family data.
Park, Leeyoung; Kim, Ju H
2015-04-01
Causal models including genetic factors are important for understanding the presentation mechanisms of complex diseases. Familial aggregation and segregation analyses based on polygenic threshold models have been the primary approach to fitting genetic models to the family data of complex diseases. In the current study, an advanced approach to obtaining appropriate causal models for complex diseases based on the sufficient component cause (SCC) model involving combinations of traditional genetics principles was proposed. The probabilities for the entire population, i.e., normal-normal, normal-disease, and disease-disease, were considered for each model for the appropriate handling of common complex diseases. The causal model in the current study included the genetic effects from single genes involving epistasis, complementary gene interactions, gene-environment interactions, and environmental effects. Bayesian inference using a Markov chain Monte Carlo (MCMC) algorithm was used to assess the proportions of each component for a given population lifetime incidence. This approach is flexible, allowing both common and rare variants within a gene and across multiple genes. An application to schizophrenia data confirmed the complexity of the causal factors. An analysis of diabetes data demonstrated that environmental factors and gene-environment interactions are the main causal factors for type II diabetes. The proposed method is effective and useful for identifying causal models, which can accelerate the development of efficient strategies for identifying causal factors of complex diseases.
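The MCMC machinery can be illustrated on the simplest possible case: a Metropolis sampler for the posterior of a single causal-component proportion under a binomial likelihood and uniform prior. This is a toy under stated assumptions; the paper's SCC model has many interacting components:

```python
import math
import random

def metropolis_proportion(k, n, steps=20000, seed=0):
    """Metropolis sampler for the posterior of a proportion p, given k
    affected out of n (binomial likelihood, uniform prior)."""
    rng = random.Random(seed)

    def log_post(p):
        if not 0.0 < p < 1.0:
            return float("-inf")
        return k * math.log(p) + (n - k) * math.log(1.0 - p)

    p, samples = 0.5, []
    for _ in range(steps):
        q = p + rng.gauss(0.0, 0.05)          # random-walk proposal
        # accept with probability min(1, post(q)/post(p))
        if rng.random() < math.exp(min(0.0, log_post(q) - log_post(p))):
            p = q
        samples.append(p)
    return samples
```
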
Research on the optimal selection method of image complexity assessment model index parameter
NASA Astrophysics Data System (ADS)
Zhu, Yong; Duan, Jin; Qian, Xiaofei; Xiao, Bo
2015-10-01
Target recognition is widely used in the national economy, space technology, national defense, and other fields. There is a great difference between the difficulty of target recognition and that of target extraction. Image complexity evaluates the difficulty of extracting the target from the background, and it can be used as a prior evaluation index of a target recognition algorithm's effectiveness. From the perspective of measuring target and background characteristics, the paper describes image complexity metric parameters through quantitative, precise mathematical relationships. To address the collinearity among the measurement parameters, the image complexity metric parameters are clustered with the grey relational method. This enables extraction and selection of the metric parameters, improves the reliability and validity of the image complexity description and representation, and optimizes the image complexity assessment model. Experimental results demonstrate that when grey system theory is applied to image complexity analysis, the complexity of target-characteristic images can be measured more accurately and effectively.
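The grey relational method referred to above rests on the standard grey relational grade: each candidate sequence is scored against a reference sequence via the deviation-based relational coefficient. A minimal sketch of that textbook formula (not the authors' full clustering pipeline):

```python
def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate sequence against the
    reference sequence; rho = 0.5 is the conventional distinguishing
    coefficient."""
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    dmin = min(min(d) for d in deltas)
    dmax = max(max(d) for d in deltas)
    if dmax == 0:                    # every candidate equals the reference
        return [1.0] * len(candidates)
    return [
        sum((dmin + rho * dmax) / (di + rho * dmax) for di in d) / len(d)
        for d in deltas
    ]
```

Parameters with high mutual relational grades are near-collinear and can be merged or pruned, which is the clustering step the abstract describes.
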
A Deep Stochastic Model for Detecting Community in Complex Networks
NASA Astrophysics Data System (ADS)
Fu, Jingcheng; Wu, Jianliang
2017-01-01
Discovering community structures is an important step to understanding the structure and dynamics of real-world networks in social science, biology and technology. In this paper, we develop a deep stochastic model based on non-negative matrix factorization to identify communities, in which there are two sets of parameters. One is the community membership matrix, whose row elements give the probabilities that the corresponding node belongs to each of the given number of communities in our model; the other is the community-community connection matrix, whose element in the i-th row and j-th column represents the probability of there being an edge between a randomly chosen node from the i-th community and a randomly chosen node from the j-th community. The parameters can be evaluated by an efficient updating rule, and its convergence can be guaranteed. The community-community connection matrix in our model is more precise than that in traditional non-negative matrix factorization methods. Furthermore, the method called symmetric non-negative matrix factorization is a special case of our model. Finally, experiments on both synthetic and real-world network data demonstrate that our algorithm is highly effective in detecting communities.
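The two-parameter structure described above corresponds to a tri-factorization A ≈ U B Uᵀ, with U the node-community membership matrix and B the community-community connection matrix. A generic multiplicative-update sketch in that spirit (not the authors' exact updating rule):

```python
import numpy as np

def fit_communities(A, K, iters=300, seed=0):
    """Multiplicative-update fit of A ≈ U B U^T for a symmetric
    non-negative adjacency matrix A: U (n x K) holds membership
    weights, B (K x K) the community-community connections."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    U = rng.random((n, K)) + 0.1
    B = rng.random((K, K)) + 0.1
    B = (B + B.T) / 2.0                      # keep B symmetric
    eps = 1e-12
    for _ in range(iters):
        UB = U @ B
        U *= (A @ UB) / (UB @ (U.T @ UB) + eps)
        UtU = U.T @ U
        B *= (U.T @ A @ U) / (UtU @ B @ UtU + eps)
    return U, B

def reconstruction_error(A, U, B):
    """Frobenius norm of the residual A - U B U^T."""
    return float(np.linalg.norm(A - U @ B @ U.T))
```

Setting B to the identity recovers plain symmetric NMF, which is the special case the abstract mentions.
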
A Model for Minimizing Numeric Function Generator Complexity and Delay
2007-12-01
model_Linear_NonUniform_Basic with the size of the number system (n) and the number of segments (mins). The author's MATLAB m-file segments.m returns the...
Measuring Learning Progressions Using Bayesian Modeling in Complex Assessments
ERIC Educational Resources Information Center
Rutstein, Daisy Wise
2012-01-01
This research examines issues regarding model estimation and robustness in the use of Bayesian Inference Networks (BINs) for measuring Learning Progressions (LPs). It provides background information on LPs and how they might be used in practice. Two simulation studies are performed, along with real data examples. The first study examines the case…
Information and complexity measures for hydrologic model evaluation
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...
Model-Based Assurance of Diagnostic Procedures for Complex Systems
2010-10-01
this knowledge for fault detection and isolation. In parallel developments, different communities have found value in analytic state-based models ... that are used for fault detection and isolation. Figure 2. The schematic of the Electrical Power System (EPS) in the Advanced Diagnostics and
The effects of numerical-model complexity and observation type on estimated porosity values
Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.
2015-01-01
The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model of the Salt Lake Valley, Utah (USA), is adapted for advective transport simulation, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration also is discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by the complex and simple models are broadly similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.
Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems
Williams, Richard A.; Timmis, Jon; Qwarnstrom, Eva E.
2016-01-01
Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414
ERIC Educational Resources Information Center
Kim, Young Rae
2013-01-01
A theoretical model of metacognition in complex modeling activities has been developed based on existing frameworks, by synthesizing the re-conceptualization of metacognition at multiple levels by looking at the three sources that trigger metacognition. Using the theoretical model as a framework, this study was designed to explore how students'…
Microstructure-based modelling of multiphase materials and complex structures
NASA Astrophysics Data System (ADS)
Werner, Ewald; Wesenjak, Robert; Fillafer, Alexander; Meier, Felix; Krempaszky, Christian
2016-09-01
Micromechanical approaches are frequently employed to monitor local and global field quantities and their evolution under varying mechanical and/or thermal loading scenarios. In this contribution, an overview is given of important methods that are currently used to gain insight into the deformation and failure behaviour of multiphase materials and complex structures. First, techniques to represent material microstructures are reviewed. It is common either to digitise images of real microstructures or to generate virtual 2D or 3D microstructures using automated procedures (e.g. Voronoï tessellation) for grain generation and colouring algorithms for phase assignment. While the former method allows one to capture exactly all morphological and topological features of the microstructure at hand, the latter opens up the possibility of parametric studies of the influence of individual microstructure features on the local and global stress and strain response. Several applications of these approaches are presented, comprising low- and high-strain behaviour of multiphase steels, failure and fracture behaviour of multiphase materials, and the evolution of surface roughening of the aluminium top metallisation of semiconductor devices.
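The virtual-microstructure route (Voronoï tessellation for grains, then "colouring" for phase assignment) can be sketched in a few lines, here as a nearest-seed pixel classification on a small grid (a minimal illustrative sketch, not a production microstructure generator):

```python
import random

def voronoi_microstructure(nx, ny, n_grains, n_phases, seed=0):
    """Generate a 2-D synthetic microstructure: each pixel belongs to the
    grain of its nearest seed point (Voronoi cell), and each grain is
    randomly assigned one of n_phases."""
    rng = random.Random(seed)
    seeds = [(rng.uniform(0, nx), rng.uniform(0, ny)) for _ in range(n_grains)]
    phase_of_grain = [rng.randrange(n_phases) for _ in range(n_grains)]
    grid = [[0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            g = min(range(n_grains),
                    key=lambda i: (x - seeds[i][0])**2 + (y - seeds[i][1])**2)
            grid[y][x] = phase_of_grain[g]
    return grid
```

Varying seed density, phase fractions, or the colouring rule is exactly the kind of parametric study the abstract attributes to the virtual approach.
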
Model of human collective decision-making in complex environments
NASA Astrophysics Data System (ADS)
Carbone, Giuseppe; Giannoccaro, Ilaria
2015-12-01
A continuous-time Markov process is proposed to analyze how a group of humans solves a complex task, consisting of the search for the optimal set of decisions on a fitness landscape. Individuals change their opinions driven by two different forces: (i) self-interest, which pushes them to increase their own fitness values, and (ii) social interactions, which push individuals to reduce the diversity of their opinions in order to reach consensus. Results show that the performance of the group is strongly affected by the strength of social interactions and by the level of knowledge of the individuals. Increasing the strength of social interactions improves the performance of the team. However, too-strong social interactions slow down the search for the optimal solution and worsen the performance of the group. In particular, we find that the threshold value of the social interaction strength which leads to the emergence of a superior intelligence of the group is just the critical threshold at which consensus among the members sets in. We also prove that a moderate level of knowledge is already enough to guarantee high performance of the group in making decisions.
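The interplay of the two forces described above can be illustrated with a toy discrete-time simulation. This is a hedged sketch, not the paper's continuous-time Markov process: the fitness landscape is a random lookup table, and the acceptance rule combining fitness gain (self-interest) with alignment to the majority opinion (social interaction of strength `social`) is an assumption made for illustration.

```python
import numpy as np

def simulate_group(n_agents=10, n_bits=8, social=0.5, steps=2000, seed=1):
    """Toy group search on a random fitness landscape.

    Each step, a random agent flips one decision bit and keeps the flip if
    the fitness gain plus a social-alignment bonus (positive when the flip
    moves toward the majority opinion on that bit) is positive.
    """
    rng = np.random.default_rng(seed)
    fitness = rng.uniform(0, 1, size=2 ** n_bits)  # one value per configuration

    def f(x):  # configuration (bit array) -> fitness via integer encoding
        return fitness[int("".join(map(str, x)), 2)]

    opinions = rng.integers(0, 2, size=(n_agents, n_bits))
    for _ in range(steps):
        i = rng.integers(n_agents)          # pick an agent
        k = rng.integers(n_bits)            # pick a decision bit
        trial = opinions[i].copy()
        trial[k] ^= 1
        gain = f(trial) - f(opinions[i])    # self-interest force
        majority = opinions[:, k].mean()    # social field on bit k
        align = (2 * majority - 1) * (2 * trial[k] - 1)  # +1 if with majority
        if gain + social * align > 0:
            opinions[i] = trial
    return max(f(x) for x in opinions)      # best fitness found in the group

best = simulate_group()
```

Sweeping `social` in such a sketch reproduces the qualitative trade-off the abstract reports: some social pull speeds consensus on good solutions, while too much locks the group onto mediocre ones.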
A synthetic model for the oxygen-evolving complex in Sr(2+)-containing photosystem II.
Chen, Changhui; Zhang, Chunxi; Dong, Hongxing; Zhao, Jingquan
2014-08-25
A novel heterometallic MnSr complex containing the Mn3SrO4 cuboidal moiety and all types of μ-O(2-) moieties observed in the oxygen-evolving complex (OEC) in Sr(2+)-containing photosystem II (PSII) has been synthesized and characterized, which provides a new synthetic model of the OEC.
Supervision of Group Work: A Model to Increase Supervisee Cognitive Complexity
ERIC Educational Resources Information Center
Granello, Darcy Haag; Underfer-Babalis, Jean
2004-01-01
This article describes a model for supervisors of group counselors to use to promote cognitive complexity in their supervisees. Counselor cognitive complexity has been linked to many positive counseling skills, including greater flexibility, empathy, confidence, and client conceptualization. Bloom's Taxonomy of Educational Objectives provides a…
The zebrafish as a model for complex tissue regeneration.
Gemberling, Matthew; Bailey, Travis J; Hyde, David R; Poss, Kenneth D
2013-11-01
For centuries, philosophers and scientists have been fascinated by the principles and implications of regeneration in lower vertebrate species. Two features have made zebrafish an informative model system for determining mechanisms of regenerative events. First, they are highly regenerative, able to regrow amputated fins, as well as a lesioned brain, retina, spinal cord, heart, and other tissues. Second, they are amenable to both forward and reverse genetic approaches, with a research toolset regularly updated by an expanding community of zebrafish researchers. Zebrafish studies have helped identify new mechanistic underpinnings of regeneration in multiple tissues and, in some cases, have served as a guide for contemplating regenerative strategies in mammals. Here, we review the recent history of zebrafish as a genetic model system for understanding how and why tissue regeneration occurs.
Powerful decomposition of complex traits in a diploid model
Hallin, Johan; Märtens, Kaspar; Young, Alexander I.; Zackrisson, Martin; Salinas, Francisco; Parts, Leopold; Warringer, Jonas; Liti, Gianni
2016-01-01
Explaining trait differences between individuals is a core and challenging aim of life sciences. Here, we introduce a powerful framework for complete decomposition of trait variation into its underlying genetic causes in diploid model organisms. We sequence and systematically pair the recombinant gametes of two intercrossed natural genomes into an array of diploid hybrids with fully assembled and phased genomes, termed Phased Outbred Lines (POLs). We demonstrate the capacity of this approach by partitioning fitness traits of 6,642 Saccharomyces cerevisiae POLs across many environments, achieving near complete trait heritability and precisely estimating additive (73%), dominance (10%), second (7%) and third (1.7%) order epistasis components. We map quantitative trait loci (QTLs) and find nonadditive QTLs to outnumber (3:1) additive loci, dominant contributions to heterosis to outnumber overdominant, and extensive pleiotropy. The POL framework offers the most complete decomposition of diploid traits to date and can be adapted to most model organisms. PMID:27804950
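The additive/dominance partition reported above can be illustrated with a toy regression-based variance decomposition on simulated diploid genotypes. This is a minimal sketch, not the POL framework: the single-locus effect sizes, the additive (-1/0/+1) and dominance (heterozygote-indicator) encodings, and the noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
g = rng.integers(0, 3, size=n)           # diploid genotype: 0, 1 or 2 alt alleles
a_code = g - 1.0                         # additive coding (-1, 0, +1)
d_code = (g == 1).astype(float)          # dominance coding (heterozygote indicator)
# simulated trait: additive effect 1.0, dominance effect 0.5, unit noise
y = 1.0 * a_code + 0.5 * d_code + rng.normal(0.0, 1.0, size=n)

# least-squares fit of the trait on additive + dominance predictors
X = np.column_stack([np.ones(n), a_code, d_code])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

var_a = (beta[1] * a_code).var()         # additive variance component
var_d = (beta[2] * d_code).var()         # dominance variance component
h2 = (var_a + var_d) / y.var()           # broad-sense heritability (this locus)
```

The POL design extends this idea genome-wide, with epistasis terms for second- and higher-order interactions; the sketch only shows the basic partition of trait variance into named components.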
Learning Preference Models for Autonomous Mobile Robots in Complex Domains
2010-12-01
…utilize a terrain assessment approach named GESTALT [120, 121, 122], derived from an earlier approach named MORPHIN [123, 124, 125]. These ap… constructed to provide good empirical performance. For instance, the MORPHIN [123, 124, 125] and GESTALT [120, 121, 122] systems simply assume direct… …the manual construction of preference models still results in a parameter tuning problem. For example, the MORPHIN [123, 124, 125] and GESTALT [120, 121…
NREL's System Advisor Model Simplifies Complex Energy Analysis (Fact Sheet)
Not Available
2015-01-01
NREL has developed a tool -- the System Advisor Model (SAM) -- that can help decision makers analyze cost, performance, and financing of any size grid-connected solar, wind, or geothermal power project. Manufacturers, engineering and consulting firms, research and development firms, utilities, developers, venture capital firms, and international organizations use SAM for end-to-end analysis that helps determine whether and how to make investments in renewable energy projects.
Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.
Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn
2015-10-01
Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of the seventeen objects is represented by more than 80 site-specific parameters, of which about 22 are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions, and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of
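The spirit of a stylised biosphere model can be conveyed with a single well-mixed compartment subject to first-order transfer processes. The sketch below is purely illustrative (the release rate and rate constants are invented, not SKB's site-specific parameters): a constant radionuclide release enters a soil compartment and is removed by radioactive decay plus leaching, so the inventory tends analytically to release_rate / (lam_decay + lam_loss).

```python
def stylised_biosphere(release_rate, lam_decay, lam_loss, years, dt=0.1):
    """Minimal stylised biosphere compartment model.

    A constant release (e.g. Bq/y) feeds one well-mixed compartment with
    first-order losses: radioactive decay (lam_decay, 1/y) and leaching
    (lam_loss, 1/y). Integrated by explicit Euler steps of dt years.
    """
    inventory = 0.0
    lam = lam_decay + lam_loss
    for _ in range(int(years / dt)):
        inventory += (release_rate - lam * inventory) * dt
    return inventory

# illustrative parameters: unit release, slow decay, faster leaching
steady = 1.0 / (0.01 + 0.05)                         # analytic steady state
inv = stylised_biosphere(1.0, 0.01, 0.05, years=200.0)
```

A bounding result from such a model (the steady-state inventory) is exactly the kind of sanity check the abstract describes for reviewing a landscape model with thousands of site-specific inputs.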
Using New Models to Analyze Complex Regularities of the World: Commentary on Musso et al. (2013)
ERIC Educational Resources Information Center
Nokelainen, Petri; Silander, Tomi
2014-01-01
This commentary to the recent article by Musso et al. (2013) discusses issues related to model fitting, comparison of classification accuracy of generative and discriminative models, and two (or more) cultures of data modeling. We start by questioning the extremely high classification accuracy with an empirical data from a complex domain. There is…
Simplifying the model of a complex heat-transfer system for solving the relay control problem
NASA Astrophysics Data System (ADS)
Shilin, A. A.; Bukreev, V. G.
2014-09-01
A method is proposed for approximating a high-dimensional model of a complex heat-transfer system with time delay by a nonlinear second-order differential equation. Modeling results are presented that confirm the adequacy of the nonlinear properties of the reduced and initial models and their correspondence to actual data from the controlled plant.
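The general idea of approximating a high-order plant by a second-order model can be sketched by fitting a reduced step response to a full one. The sketch below is an assumption-laden illustration, not the paper's method: the "full" plant is a linear cascade of four first-order lags (the paper's system is nonlinear with time delay), the reduced model is two equal lags with time constant T, and the fit is a crude 1-D grid search over T.

```python
import numpy as np

def step_response(taus, t):
    """Unit-step response of a cascade of first-order lags (explicit Euler)."""
    dt = t[1] - t[0]
    states = np.zeros(len(taus))
    out = np.empty_like(t)
    for j in range(len(t)):
        u = 1.0                              # unit step input
        for i, tau in enumerate(taus):
            states[i] += dt * (u - states[i]) / tau
            u = states[i]                    # output of lag i feeds lag i+1
        out[j] = u
    return out

t = np.arange(0.0, 40.0, 0.01)
full = step_response([1.0, 2.0, 3.0, 4.0], t)        # "high-order" plant

def reduced(T):                                      # second-order surrogate
    return step_response([T, T], t)

# crude grid search for the T that best matches the full response
Ts = np.linspace(1.0, 8.0, 71)
errs = [np.sum((reduced(T) - full) ** 2) for T in Ts]
T_best = float(Ts[int(np.argmin(errs))])
```

With the loss terms above, T_best lands near half the sum of the full plant's time constants, which is the familiar first-moment-matching result for cascade reductions; a practical reduction for relay control would add the nonlinearity and delay this sketch omits.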
Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention
ERIC Educational Resources Information Center
Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David
2016-01-01
Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…
Assessment of higher order turbulence models for complex two- and three-dimensional flowfields
NASA Technical Reports Server (NTRS)
Menter, Florian R.
1992-01-01
A numerical method is presented to solve the three-dimensional Navier-Stokes equations in combination with a full Reynolds-stress turbulence model. Computations will be shown for three complex flowfields. The results of the Reynolds-stress model will be compared with those predicted by two different versions of the k-omega model. It will be shown that an improved version of the k-omega model gives as accurate results as the Reynolds-stress model.
Modeling the evolution of complex conductivity during calcite precipitation on glass beads
NASA Astrophysics Data System (ADS)
Leroy, Philippe; Li, Shuai; Jougnot, Damien; Revil, André; Wu, Yuxin
2017-01-01